Published 1 December 2020

The Future of AI: People and Data at The Aesthetica Film Festival

Stories are important. They help us make sense of the world. In particular, when we construct ideas about the future, science fiction stories can help us understand how we feel about future technologies, opening our minds to the futures we want as well as those we don't.

On Friday 6 November, DC Labs held a live broadcast event and online panel, The Future of AI: People and Data, as part of the Aesthetica Film Festival. Chaired by Professor Peter Cowling, Professor of AI at Queen Mary University of London, panellists Dr Jenn Chubb, Rob Wilmot, Dr Amanda Rees and Dr Olivia Belton met online to discuss the themes emerging from a screening of four intriguing short films, each investigating possible futures for AI and challenging us to think about what it means to be human. In this blog, we highlight some of the issues raised by the panel and provide links to the films, so you can decide for yourself what role you see technology playing in the future and join the conversation about AI Futures.

We thought we would use AI to transcribe the recording of the panel. We tried tools from a number of providers, and the best results came from Microsoft Azure Speech-To-Text. However, 'best' is a relative term in this context, as the accuracy of the transcription was still quite poor; in the end we had to go through the whole recording manually to correct what the AI had done. This did bring some moments of levity, for instance in how the AI interpreted our names: ‘Jenn Chubb’ coming out as ‘Gemini Man’ had us all in stitches! All told, a sobering reminder of how far AI has yet to go!
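
For anyone curious about the transcription step, below is a minimal sketch of driving the service with Microsoft's Python Speech SDK (azure-cognitiveservices-speech). The subscription key, region and file name are placeholders rather than what we actually used, and this is just one reasonable way to set things up; long recordings like a panel session call for the SDK's continuous-recognition mode rather than a single recognise call.

# pip install azure-cognitiveservices-speech
import time
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and input file - substitute your own.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="panel_recording.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

transcript = []
done = False

def on_recognized(evt):
    # Each event carries one recognised stretch of speech as text.
    transcript.append(evt.result.text)

def on_stopped(evt):
    global done
    done = True

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(on_stopped)
recognizer.canceled.connect(on_stopped)

# Continuous recognition suits long audio such as a recorded panel.
recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()

print("\n".join(transcript))

Even with a setup like this, the output still needed a full manual pass to correct names and phrasing, so it is best treated as a first draft rather than a finished transcript.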

AI FUTURES SHORT FILMS

The panel chose four thought-provoking films: Vert, Bot, Forever and The Lonely Orbit.

Screenshot from Vert (UK, 2019). Dir. Kate Cox. With Nikki Amuka-Bird and Nick Frost.

First up is Vert, a short film in which Jeff and Emelia are given a virtual reality headset that shows one's "ideal self" as a wedding anniversary gift, and in using it discover a secret that could shift their relationship entirely. In Vert, AI senses your desires and creates them in a virtual world. We give some panel highlights in italics below:

How plausible and desirable would it be to have an AI that could sense our desires?

Is it plausible? I don't know and it's not relevant. Is it desirable? Now that's much, much, much, more interesting, isn’t it? [panellist Mandy]

Mandy described how it seemed evident in the film that whatever Jeff’s secret thoughts were, they were at some level apparently already known to his partner, Emelia. So whatever the AI was doing, it was only making more concrete what a human being already knew:   

Basically what an AI is able to do, or is being portrayed as doing, in that film is showing us a better version of ourselves. Showing us what we can ourselves do naturally.

Mandy compared this to having pets. If we think of it that way, then having an AI that senses our desires might well be desirable, especially if it teaches us to empathise better with each other:

The reason why we live with them is that they can sense our desires because they can sense how we feel and make us feel better. And that sense of the relationship between human and nonhuman, animal intelligence rather than artificial intelligence, is something that gives profound satisfaction - you know -  to human beings around the world.

Asked whether she could see a world where we have AIs instead of dogs and cats, Mandy described the prevalence of robot pets in care homes and therapy settings, and the benefits they bring to the elderly, those in care, and those living with illness and disability.

Could AI replace pets? I'm going to say AI can't, but robots could. Because of that touch importance; because of that need for embodiment.

Jenn agreed, and remarked how robot care pets and companions, once viewed as rather bleak, were now more readily accepted following COVID-19 “because frankly, a robot can't catch the virus.” Jenn went on to suggest that the film was almost trying to hold up a kind of mirror to humanity, and that instead of presenting AI as dystopian, it was “actually trying to move away from that kind of dystopian fiction, trying to show hope and acceptance - if you like - through technologies. It's quite a unique story and perhaps showing the sort of frailty of humans and their emotional realities”. She remarked that it could have had very adverse consequences for Jeff had Emelia not been as accepting: utopia and dystopia are subjective concepts. Olivia responded that the film is in fact neither dystopian nor utopian, but a mix of both, highlighting a tendency in discussions of science fiction to polarise AI narratives, often to increase the appeal and hype around the subject.

We tend to go towards these two poles when we talk about science fiction: the dystopia - the terrible society that has no upsides - and utopia, the perfect society that has no downsides, whereas obviously everything is always going to be a mix of both. On the one hand, this technology allows for exploring new avenues of identity. The dream of the Internet really is that you can be whoever you want. But also it is something that comes with a lot of sort of bittersweetness and pain.

Screenshot from Bot. Dir. Daniel Hoffmann. Germany, 2018, 20:34

Next up, Bot. In this film, we find the AI protagonist, Leo, living alongside a human. The AI understands the human's wishes by analysing his Internet data, and the human simply clicks ‘accept’ without much thought. The film is set in the year 2021. Peter asked the panel whether it was reasonable to imagine this level of AI sophistication, whether it was already here, and whether that sophistication would need a general form of artificial intelligence. Even among experts, there is little consensus about how near artificial general intelligence (also known as AGI or strong AI) might be, though some say it is likely within decades and some go further to postulate superintelligence. Rob described how some of this personal data analysis is already happening now: “your metadata is already being used by social networks to make money from directing interesting things to you that it knows you will like based on your posts, shares and likes on these networks and your web browser history.”

Is it possible to create a conscious artificial being?

Since the publication of Frankenstein, it has been fascinating to ponder whether we might one day create artificial consciousness: “maybe not quite a person, but an entity that had its own goals, desires, thoughts and fears, and even maybe dreams. Several of the films [contain an] idea of an AI which has a conscious form of intelligence”.

In response, Olivia raised the point that we don't really know how we would assess whether an AI was conscious: “I have no proof that you are conscious, or anybody else other than myself is conscious. We just sort of assume based on the way we interact with each other”. Instead, she felt that Bot goes back to a primal question that “underpins a lot of science fiction, going back to arguably the beginning with Frankenstein - that the things that we create are not necessarily any better or worse than we are. We are the creators of them.” Peter responded that it is natural simply to trust that another person or animal is conscious, and spoke of the importance of the role of emotions in AI. For Olivia, this would depend upon the range of domains in which we might use AI. She posed to the panel the following thoughts:

We have intuition, we have instinct, we have emotions which drive decision making and drive our abilities. Whether or not it would be ethical to programme an AI with emotions, I think it's an actually quite loaded question. Would it be wrong to programme an AI that can feel fear or hatred or loneliness? Is that something that we really want to give another living sensing thing? Would it be worse to have a robot that genuinely felt sad or one that could just make you feel like it was feeling sad? Those are really complicated questions.

When Bot mimics the 19-year-old’s father, the man says “you can’t transfer a human to a hard drive”. Peter asked whether this was possible in principle and, if so, when. Panellist Rob stated that he believed we were “right here, right now”, and went on to describe how this is taken to the extreme in the film when it introduces ideas about human longevity:

Maybe one day we’ll be regularly backing ourselves up to make sure that in the future, when we do lose our organic capabilities to remember or to function, this technology can be used to restore us later on, further down the line (Rob).

Interacting with AI

Rob described how such agents are now ubiquitous in our lives and in our homes. Conversational agents like Alexa and Google Home can already be quite ‘pervasive’, giving us ‘insights and nudges’. Such devices, Rob felt, could be helpful within medical care, helping to keep people connected and to be a friend to those who need it. Indeed, Jenn described examples of voice agents being used by people with disabilities and mobility issues. However, the films raise the question: how far can we trust AI? Are such devices always listening? The panel repeatedly suggested a greater need for transparency in the development of AI technologies: explaining to users what happens to their data and being open about the kinds of decisions a system might make about them.

Rob noted how astounding it was to imagine that: “in its most basic form, the metadata collected around the web is being used to serve you ads to make money for large global corporations. And one of the most interesting points in that short film was that there is enough metadata out there to know enough about somebody in order to profile and target them, without any data protection laws being broken.”

Importantly, the films signal a need for AI ethics and regulation, something which there is increasing momentum around at present in the AI world:

There is a need for ethics and proportional fair use in the way we define how AI interacts with us as a society. I see an issue where we’re seeing large corporations trying to steer and control this discussion by creating foundations; by getting involved at the highest levels; but governments being too slow to keep up with that (Rob).

Data harvesting

What we also see in Bot is just how easy it could be to harvest data about a person, and how users scroll through data-sharing legalities and terms and conditions, agreeing blindly, without reading, to uses they might not even know about yet. Panellist Olivia commented that ‘Bot’ was really interesting “in that it just sort of showed how there's no real way to opt-out of having your data being harvested”, going on to say that “the amount of labour and hard work you have to put in just to avoid those companies getting your data is astronomical!”. Jenn agreed and highlighted the inevitable trade-offs we make:

In order to have a personalised experience, we do need to give a certain amount of ourselves away. But, of course, this takes it to the next level, and that’s an interesting trade-off to consider. What trade-offs are you willing to make?

Rob continued that it was easy to be very dystopian but that there were many benefits to AI:

“I like the idea of AI being used for my benefit, to be anticipating my needs.” He also described benefits for mood and mental health.

It’s interesting to reflect on what trade-offs you are willing to make, and what trade-offs you are actually making right now, when you click that ‘Accept’ button.

Screenshot from Forever. Dir. Mitch McGlocklin. USA, 2020, 07:25

This brings us on to the third film, ‘Forever’. In this film, the protagonist is comforted by the idea that the harvesting of personal data will enable him to in some way ‘live on’. Peter described how usually there's the notion that we live forever in the memories of loved ones and in the stories of our lives; in ‘Forever’, there's the notion that we live forever in our data. Mandy described how it was like ‘having a guardian angel with you’ and explored some of the “theological overtones to the language [used in] the descriptions... that's the nature of the deity - to know everything; to be omniscient and be able to see all and to know all.”

Mandy commented that the protagonist was comforted that their data might help others in the future. Putting herself in the protagonist's position, she said:

I might have drunk myself to death; I might have abused myself and my body; I might not be able to cope fully with the life that I lead, but at least the information that's been gathered about me might give somebody else a chance of having a better life; having a better series of experiences. And it seems to me that this is a powerful part of the comfort that the protagonist was deriving from the idea of data collection.

Rob agreed that this kind of collection of our insights and information could be powerful and beneficial: 

I truly believe that this would be a wonderful thing. And it doesn't even have to be a singularity or generalised intelligence. It could just be an incredibly well put together system based on machine and reinforcement learning, with the input of data from a life well-lived.

There are other perspectives: Olivia and Jenn described how this might depend on what version of ‘you’ you have presented online or publicly. Olivia was a little more sceptical: “I guess it depends on the extent that you think life is sort of reflected in data. I mean, if somebody was trying to recreate me from my emails they’d think I spent half my life saying ‘okay, no problem. All the best, Olivia’.” Jenn made a similar point that our data “don’t necessarily always give a fair picture of who you are, or account for change, or different circumstances”. For Mandy, this was the point though: “you don't get to curate who you are. Well, I don't, because I can't be bothered.”

Over to you. What do you think about the collection and processing of personal data? Would you want to live on through your data? What would your data say about you?

Screenshot from The Lonely Orbit. Dirs. Benjamin Morard & Frederic Siegel. Switzerland, 2019.

Finally, in The Lonely Orbit, a person is distracted from his friends by his job and technology, and feels lonely. As a metaphor, the film shows a satellite displaying empathy, and even becoming lonely itself. Peter described how “there's this image in ‘The Lonely Orbit’ that I find so compelling; of these people who have these boring desk jobs, and it appears their desk jobs are very much around the management of social media and satellite networks. And on their coffee break, they’re scrolling through their mobile phones, not talking to each other.” He went on to ask the panellists about “the distraction economy”. Jenn remarked upon how moving the film was, at least to her, in the context of the pandemic, where we are all craving some level of human connection.

We’re seeing many people craving the material [world] and seeking out green spaces.  Some are even rejecting technology altogether. Rather than listening to a Spotify playlist, we want the tactile experience of putting a record on. I know I do (Jenn).

The attention economy

Jenn suggested that distraction is everywhere: it's impossible to sit on a train without seeing a sea of phones and people looking down, trying to avoid eye contact. Peter referred to the Greek economist Yanis Varoufakis, who talks about “us existing in this world where we're all after kind of microwatts of pleasure. Little hits of dopamine. And it feels like the distraction economy is very effective at doing that”.

Rob argued that the attention economy is a real social problem, describing how he personally had turned off all of his phone alerts to avoid it, as well as pointing to some socially useful technologies built on similar ideas: “attention can be diverted using social network-like technologies in a way that makes it useful and doesn't make it just FOMO and having to have the last word”. Mandy welcomed the positive aspects of social media described by Rob, as a kind of ‘bonding tool’, and commented that the film presents viewers with “an image of social media as a distraction; and our particular kind of virtual connectedness as a problem, in that it diverts us from what's there.”

For Mandy, the film had theological overtones including imagery of a “thunderbolt from heaven. The hand of God literally striking down the connections and the distractions”. Mandy wondered whether or not the filmmaker was attempting to provoke in us a response to social media as a ‘sinful’ tool. Olivia added that generationally, she had grown up with the internet and was socialised with it: 

The Internet has just made it so much easier for us to be in contact when, especially this year, it feels like we're all very distant from each other. So I'm critical of social media in a lot of ways. I think that the polarisation and stuff that's happened is due in large part to that. But there are definitely, I think, some significant advantages.

In The Lonely Orbit, people on a break from their screens scroll through their phones rather than talk to each other. What do you do on your coffee break?

The power of stories

We all felt the power of these stories - they evoked many questions and, as you can see from our panel discussion, a range of reactions. The panel closed by thinking about the power of stories. Reflecting on whether we might have any choice in AI becoming intelligent, Olivia suggested it keeps ‘a lot of people up all night’:

Because the idea of this evil AI breakout is all about - like - if there was another thing that was as intelligent, or more intelligent than us, what would it think of us? What would it think of what humanity has done? And that's why stories are always about humans who are worried that something is going to show up and judge us for the horrible things we've done.

The panel discussed the role of positive narratives. “What if the first narrative of the first story we were ever told about AI was something like ‘Wall-E’, where you've got this cute little robot going around sweeping up the planet after the mess we've made of it”, Jenn added, wondering about children growing up today and what stories they will relate to and continue to tell about the future of AI.

Olivia suggested that up to this point, a lot of thinking about AI and data has been really speculative: 

Things like ‘2001’, which was really at the beginning of the computer revolution; stuff like Minority Report. But these films felt very lived-in; very sort of considered. How we actually live with technology now, and what that might look like in the future. Or even now with ‘Forever’, which is much more set in the present.

What next for AI futures? 

These four fascinating films showed the trade-offs we make when living with AI - they also provoked speculation about the future. 

Our panellists are hopeful for a future where AI and technology are aligned collectively with good governance, society and government, mindful of the trade-offs we make when we interact with technology. Can AI make us more human and help us to gain empathy and to deal with issues of diversity? We think so, if we choose well.

It's important that as humans we have clarity, or gain clarity, as to what we really want. What it means to be a human. If we want an AI to serve us as humanity, I think we're going to have to have a pretty good idea individually and collectively about what it means to be a human. And, I believe that AI can help with all of that (Peter).

With thanks to the filmmakers and to the Aesthetica Film Festival. 

To join the conversation about AI Futures follow @labsofDC #AIFutures 

Meet the AI Futures Panel

A table showing photos, names and titles of the AI Futures panel members