AI Futures update: Some emerging themes and ponderings, by Dr Jenn Chubb
Over the last three months, I have conducted interviews with research and thought leaders across disciplines ranging from Computer Science, Engineering and Education to Philosophy and the Social Sciences, in order to build a picture of current views about the future of AI.
Interviewees had expertise in considering the future of AI and where it might be taking us - if we choose carefully - and we had some fascinating discussions on the risks and benefits presented by AI across a range of domains. As an example of the kinds of themes emerging during analysis, experts talked at length about a range of issues including: dominant narratives about AI, the role of the media and hype, the likelihood of Artificial General Intelligence (AGI), existential risk, safety, creativity, responsible development of AI and ethics, consciousness, robotics, representation and equality. I will broadly and briefly describe some of these here before going into greater depth in future updates.
Central to our research was questioning what ‘we’ actually want from AI - a question which is increasingly raised by some of AI’s leading minds.
Twenty-three semi-structured interviews were carried out online in the summer of 2020 with scholarly experts and thought leaders known for their work on the future of AI. Broadly, we were interested in how we nurture the creativity required to build a better future with AI.
We asked experts to define AI, to explore notions of superintelligence, and to consider how one might come to make predictions about AGI. We asked participants to consider the commonly told stories about AI, who features in them, and the extent to which they might differ from reality. We then asked experts to consider a range of alternative narratives concerning the use of AI, particularly with respect to work, home and leisure. In doing so, we asked what a future with AI felt like to them, whether they were positive or negative about that future, and in what areas. Looking ahead, we asked experts to consider the role AI might play in our individual lives and how we might build a beneficial future with AI.
Our main finding is that experts feel there is a need for more nuanced stories and narratives about AI, and that many dominant narratives, such as embodied AI and superintelligence, often serve to distract from the important conversations that need to happen now about AI:
“Futures like superintelligence capture a lot of our attention but there are other ones and we need to be paying more attention to them.”
Experts claim there is a ‘story crisis’ in AI, and that more nuanced narratives and images are required in order to build understanding of AI and its uses. The dominant public narratives described tended to concern superintelligence, anthropomorphised robots and job losses. Experts repeatedly suggested a need for more narratives about AI that include the middle ground and are more positive.
“There is a general lack of awareness about what happens in terms of how everything gets into our home and how civilisation functions and most of what AI does, it does it in those spaces, and so I think those are actually fairly, I mean in some sense, they are boring stories but in another sense it’s the way AI gets woven into how our civilisation and is run - these are just completely missing narratives.”
Experts feel that public and general perspectives on AI are often polarised, and that what we have today in terms of AI - what most participants described as ‘narrow’ or ‘dumb’ AI - is all too often conflated with the ‘spectacular’ and with notions of superintelligence. Often this was seen as reflecting trends in science fiction, but also prominent work in the philosophy of science around existential risk. This resulted in expressed frustration and concern, particularly among those from the engineering, maths and computing disciplines, who felt that these debates were not always realistic when there were more ‘pressing issues’. Others felt that speculative ethics about general intelligence was simply too complex and often unknown.
The history of these narratives was discussed. Experts talked in particular about how stories, science fiction and the media play no small role in over-hyping AI and proliferating these narratives:
"Nobody ever lost their job from overhyping AI and robots".
It was widely acknowledged that stories such as HAL in 2001: A Space Odyssey and The Terminator may have played a part in why public consciousness leans toward a notion of AI as a potential threat. The second most dominant narrative perceived by our participants concerned the loss of jobs through AI and automation.
The power of storytelling about AI was therefore noted, and experts felt there was a need to gather nuanced information and scenarios about the future with AI in order to stimulate collective visions more reflective of the realities. Interviewees remarked how current portrayals of AI often divert public perception from reality in quite an unhelpful way. Our experts urge that we focus on and gather niche and nuanced stories grounded less in the spectacular and more in empirics.
“Amazing things happen when enough people believe in a good story,” one participant remarked.
“Imagine if Wall-E was the first public narrative about AI and not 2001: A Space Odyssey!” another participant exclaimed.
Perhaps the younger generation will have a different perspective on the future of AI. These findings resonate with the work of the Leverhulme Centre for the Future of Intelligence and the Royal Society on AI Narratives.
Beyond fiction, AI is seen to present a paradox: on the one hand, AI is often presented as a silver bullet for solving all problems; on the other, as the very origin of moral panic, owing to issues concerning surveillance, privacy and control. Attention, experts suggest, must be paid to the ‘entanglement’ of these two poles.
Of course, the effects of AI can be positive, exciting and fun, but our experts also reported a great deal of worry and wariness, and reflected heavily on the ethical, philosophical, economic and societal implications of AI.
Interestingly, almost all of the experts we interviewed revealed, unprompted, that they were reluctant to welcome AI into their homes, often rejecting voice-assistant technology such as Alexa or social media platforms such as Facebook, citing privacy concerns.
AI, politics and power
“I think maybe it’s less about the technology itself but more about the politics of the technology.”
Experts regularly described how society currently faces multiple crises and political change. Algorithmic and political scandals concerning big data, and movements relating to inequalities such as Black Lives Matter, provide a backdrop to many of our discussions - not least the global crises we face in climate change and, of course, Covid-19. Here, AI's limitations and strengths have come into sharp focus, while at the same time a return to materiality seems preferred by our experts.
Not surprisingly, AI is considered political - experts warned that one cannot dream up a future blind to political power, structures and social conditions. Structures of power and equality emerged as key themes from the research, and despite attempts to democratise AI and technology, experts felt there was much to be done with respect to fairness and equality. In light of this, we asked participants how we nurture the creativity required to build a future with AI that would benefit individuals and, ultimately, society as a whole.
Overall, experts had mixed feelings about AI both today and in the future, particularly citing ethical concerns. In order to build AI for a beneficial future, experts claimed, issues such as privacy, algorithmic bias, fairness, trust, democracy and governance must be addressed today and reflected upon on an ongoing, continual and rigorous basis.
Indeed, analysis shows that despite a concerted effort to develop ethical principles and frameworks to help guide individuals and industry in developing ethical AI, experts continued to raise ‘ethical roadblocks’, many feeling that the legislation and governance of AI are falling short. It has recently been acknowledged that the ‘term ethical AI is finally starting to mean something’, and our participants provided a range of views to support this, though there is still work to be done. Bias and algorithmic injustice emerged as the most discussed issue in our interviews, followed by responsibility and trust. Experts described the issue of value alignment and detailed the values the community ought to foster when designing and implementing AI, noting how these might be domain- or culturally relative. AI systems, they claimed, must be accountable, with consequences for bad actors; explainable; fair; aligned to human goals and values; non-discriminatory; protective of our privacy; subject to robust standards and reporting; and safe, transparent and trustworthy. The research aims to contribute to the growing body of work on ethics and the Responsible Research and Innovation (RRI) agenda.
Holding the microphone up to different voices
Experts suggest that we must stimulate collective visions from a range of critical and diverse voices. They warned that the very communities propagating these narratives, and those designing the technology, are not always representative of diverse groups, most frequently representing elite, white and often privileged communities. As such, AI could be said to benefit most those who perhaps need it the least. We so often hear of a 'need' for new technology in this or that area - but will it make our lives any better? Does anyone pause to reflect on what we actually want? What will actually make humanity thrive?
We asked our participants 'What do we want from AI?' For a better, fairer society, all agreed the first consideration must be who is included in that ‘We’. More updates are forthcoming.
New visions for a future with AI
Whilst our research reveals many benefits, the more nuanced picture of where AI might take us is less known. The tendency of the media to sell tales about AI and its implications for big societal issues in healthcare, Higher Education, food production and climate change might distract from the ways in which AI can help us, on an individual, everyday level, to find enjoyment, delight and happiness. In future blogs, we will discuss our findings on what the community - the AI ecosystem, inclusive of scholars, decision makers, technologists and the public - might need to do in terms of research and dialogue with practice to produce new and different stories about the way AI can be mobilised to enable humanity to thrive.
Participants provided examples of more nuanced futures which we will present in forthcoming papers and blogs including how AI might help with ‘A-Ha’ moments, act as a ‘friend’ or even a ‘personal trainer’ or ‘bridge’ and ‘matchmake’ between individuals and communities. They described a road map as to how we get there, which we will share in future updates. Thematic analysis is currently ongoing.
We are currently moving into the next phase of interviewing AI policy makers, so watch this space for an update on emerging themes from this group. After that, we will interview technologists.
Come back for the results, soon!