‘Alexa? Please go away’. AI in the home: views from AI thought leaders
A thought piece by Jenn Chubb, Darren Reed and Peter Cowling
It is well known that Facebook CEO Mark Zuckerberg tapes over his own webcam, a very deliberate step from one of the world’s tech giants to protect his own privacy. At the same time, Bill Gates, Steve Jobs and other leaders have limited screen time and tech use for their own children. The headlines increasingly tell us that Silicon Valley parents are raising their children to be tech-free, or at least tech-rationed.
What is it that they know that the customers who have made them billionaires don’t? Simply put, tech experts and developers seem more aware of the social and developmental issues associated with using the technology they are creating. While a fast-growing community of scholars and commentators develops an ever-deeper understanding of the social and ethical implications of AI and digital technologies, development itself continues at pace.
But what about those researching AI?
Notwithstanding the growing corpus of academic literature on the future, risks, privacy, surveillance and policing of AI, little research actually describes how experts use digital technology and AI in the home. An understanding of the behaviour of those in possession of the best possible information about harms and benefits might help to break through misinformation and marketing hype as AI becomes ubiquitous: in our smartphones, behind the scenes in our online shopping recommendations, and in our games.
This piece attempts to do so, presenting a dominant theme from research conducted in summer 2020 for a project on AI Futures. The work involved interviews with 25 AI Futures scholars and thought leaders from a range of disciplines across the UK, Europe, Canada and the US. These semi-structured interviews were ostensibly to investigate their views on the future of AI in a broader context, though we also asked about the use of AI in the home. Our interviews show that scholars almost unanimously reject the everyday use of AI in the home, particularly conversational AI agents such as Alexa and Google Home. The principal reasons given are transparency of data use, privacy, and surveillance. At a larger scale, our interviewees cited problems with the development of social skills in younger generations and with understanding how to fit into and create an inclusive, connected and creative society.
Overwhelmingly, scholars wanted a kind of democracy in AI: to protect privacy, to be transparent, and to make visible the kinds of power relations that lie behind the technology. When experts described ways in which they felt AI might benefit their individual lives, most felt that any real benefit would come at the cost of personal privacy, and most were dubious that more AI would lead to more wellbeing. They described how AI research and development might instead focus on building connection, communication and learning.
In an ideal world, some described how AI could be used to free up time to focus on things unrelated to work and technology, giving voice to those who are silenced, helping to inspire those ‘a-ha moments’, reducing complexity, enhancing creativity, enhancing individuals’ mental and physical health and, crucially, augmenting human intelligence rather than trying to replace or replicate it. Most feel that relationships with AI agents cannot substitute for human relationships, and that AI must instead reflect human values, codesigned in full view of the social context and domain in which it operates. Our experts call for a focus on trust, safety, privacy and transparency. They consider that AI is often not the best solution for the people involved when it is used to solve a problem, and suggest that “no one is asked what kind of a future we really want, or who is included in that we.” To come close to an answer to that question, scholars promoted the need for better stories about AI:
Narrative really, really matters. It’s incredibly influential, it is at every age, and especially for kids, but at every age. And I don’t know exactly what the path is to this, but I think we need to be thinking about how to diversify the voices that are telling the stories about our future.
The following themes emerged from our research about the everyday attitudes and behaviour of those leading research about AI futures today.
A Rejection of Tech in the Home?
Through inductive thematic analysis of the data, we found that almost all of our panel of experts on AI futures revealed, without prompting, a reluctance or a complete rejection of many forms of explicit AI tools and technologies in their homes on privacy grounds, citing the need for greater transparency and what many referred to as ‘design justice’. Some participants even expressed a preference for being completely “off-grid”:
You know, my phone for making normal phone calls is a normal kind of mobile phone; I don’t have a smartphone. I have a tablet which I use for doing this kind of work but I like to be off-grid.
We don’t have a home assistive device nor any kind of robot in any significant way.
I choose not to have my home very smart for a couple of reasons. I don’t think it’s necessary, I think it introduces risk, you know, the smarter things are, the more likely they can be hacked...the sort of gut cost benefit analysis tells me that it’s not worth it.
Despite this, most stated that they did enjoy technology and that they were not anti-tech:
I want to be on a kind of nodding acquaintance with modern technology, I’m not someone who always wants to go with the next thing.
I don’t want to be a power user but I like new interfaces, I like new ways of engaging with the technology.
All participants shared grave concerns about the future role of AI technologies in our everyday lives. Particular risks were associated with voice agents and social media, particularly for younger generations.
The most striking theme was that almost all participants stated that they did not have a voice assistant in their home. The few that did stated that they used it sparingly and for practical purposes, e.g. settling dinner table arguments, setting cooking timers, etc.:
I have an Alexa at home [...]. The only useful thing that I found that I needed from it was the shopping list facility [...]. My son has one – he largely uses that for setting timers for cooking, it is actually quite good for that, too.
Most, however, did not welcome voice AI into their homes; of the 25 interviewees, almost all stated that they did not have an Alexa or similar device, on data privacy grounds:
I don’t have any Alexa or stuff on purpose. It was a decision made very, very carefully.
I don’t have Siri, I don’t have anything like that, I never would, Alexa or anything like that. I don’t have a smartphone. I would never have a smartphone until everybody knows how to treat that data with respect.
Some participants deactivated Siri and voice AI on their phones. These decisions were often framed as collective family or household policies:
They’re deactivated on our devices but I know that actually deactivated in those things does not mean that our data, including audio feeds are not in some way also being analysed by the tech companies in terms of thinking about profiles and so on...
I can’t stand them, I’ve switched off every single voice assistant on my computers, phones, everything and every time they sort of wake up because for some reason they hear something and they think I want to turn them on, I just turn them off again, …
Many felt that the level of personal data required for effective personalisation of an intelligent Voice AI was simply not worth it:
For an intelligent assistant to actually offer assistance it needs to know you well and to be aware, to have situational awareness, understand your goals, you know, the things you tend to screw up on so it means that you really have to place yourself under a certain level of surveillance by your AI assistant and that I dislike intensely. So, I don’t have an Alexa or… I would never allow one in my house until it can operate entirely locally.
Social Media Use
When asked more generally about AI in the home, most participants also expressed reluctance to use social media platforms, citing the immediate availability of content, the potential for coercion or manipulation, privacy, and concerns that social media is, as one participant put it, “the prime location for political propaganda”:
AI can be used for the worst possible things, right? The way Facebook was used by the Trump campaign in 2016, is absolutely, you know, is a case in point. Facebook’s refusal at the moment to flag untruths as untruths, when they come from the President of the United States!?
Platforms like Facebook and Twitter were frequently given as ones that our experts avoided in part or in whole. Common concerns included business models, privacy, Donald Trump (the interviews predated Biden’s election), and the influencing of democratic outcomes by organisations such as Cambridge Analytica. Those that did use social media appeared to express shame or regret: “I use Facebook, I’m a bit embarrassed about it”. Most disliked it: “I am on social media but I’m not a very sophisticated user and I’ve mostly grown to dislike them over time”.
Instead, some stated that they had come off and rejected social media altogether:
I’ve closed my Facebook account and erased all my data from Facebook because I can’t stand that sodding company.
I have left Facebook over concerns about interference with both my personal privacy and with democracy.
Many felt that social media was potentially damaging to wellbeing and happiness and that its purposes were problematic:
I’m not sure that I expect social media to sort of play a role in looking after our wellbeing. It feeds into cycles of depression, an obsession that people go through. It feeds into echo chambers that we use to not learn anything.
Many felt that social media plays a negative role in the wellbeing of its users, citing ‘information bubbles’ which reinforce particular world views, its role in political scandals, and manipulative qualities including a lack of transparency.
Almost all participants expressed particular concern over the influence of voice AI technology on younger generations. These issues related mostly to civility and to what children might be learning about gender, acceptable behaviour and society. The subservience of Alexa’s default female voice is one such example:
We don’t have an Alexa but my stepdaughter and her family have one and actually I don’t, you know, when I’m there I don’t really like to hear the children say, “Alexa, do this. Alexa, do that”. And this female voice in that case kind of responds without any kind of fluster or never a challenging back or never saying, “Say please”. You know?
Several raised concerns about child development, worrying that these devices might interfere with the face-to-face interaction and play that children need to thrive:
Humans don't always respond as Alexa does.
Particular issues were raised with respect to the safeguarding and safety of younger generations:
You don’t want to be trolled, you don’t want to be click baited. You don’t want to be harassed online. You need to protect children and digital rights and privacy.
One participant felt that as a society there is a “duty that we are signing up for for children, that I don’t know - we need to make it somehow beneficial to their wellbeing or not use it at all.”
AI in the Home: What’s the Good News?
Our participants were keen to stress more positive everyday uses of AI, including the potential for connecting generations (teaching young children), simplifying practical tasks (e.g. setting timers, settling dinner table arguments and finding jokes), and functional navigation or web searching.
Additionally, recommendation functions for leisure purposes, such as music, were regarded more favourably, whereas Facebook content, including paid advertising, was seen as problematic. Overwhelmingly, scholars expressed that considering what they might personally want from AI was, at this point, an indulgence from a position of relative privilege. Instead, most suggested that any question we ask about the future of AI must be inclusive of all parts of society, sensitive to all social backgrounds, ideologies and sentiments.
We find that experts generally had mixed feelings about AI, both today and in the future, which may be attributed to their positions at the forefront of studying and developing AI technologies and assessing their risks. This piece provides a stark provocation: the behaviour of those at the forefront of technology indicates the need to take these risks seriously. Whilst technology has been a lifeline and a saviour during the COVID-19 pandemic, we might want to consider how far we continue that reliance in our ongoing day-to-day lives:
My experience of the pandemic brought me to, you know, nurture some kind of real rejection for digital interaction. I think people have remembered what it is like to spend time with their children, that is both good and bad, and in spaces, again both good and bad for green spaces particularly.
Before we embarked on this research, members of the team had a variety of voice-activated devices that embodied AI around their homes. Our voice-activated devices are by now largely gone, and our lives may be a little better for that.