An introduction to Pepper: narratives of creepy, killer robots, by Jenn Chubb
This is Pepper – a semi-humanoid robot.
Pepper can recognise faces and respond to basic human commands. Pepper’s main purpose is to engage with people through voice interaction and a touch screen. Pepper is used in schools, hospitals and businesses to welcome and inform visitors, and is also used in research on human–robot interaction.
Why is Pepper important? Simply put, Pepper is an example of a very basic embodied form of AI that can help us understand perceptions of AI and debunk myths about it. I was introduced to Pepper for research and public engagement purposes.
When people talk about AI, many immediately conflate it with robots, probably because it is one of the more exciting ways of thinking about AI. This is our first problem. Of course, embodied forms of AI are part of the story, but most AI is not used to control robots. Nor is it meaningful to talk about building ‘an AI’ with superintelligence. We will come on to this in a moment.
Could Pepper be used to challenge such perceptions, or is that a harder job than we might first think? I met Pepper in the Computer Science lab at the University of York. Pepper’s head was down and, on command, Pepper was ‘woken up’ by a trusted ‘owner’ from the department. The head lifted and the eyes began to flash.
Creepy? Well, it depends who you ask.
Excited to interact with Pepper, on first sight I instinctively assigned female pronouns to ‘her’ – perhaps it was her ‘cute’ face or her slightly high-pitched voice, but something naturally led me to treat the robot like a friend, like a person. The fact that I assigned Pepper a gender is interesting, as I have since been informed that ‘she’ is a ‘he’… Robots do not have a gender, but we naturally attempt to build relationships when we try to communicate and make connections.
Despite my willingness to interact with Pepper, many of my colleagues were not so keen.
You see, Pepper has human-like qualities. Pepper’s hands move and, as it ‘thinks’ (the eyes flash a different colour), they can curl into fists. The hands are even a different texture, mimicking a kind of ‘skin’. Pepper was on a stand when I first saw it, but I was told it could come off this and move around – even dance, if programmed to do so. Pepper is about four feet tall. I wondered how I would feel if it came off the stand, and whether seeing it move would change my response. My colleague commented that such human-like features bothered her regardless of motion, and she referred to the robot as #scary #creepy and #evil. This is reminiscent of the uncanny valley effect – a hypothesized relationship between the degree of an object’s resemblance to a human being and the emotional response it evokes. Perhaps Pepper evoked feelings of eeriness in the same way that some people feel about insects – they don’t think they’re going to take over, they just don’t like them? Perhaps these feelings have also been compounded by portrayals in science fiction?
I was intrigued to think about this a little more, and wondered how others – people of different ages and backgrounds, for example – would feel about meeting Pepper. This kind of human–robot interaction – a field of study dedicated to understanding, designing, and evaluating robotic systems for use by or with humans – is surely part of our understanding of a future with AI. Perhaps robots could even be used to confront, and begin to change, stereotypes?
Myths: Science Fiction, Control and Superintelligence
Whilst we lightly joke about how scary or not these kinds of robots are, the reality is that Pepper is actually rather limited and shows no sign of ‘rising up’ against us any time soon. But still the mix of reactions towards Pepper and other forms of robots, particularly those with human-like features, is stark, extreme even.
Beyond the uncanny valley, people inherently fear a loss of control and conflate robots with notions of superintelligence. Research shows us that perceptions of AI can be dichotomous in terms of strength of feeling: for every positive view, there is a counterpoint. Even this short interaction with Pepper reinforced many of these findings.
For instance, on the one hand there I was, quite excited, amused and interested by this thing; on the other, my colleague expressed visceral fear and really disliked Pepper. I heard stories of individuals who would walk out of a room when the robot was in it. Some of this fear is probably driven by commonly held portrayals of AI and robots in film and on TV, but some of it comes from a deeper sense of perceived existential threat to our humanity – and those who feel this are not alone.
Where fear exists, it seems to relate closely to a perceived threat to an individual’s control and autonomy in the world, and/or to views that intelligent machines will surpass human intelligence and harm ‘us’. With respect to Pepper, this is largely because people might assume Pepper is a General or Super AI capable of human-level intelligence (or more). In truth, Pepper is a narrow AI, which means it can support humans in solving problems only in specific, limited cases. All too often, fear arises when we assume otherwise and imagine that these systems possess superintelligence (the point at which machine intelligence surpasses our own is often called the ‘singularity’). This is a myth: even the experts do not know when that will happen, if at all, and it certainly hasn’t happened yet.
For instance, according to this piece of research, the median estimate of ‘expert’ respondents suggested a one-in-two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine-in-ten chance by 2075… So soon, but not SO soon? And even then, there is a huge debate about what that would look like for humanity.
We also know that AI itself is a struggle to define. Given the lack of coherence in defining the term ‘AI’ across the academic community, it is little wonder that the public find it hard to define too, or instead conceptualise it as machine learning or deep learning – when both are subsets of what we might group under the umbrella term ‘AI’.
With respect to Pepper, then, it is important to note what Pepper is and is NOT. Given Pepper’s limited abilities, most people who interact with it soon see the clear limits of its intelligence, and perhaps the fear dwindles a little. For others, however, an inherent negative reaction remains, regardless of how narrow its intelligence really is.
So my introduction to Pepper revealed that there is more work to be done around debunking myths, as such views certainly contribute to people’s reactions to, and perceptions of, AI. I was left wondering how we can better understand and challenge these perceptions so as to imagine a better future with AI.
We meet again...
I met Pepper again in December, when we held our DC Labs open house event. This time, I decided to use Pepper to engage our visitors with our project, setting up a corner of the room to gather views on Pepper as an example of an (albeit narrow) AI. Unsurprisingly, most people had something to say about it…
As people came by, I asked their views about the robot and took notes on the key themes that emerged from our chats; these are broadly grouped below:
Emotions – people expressed fear, excitement, sadness, intrigue, frustration, isolation, attachment, suspicion. Children did not warm to Pepper. Words like ‘creepy’ and ‘cute’ were used in equal measure!
Appearance and Gender – ‘Pepper is a girl’, ‘Pepper is a little girl, therefore less scary’. ‘Pepper is a male because the name Pepper is masculine’. ‘Pepper seems childlike’. Pepper is ‘cute’, Pepper is ‘adorable’. ‘I prefer a robot that doesn’t look like a human’. ‘Would be worse if it could stand and was on legs’. ‘I don’t like her touching me or moving’. That is ‘creepy’.
Conceptualisation of AI – Pepper is a ‘tangible example of AI’. ‘Not what you would normally think of when you think about AI’. ‘Pepper is a PR stunt for AI’.
Functionality – ‘What does it do?’, ‘Can she hoover?’, ‘Industry should think about the application of AI before it is in use’. ‘What if she makes an error?’, ‘What about the unintended consequences?’
Ethics, bias and trust – ‘Can I trust it?’, ‘Where are the images going when it talks to me?’, ‘I don’t want to look into its eyes – I haven’t given consent’. ‘Just no’.
In many ways, the topics covered here are not particularly surprising, but they certainly echo much of what is found in formal research studies and in the literature. Again, this exercise was largely done to promote the project and to hear some views, rather than to provide a rigorous account of views on AI, but the initial thoughts and observations will serve as food for thought as I begin to formulate the project’s research questions over the coming weeks.
AI Futures Research
This field of enquiry is a multidisciplinary effort, with a growing sociological and philosophical interest in public perceptions and conceptions of AI. With respect to the discussion above, leading centres are investigating these issues and have published research into AI narratives about ‘scary robots’ and more, while others have surveyed experts to estimate when General AI / human-like AI might be expected.
This work is therefore fundamental to understanding the public response to AI and has the potential to affect a range of other actors including scientists, regulators, technologists and more.
As we see increased investment in research and a growing dialogue between academia, policy actors, industry and the public, it will be important to consider what we actually want from a future with AI, what beneficial AI looks like and how we can get there.
To join us in a conversation about the benefits of a future with AI, contact firstname.lastname@example.org.