Conversational AI for Kids: Tips for the Ethical and Responsible Design
Digital Creativity Labs is collaborating with BAFTA award-winning digital agency Joi Polloi on an exciting new project that uses AI voice technologies to engage with children. The project, which is developing a meta-story chat tool that allows children to safely engage with characters from their favourite TV show through voice-assisted smart devices such as Amazon Alexa or Google Home, is nearing completion, with a prototype tool and a report reviewing recent research into the ethical implications of conversational AI for children.
Voice AI for Kids
“Already one in four children between 5 and 16 years of age live in a household with a voice-activated virtual assistant in the UK” (Childwise Monitor, 2019).
Voice-activated virtual assistants such as Alexa, Google Home and Siri are digital assistants that use voice recognition, speech synthesis and Natural Language Processing (NLP), all underpinned by Artificial Intelligence (AI), and they are ubiquitous in our lives. From settling family dinner-table arguments or choosing your favourite song, to providing life-enhancing access for those with mental or physical impairments, these technologies are increasingly popular across the generations, though arguably still relatively opaque to the user.
As attention turns to the ethics of AI, the implications of these technologies are increasingly under the spotlight. However, little is known about how children perceive these particular devices, or about the effects the devices have on this specific group.
What have we learned about the responsible development of conversational AI for children?
This short blog shares some key findings from our research and a (non-exhaustive) review of the current state of the art on conversational AI for children. It builds on a review of the recent research into the ethical implications of AI and Voice AI which we are hoping to publish shortly.
The Impact of Conversational AI on Children
We grouped the implications of conversational AI for children under four themes:
1. Conversational AI affects the cognitive and linguistic development of children (e.g. education, learning and access).
- AI has a role in helping young people learn and develop their skills, which is often cited as a positive aspect of the technology; this is particularly helpful for young people with impairments.
- Conversational AI provides access to internet searching and affects children's 'question-asking' behaviour. Consideration should be given to the development of effective content.
- Voice assistants provide young children access to information which would normally require the ability to read and write.
- There is a need to better support children and their parents as voice agents become a greater source of answers to children's questions. Some literature suggests that a downside of this is that it might hinder a child's ability to share information.
- Voice assistants can affect the linguistic habits of children, particularly with respect to politeness, which may affect 'their interpersonal dealings later in life'.
- As children's speech is not yet fully developed, voice agents might not always understand them. There is a need for inclusive solutions when building conversational AI.
- Voice and tone are important when designing AI for children in order to build a friendly and engaging agent; designers should go beyond this and consider the content of the script in order to enhance the interaction.
- Consider embedding into systems explanations of why an agent can or cannot respond to certain demands; this helps children to really learn.
2. Conversational AI has implications for how children learn moral and social codes of behaviour, including how they interact with others, how they treat technology, and how they behave socially (e.g. civility).
- The literature shows that children form emotional attachments to machines, computers and even their voice agents. This raises questions about what it is to be human and whether machines could ever be 'emotional'; many feel emotion is something that should be reserved for human-human interaction. This has implications for design if children see the agent in a human way.
- Some children associate machines less with mortality and being human, and therefore show less care; some may be more tempted to abuse systems, trick them or deceive them.
- Voice AI may encourage the view among children that they can expect gratification or 'immediate responses to their requests'. When designing voice AI for children, it is important to note the effects on civility and on how children learn manners.
- As children are still learning how to form dialogue and speech, draw meaning and develop relationships, these findings are pertinent to the design of any conversational AI for children. In particular, designers should accommodate and be responsive to 'the different language of child users of varying demographics'.
3. Conversational AI has wide-ranging ethical implications: privacy, security, consent, permissions, transparency, trust, explainability, safety, data use and surveillance.
- Privacy is a key concern and the safeguarding of young people and their rights is paramount.
- For the responsible development of voice agents, designers ought to ask "what kind of values should guide the project and how they should be weighed."
- Transparency is paramount: what is the system recording? When is it 'on'? Is it listening? What data is recorded? How is it stored? Why did the system make a particular decision? Designers could include visual indicators to make these issues transparent.
- Concerns over parental access, expectations and control appear paramount in the literature. Consider the expectations of the user and their parents, as these may be at odds. Offer parents the opportunity to be involved in privacy discussions and include stakeholders.
- Be alert to secrecy: some research has shown that children feel inclined to tell voice agents their secrets, while parents are concerned about what conversations their children are having.
- Consider the contextual use of the technology, including everyday norms and practices of the users and the context in which they live.
- Engage thoroughly with children’s privacy law and uphold their rights and dignities at all times.
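The design advice above (announce when the system is listening, gate engagement on parental consent, and be explicit about what is stored) can be illustrated with a minimal sketch. Everything here is hypothetical: `PrivacyPolicy`, `SessionLog` and `handle_utterance` are illustrative names, not part of any real voice-assistant SDK, and a real skill would follow the platform's own consent and data-handling mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyPolicy:
    """Hypothetical per-child privacy settings, configured by a parent."""
    parental_consent: bool = False   # has a parent opted the child in?
    store_transcripts: bool = False  # may utterances be retained at all?
    retention_days: int = 0          # how long retained data may be kept

@dataclass
class SessionLog:
    notices: list = field(default_factory=list)      # transparency messages
    transcripts: list = field(default_factory=list)  # retained utterances

def handle_utterance(utterance: str, policy: PrivacyPolicy, log: SessionLog) -> str:
    # Refuse to engage at all without explicit parental consent.
    if not policy.parental_consent:
        return "Please ask a grown-up to set up this skill before we can chat."
    # Be transparent: state that the microphone is active and whether
    # anything is kept, rather than recording silently.
    if policy.store_transcripts:
        log.notices.append(
            f"Listening now; what you say is kept for {policy.retention_days} days."
        )
        log.transcripts.append(utterance)
    else:
        log.notices.append("Listening now; nothing you say is stored.")
    return f"You said: {utterance}"
```

The point of the sketch is that consent and transparency are enforced in one place, before any data handling happens, so no code path can record a child silently or without a parent's opt-in.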
4. Conversational AI must be inclusive. We are all unique, and designers must prevent bias (gender, racial, class, etc.) from seeping into the design of conversational AI.
- In order to promote inclusion, consider how biases, including those of gender, age, race and class, might seep into the development of technologies.
- Research suggests that users assign meaning (say, gender) to systems. This implies designers ought to think beyond the voice of conversational AI when trying to challenge stereotypes concerning gender or race in their technologies.
- Developers should be attuned to the fact that accent and tone are important for inclusion; left unchecked, they could lead to exclusion and even bias.
- The content, and positioning it around what users want, are paramount, which implies the need for greater stakeholder dialogue and consultation. Go beyond the voice to develop content for voice agents, and think deeply about what is appropriate for your technology and its users.
Harnessing Collective Responsibility to Develop Ethical AI
In helping Joi Polloi to consider the ethical implications of the technology, we engage in a two-way dialogue to ensure its responsible development. By working together, we anticipate and reflect on potential issues relating to the development of AI Fan Along, so as to safeguard the privacy and rights of young people.
We intend to make clear and transparent:
- how and what data is collected;
- how we mitigate bias;
- how far, and in what ways, we have safeguarded children, and what that means for creating a friendly and engaging environment for young people to learn, develop and engage safely;
- how we might develop a personalised experience while upholding the privacy and rights of young people and their parents;
- how we will ensure the responsible design of the AI Fan Along tool through stakeholder engagement.
Going forward, the ethical and responsible development of future technologies building on this prototype will be explored by using content from a children's television show to extend research and development on the project. From September 2021, Dr Jenn Chubb holds a fellowship with XR Stories focused on ethics and responsible innovation in storytelling, and will continue to work on these issues.
References to the prior research, and further detail that informed this blog, can be found in our forthcoming paper: Developing Voice Technologies for Children: Technical and Ethical Considerations for the Creative Industries.