AI narrative-making and the changing notions of research accessibility
XR Stories Research Fellow and DC Labs associate Dr Jenn Chubb, and Professor Dave Beer from the Department of Sociology, write about perceptions of AI being used to create narratives that boost public, and specifically children's, understanding of academic research.
Because of their novelty, it’s often difficult to resist playing with new AI tools. Even more so when, as a researcher, you are offered a tool that automatically condenses your research paper into more accessible language. In the case of GPT-3, the promise is to transform research descriptions so as to engage a ‘second grader’. The invitation, in this case, is to allow the AI to play with the narrative in order to render it accessible to some imagined audience. Two things are occurring in this particular game. First is the suggestion that there are audiences that research outputs cannot reach because, it is implied, the story being told cannot be fully comprehended. The second is the suggestion that AI can be used to convert this story and phrasing into something that the audience, in this case a ‘second grader’, can understand. This is a playful intervention of AI into the very ‘language games’ of knowledge.
Of course, in these supposed knowledge economies the ability to communicate research to different audiences is a skill in ever-increasing demand, not least when lay summaries or ‘plain English abstracts’ are expected in funding proposals and elsewhere. This push inevitably creates niches into which AI can be proposed as a solution.
The new website tl;dr papers claims to do just this, seeking to re-narrate research outcomes around a notion of heightened accessibility. The tl;dr tool is powered by GPT-3, OpenAI’s language generator, which can create human-like text and looks at first sight to be incredibly good at doing so. GPT-3 can generate any kind of text, any kind of story. It can write articles. It can even generate guitar tablature for your favourite song. When GPT-3 was first announced, the AI community and related fields were rightly concerned about the potential for harm from biases, and they raised issues about responsible innovation, design, and algorithmic justice and fairness. So too, questions were asked about what kind of ‘intelligence’ the narratives created by GPT-3 evoke, or whether it simply distracts from the narrative content. At the same time, such tools raise questions about what notions of accessibility are driving these developments and what value there is in applying those notions to research communication.
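For readers curious about the mechanics, a tool of this kind can be sketched in a few lines. This is a minimal sketch, not tl;dr papers' actual implementation (which is not public): the prompt wording is borrowed from OpenAI's own 'second grader' example, and the call uses the legacy completions endpoint of OpenAI's Python client as it stood when GPT-3 launched. The model name and sampling parameters are illustrative assumptions.

```python
def build_summary_prompt(abstract: str) -> str:
    """Prepend the 'second grader' instruction used in OpenAI's own demo."""
    return "Summarize this for a second-grade student:\n\n" + abstract.strip()

def summarise(abstract: str, api_key: str) -> str:
    """Send the prompt to GPT-3 via OpenAI's legacy completions endpoint.

    Requires the third-party `openai` package (pre-1.0 interface) and a
    valid API key; the import is kept local so the sketch runs without it.
    """
    import openai  # third-party dependency, only needed if actually called
    openai.api_key = api_key
    response = openai.Completion.create(
        model="text-davinci-002",   # an illustrative GPT-3 model name
        prompt=build_summary_prompt(abstract),
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```

The notable point is how little of the 'accessibility' logic lives in the code: the entire notion of the imagined audience is carried by one line of prompt text, with the rest delegated to the model.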
The tl;dr tool pushes us to think about the notions of readability and accessibility being built into AI systems, especially where there is an underlying expectation of enhanced research communication. The aim may be to automate communications, but there remains some hesitancy from the provider: the results, the site tells us, ‘may not be perfect’.
At a glance, however, the reviews suggest the tool is able to add another layer of narrative to research communications, at least in the sense of turning research into a form of light entertainment. There may be a second objective here: finding a way of making research more shareable and more fitting for the logics of social media circulation. Many Twitter posts describe the tool as ‘fun’, ‘excellent’, ‘fab’, ‘OMG hilarious’, ‘incredible’. One response even suggested that the tool provided a means of ‘outsourcing’ the communications. Here we see the playfulness of engagements with AI seeping into the sharing of the automated outcomes. On the whole, most views were light-hearted in nature, which perhaps suggests that playful narratives are the aim of this particular tool.
To test this out, we tried the tool on our own work. The abstract of one of our papers, about the ethics of conversational AI for children’s storytelling, was run through tl;dr:
(Chubb, J., Missaoui, S., Concannon, S., Maloney, L., & Walker, J. A. (2021). Interactive storytelling for children: A case-study of design and development considerations for ethical conversational AI. International Journal of Child-Computer Interaction, 100403.)
The outcome is, admittedly, an amusing take on the article. On first reading it doesn’t do too bad a job, until we got to the arms and legs. Many Twitter posts, however, noted a tendency for the tool to focus on one aspect of an abstract. This was the case here: important elements of the study’s context, e.g. ethics, children and storytelling, were somewhat missing from the newly constructed narrative.
It would seem that, inevitably, there remains significant potential for misreading and misunderstanding. The arms-and-legs reference illustrates how this device is trying to find meanings to replicate rather than interpreting the actual arguments. Yet the drive for accessibility is likely to mean that such tools are stepping-stones to other applications in which AI is used to condense, summarise and create narratives.
Tl;dr is not the first ‘readability’ tool to attempt to make accessible those research and academic texts thought to be too complex or convoluted. The Paper Digest tool condenses article arguments into key points and seeks to reduce the time it takes a reader to get to grips with research. Enter, stage left, the long-standing ‘Gunning Fog Index’, which attempts to gauge how accessible a text is: ‘An interpretation is that the text can be understood by someone who left full-time education at a later age than the index’, reads the site. This tool, quite different from GPT-3, simply uses a weighted average of the number of words per sentence and the number of long words. Calculated by an algorithm, it has its limits: not all three-syllable words, for instance, are difficult for people to understand. Take the word ‘difficult’ itself.

In these cases the AI is charged with resolving perceived difficulty and producing something more accessible, so that particular understandings of both difficulty and accessibility become part of how the AI is directed. Of course, the AI isn’t understanding the research itself; it is trying to offer an interpretation based on the terms used and how they might be rendered into a language that is imagined to be more accessible. What is often lost in such processes are the actual nuances that underpin originality in research. There are of course many other such tools around, and more will continue to emerge in the coming years. This will require an engagement with the very notion of accessibility that underpins these systems and how that accessibility is being automated.
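The Gunning Fog Index mentioned above is simple enough to write out in full, which makes its limits easy to see. A minimal sketch follows: the published formula is 0.4 × (average sentence length + percentage of ‘complex’ words), where complex words have three or more syllables. The syllable counter here is a rough vowel-group heuristic of our own, and the real index also excludes proper nouns and familiar jargon, which this sketch does not attempt.

```python
import re

def syllable_estimate(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def gunning_fog(text: str) -> float:
    """Gunning Fog Index: 0.4 * (avg words per sentence + % complex words).

    'Complex' is approximated as three or more estimated syllables; the
    published index additionally excludes proper nouns, familiar jargon
    and compound words, which this sketch ignores.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllable_estimate(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

On this heuristic a short all-monosyllable sentence like ‘The cat sat on the mat.’ scores 2.4, while ‘difficult’ itself counts as a complex word, which is exactly the limitation noted above: syllable count is a crude proxy for how hard a word actually is to understand.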
So, is AI a good solution for this kind of engagement? In a recent paper, one of the authors found that AI was perceived by experts in the field as helpful in supporting the research process with respect to information gathering and other narrow tasks, and in supporting impact, interdisciplinarity, open innovation, public engagement and citizen science. When considering the role of AI in research, participants regularly referred to the idea that AI could act as a bridge beyond the university context, and that boundaries could be expanded through greater participation in science. The notion of the ‘bridge’ is, we would suggest, important here. This bridge is constructed in a way that responds to existing perceptions and notions of accessibility.
As tools that narrate our research for us proliferate, we may need to think about the notion of accessibility that is coded into them, how this leads our research to be automatically narrated, and what this means for the imagined audiences that such tools are trying to reach through the possibilities of AI.
Dr Jenn Chubb and Professor Dave Beer work at the University of York where Jenn is a Research Fellow in XR Stories and Dave is a Professor of Sociology.