Speeding up to keep up: AI in the research process
A thought piece by Jenn Chubb, Darren Reed and Peter Cowling
The rise and convergence of technologies such as Artificial Intelligence (AI) are shaping the way we live our lives in profound ways, and there are concerns over the efficacy of Machine Learning (ML) and AI approaches in a range of settings affecting the social world, such as healthcare and education. This concern extends to university research.
A recent review by UKRI provides a very clear steer on the role that research can play in ensuring a beneficial future with AI, suggesting that there is potential for “AI to allow us to do research differently, radically accelerating the discovery process and enabling breakthroughs”, though this remains relatively under-explored empirically.
In a recent workshop on AI and science, Professor James Wilsdon described how similarities can be drawn with the effects of metrics and the need for responsible indicators, e.g. DORA and The Leiden Manifesto. We know that the research funding community (e.g. the Research Council of Norway) has been using ML and AI techniques within the research funding system (in grant management and research processes) to increase efficiency. However, further steps are needed to examine the effects and to understand what a responsible use of ML and AI would look like. This requires the research policy community to develop and test different approaches to the evaluation and allocation of research funding, such as randomisation and automated decision-making techniques.
Major funders of academic research have begun to explore how AI could transform our world and the role they can play in utilising AI as an enabler of new methods, processes, management and evaluation in research. At the same time, there is recognition of the potential for disruption to researchers and institutions, and of clear challenges ahead, since AI is already implicated in researcher efficiency and productivity.
Our journal article in AI & Society analysed interviews with leading scholars on the potential impact of AI on research practice and culture, showing the issues affecting academics and universities today. The role of AI within research policy and practice is an interesting lens through which to investigate AI and society. Drawing on these interviews, our research reflected on the role of AI in the research process and its positive and negative implications. To do this, we reflected on responses to the following questions: “what is the potential role of AI in the research process?” and “to what extent (to whom and in what ways) are the implications of AI in the research workplace positive or negative?”
We found that our interviewees identified positive and negative consequences for research and researchers with respect to collective and individual use.
AI is perceived as helpful with respect to information gathering and other narrow tasks, and in support of impact and interdisciplinarity, but it can also be seen to threaten the traditional role of the academic and to pose issues for particular groups.
The most commonly reported use for AI was to help with narrow, individual problems: to help researchers reveal patterns, increase the speed and scale of data analysis, and form new hypotheses. Many felt that the advent of web searching and online journal repositories had made it easier to ‘keep up’ with a fast-moving research landscape.
I would say one area that it could possibly be useful is just streamlining the research process and helping to maybe – for me, it would be helpful taking care of the more tedious aspects of the research process, like maybe the references of a paper for instance, or just recommending additional relevant articles in a way that is more efficient than what is being done now.
Several participants noted that AI could benefit multidisciplinary research teams with regards to open innovation, public engagement, citizen science and impact. When considering the role of AI in research, participants regularly referred to the idea that AI could act as a bridge beyond the university context and that boundaries could be expanded through greater participation in science. If used to support researchers to develop links with others and to build impact, AI could highlight the University’s civic role. As one participant described it “communicating the potential benefits of our research to the wider world. AI can help us do that” (Arts and Humanities). One participant thought of AI as a kind of potential co-creation tool:
… There is a co-creation between a human author and AI that then creates a new type of story and what would that be and, more importantly, what are the conditions for this to be a real co-creation and not being one controlling the other or vice versa.
Our research suggests that AI has potential for boosting and supporting interdisciplinarity. Overwhelmingly, participants saw real potential for AI in bridging disciplines, which could also reorientate research priorities. For instance, AI can ‘match-make’ people across disciplines.
Some abolition of disciplinary boundaries, some significant massive participation of subjects of study in the design and carrying out of research that is affecting their lives and hopefully pretty soon a reorientation of research priorities to better match what people are generally interested in.
Speeding up to keep up
However, using AI as a way of ‘speeding up to keep up’ with bureaucratic and metricised processes may proliferate the negative aspects of academic culture; the expansion of AI in research should assist, not replace, human creativity and judgment.
It was felt that AI had implications for the future of the academic role. On the one hand, AI would lead to new forms of labour, “a profoundly modern job... and in fact, a new economy”; on the other, human knowledge will still be required alongside AI:
There will still be people who are studying urban planning, even though there are urban planning AI – there will be people doing that. If [the AI agents] are doing it better than us, fine – we will have scholars preserving the human knowledge and then pointing to why the AI knowledge is so much better. It just comes down to ego or not, in that case.
Despite relative confidence among our participants that AI will not replace established academics, AI is seen as potentially challenging more precarious groups, such as researchers in the humanities and early career researchers. The potential of AI to alleviate work pressure comes with an associated paradox, in which personal gain requires a sacrifice of privacy through the gathering of large amounts of data on individuals:
You could imagine a university of the future where there would be much, much, much more data on people and much more understanding of how they learn… I have mixed feelings about it.
When imagining a use for AI in the context of any domain of (human) work, there were concerns about the loss of jobs. In particular, this was seen to threaten certain groups, including early career researchers and researchers from the arts and humanities:
We’ve seen the hiring of fewer and fewer staff in terms of research within the humanities.
A need for meta-research on AI in research
“The profound impact of AI… has not yet been realised”, and AI “can open up new avenues of scientific study and exploration” (UKRI, 2021)
Our preliminary research strongly supports this view by providing insights from leading AI Futures scholars. Through this, we have a better understanding of the questions to be asked and the actions to be taken to achieve outcomes that balance research quality and researchers’ quality of life against the demands of impact, measurement and added bureaucracy.
The effects of AI tools in scientific research are already profound. However, research into the future role of AI in the research process needs to go much further to address these challenges, and ask fundamental questions about how AI might assist in providing new tools able to question the values and principles driving institutions and research processes.
We argue that to do this, an explicit movement of meta-research on the role of AI in research should be carried out, considering the effects on research and researcher creativity. Anticipatory approaches and the engagement of diverse and critical voices at policy level and across disciplines should also be considered.
This piece is based upon our article in AI & Society, written as part of the AI Futures project at Digital Creativity Labs.
About the authors
Dr Jenn Chubb is Research Fellow at the University of York, now with XR Stories. She is interested in all things ethics, science and stories.
Peter Cowling is Professor of AI at Queen Mary, University of London. After decades of researching AI technology, he is now trying to understand what AI technology research is for.
Dr Darren Reed is a Senior Lecturer in Sociology at The University of York.