
Who's who in AI Futures research? by Jenn Chubb
As part of the AI Futures project, I'm investigating the impact of AI on the future of humanity, ultimately aiming to find ways to achieve desirable outcomes.
In this blog piece, I provide a snapshot of the key institutes whose work is focused on the future of AI. Watch this space for my updates as DC Labs joins the conversation about the future of AI.
Figure 1. A non-exhaustive depiction of centres and institutes researching AI Futures.
Institute for Ethics in AI, University of Oxford, UK
Perhaps the most prominent in the UK media lately has been news of an unprecedented investment in the humanities at the University of Oxford. With a planned opening in 2024, the Schwarzman Centre will house the new Institute for Ethics in AI, which will lead the study of the ethical implications of artificial intelligence and other new computing technologies. The new institute brings together the different humanities faculties, leading philosophers, scientists, and experts and users of AI in business, academia and government. The institute aims to promote AI ethics, hold seminar series and house a dedicated space for research of this nature. An introductory video for the new institute is posted here.
Oxford has long been known for its work in this area through the Future of Humanity Institute and the Oxford Martin School's programme on the impacts of future technology, both directed by Nick Bostrom.
Leverhulme Centre for the Future of Intelligence, UK
“Our vision is to build an interdisciplinary community of researchers, with strong links to technologies and the policy world, and a clear practical goal – to work together to ensure that we humans make the best of the opportunities of artificial intelligence, as it develops over the coming decades.”
The Leverhulme Centre for the Future of Intelligence is a collaboration between the University of Cambridge, the University of Oxford, Imperial College London and the University of California, Berkeley, funded by the Leverhulme Trust. It aims to bring together “the best of human intelligence so that we can make the most of machine intelligence”. This large, global, interdisciplinary group is working on the impact of AI, considering what is real and what is myth, the role of trust in society, and how to avoid the misuse and irresponsible use of AI. The group asks key questions such as what it means to be human and how to avoid problems of bias and gendered AI, and it is keen to think in a creative and diverse way about the positive ways AI can influence humanity for good. The Centre runs a series of programmes on the nature and impact of AI:
- Futures and Responsibility – understanding how the long-term development of AI can be made safe and beneficial to humanity, through projects on responsible innovation and policy, autonomous weapons and regulation, horizon scanning and road-mapping, and value alignment to prevent systems from acting in ways contrary to human values. The Centre, with the Ada Lovelace Institute, recently published Ethical and societal implications of algorithms, data and artificial intelligence: a roadmap for research.
- Trust and Society – examining the impact of AI systems on society today, with projects investigating trust and transparency, politics and democracy, and faith and AI.
- Kinds of Intelligence – exploring the nature of intelligence by integrating the study of machines, humans and other animals. Projects explore Augmented Intelligence, Consciousness and Intelligence, Creative Intelligence and the Animal-AI Olympics, investigating the possibilities for bi-directional knowledge transfer between biological and artificial intelligence.
- Narratives and Justice – understanding the cultural contexts shaping how AI is perceived and developed, and the consequences this has for diversity, cognitive justice and social justice. Projects include AI Narratives, Global AI Narratives, Decolonising AI, History of AI, and AI and Gender.
- Philosophy and the Ethics of AI – unpicking the ethical challenges of AI using moral philosophy, philosophy of science and social science. Projects include Agents and Persons, Medical Black Boxes and AI Explainability, the Science of Intelligence and its Values, the Ethics of Augmentation, and the ethical and societal implications of algorithms, data and AI.
Ada Lovelace Institute, UK
“The Ada Lovelace Institute is an independent research and deliberative body with a mission to ensure data and AI work for people and society. Ada will promote informed public understanding of the impact of AI and data-driven technologies on different groups in society.”
The Ada Lovelace Institute, established by the Nuffield Foundation in 2018 in collaboration with the Alan Turing Institute, the Royal Society, the British Academy, the Royal Statistical Society, the Wellcome Trust, the Omidyar Network for Citizens and Governance, techUK and the Nuffield Council on Bioethics, aims to conduct rigorous research and contribute to the debate on how data and AI affect humanity. It brings together voices on the ethics of data and AI and aims to inform good practice in their use. Its latest news can be found here.
The Alan Turing Institute, UK
Another institute whose work is focused on this area is the Alan Turing Institute, the UK's national institute for data science and AI. The Institute operates as a hub for collaborative research, both pure and applied, across disciplines and universities, and trains the next generation of data science and AI leaders. The Turing is leading two flagship areas of research: the first undertakes and applies artificial intelligence research with the goal of transforming four key areas of science, industry and government; the second is a pioneering collaboration with the arts and humanities community that uses data science and artificial intelligence to analyse the human impact of the industrial revolution. These are AI and data science for science, engineering, health and government, and Living with Machines, a five-year project examining the history of the industrial revolution using data-driven approaches. Other current programmes include: AI; Data Science at Scale; Data Science for Science; Data-Centric Engineering; Defence and Security; Finance and Economics; Health and Medical Sciences; Public Policy; Research Engineering; and Urban Analytics. The Institute received a £48 million government funding boost for new cutting-edge data science and AI research. You can follow the Institute here.
Edinburgh Futures Institute, The University of Edinburgh, UK
The Edinburgh Futures Institute (EFI), at the University of Edinburgh, was set up to respond to the challenges posed by the rise of AI and big data in society. Like the other institutes already described, the EFI aims to spark new connections both inside and outside the academic world through interdisciplinary research, teaching and engagement, acting as a vehicle for assembling experts to tackle the pressing issues raised by AI and big data from a range of perspectives. In addition to the research conducted at the institute, the EFI aims to develop capacity through its graduates. Most relevant here is its research centre AI-X: Shaping our AI Future, a multidisciplinary and culturally inclusive centre looking at the ways AI could develop and impact the world. AI-X brings together thinkers and practitioners with those seeking to understand and shape AI's influence towards positive outcomes for humanity; it identifies emerging and foreseeable challenges in the application of AI and, through international cooperation and collaboration, works to ensure we shape a desirable and inclusive global future. This week the University of Edinburgh announced the appointment of Professor Shannon Vallor from Santa Clara University as chair in the ethics of data and AI at the EFI, marking the university's commitment to establishing itself as a leader in harnessing developments in data and AI to benefit society.
According to its website, the main objectives of AI-X are to conduct world-leading research in areas of AI including responsible innovation, reducing harm, and the promotion of data justice and human autonomy; to develop education about data and AI; and to engage in knowledge exchange. AI-X is multidisciplinary, focusing on AI today but also on what's on the horizon. Some of the challenges listed on its website include:
- How do we sustain a culture of critical ethical awareness?
- How do we surface and address issues of bias in AI?
- How do we build a global consensus on the development path for AI?
- What are the acceptable limits of AI-driven behaviour modification?
- How do we understand AI-driven decision-making?
In addition, the EFI hosts events, for instance the forthcoming lecture ‘Fears and Dreams of Intelligent Machines’, and runs many other initiatives: Futures, FinTech, Values and Society, Controversies in the Data Society, and the Utopia Lab. Among its activities, the EFI is leading work in experiential AI, which aims to support the creation of artistic work using machine learning. The University also links Creative Informatics, an ambitious research programme which aims to fuse the creative industries with research, to the EFI. Other relevant activities include work with the Centre for Future Infrastructure, aimed at bringing together stakeholders from the university, industry and government.
Engineering and Physical Sciences Research Council (EPSRC)
“AI is likely to become an increasingly dominant feature of our world and understanding the future of AI and its impact on society are critically important research areas.”
EPSRC, another major UK research funder, supports work on Artificial Intelligence Technologies and focuses much of its funding on maximising the opportunities arising from the growing interest in AI. This is in no small part driven by the Government's Industrial Strategy, notably via the National Productivity Investment Fund (NPIF); AI and the data economy is one of the Grand Challenges of the Industrial Strategy. EPSRC encourages co-creation with other disciplines, including the humanities and social sciences, to understand how AI can be used safely. Grants on the Web details EPSRC's support for AI by research area.
Economic and Social Research Council (ESRC)
In 2016 ESRC commissioned an interdisciplinary, multi-institutional research team to undertake a scoping review encompassing the social, cultural, economic, political, psychological and other effects of digitalisation. Led by Professor Simeon Yates of the University of Liverpool, the successful research team represented 16 universities across the UK, EU, USA and Singapore, providing expertise from a range of arts, engineering, social science and science fields.
The findings and recommendations will underpin and inform future areas of ESRC interest, including the progression of the ESRC agenda around AI, Automation and Robotics – for example, its call on Ethics and Risk.
Further afield, several key organisations around the world focus their work on AI and support research into the opportunity space of AI futures:
Future of Life Institute (FLI)
One such example is the Future of Life Institute, which aims to maximise the social benefit of AI through dialogue with leading global experts and through robust interdisciplinary research. The Institute has published a range of open letters concerning the social benefits of AI, on subjects including ethics, the digital economy and autonomous weapons, and its research priorities for robust and beneficial AI can be found here. Its research grant funding is available for those wishing to investigate ways of making AI systems robust and beneficial. The website is a dynamic source of quality information on how to keep technology beneficial and safe for humanity, bringing together expertise and showcasing case studies, blog posts, talks, books, contacts, essays, media articles and scholarly works.
DeepMind
Perhaps one of the leading global communities looking at AI and its impact is DeepMind (founded in 2010) – scientists, engineers and machine learning experts working together to advance and research AI. Interested in boosting public benefit, DeepMind tackles these challenges with safety and ethics at the core of its focus. The company is well known for its work in games, where researchers can test AI; its legendary AlphaGo program was the first to beat a professional player at Go. In 2014 DeepMind joined Google, and its research is published in journals such as Nature and Science. As it has grown, DeepMind has drawn together many disciplines to work on issues concerning AI, with teams in Engineering, Research, Science, Ethics and Society, DeepMind for Google, and Operations. The group focused on ethics and society is perhaps most closely aligned with our project, which focuses on the impacts of AI. DeepMind recently partnered with the Royal Society on a public lecture series; created the Forum for Ethical AI, a public engagement programme discussing the use of automated decision-making tools; and, in partnership with Princeton University, ran workshops on how the criminal justice system uses AI.
The Partnership on AI
The Partnership on AI is another multi-stakeholder organisation conducting research to help society benefit in the face of rapid AI development. It brings together academics, organisations, companies and other groups to better understand the impact of AI. Its key thematic areas of interest include: safety-critical AI; fair, transparent and accountable AI; AI, labor and the economy; collaborations between people and AI systems; social and societal influences of AI; and AI and social good.
AI Now Institute, New York University
AI Now is an interdisciplinary research institute examining the social implications of artificial intelligence. The institute's work falls under four domains: rights and liberties; labor and automation; bias and inclusion; and safety and critical infrastructure. A list of its publications is here, and the AI Now Institute can be followed here.
The Stanford Institute for Human-Centered Artificial Intelligence
“Scholarly research is needed to measure and manage a host of critical issues, including the extent to which algorithms introduce, compound, or mitigate business risk or bias; a “responsibility gap” between decisions made by machines and people; the use of AI for surveillance, population control, and waging war; and the impact of AI on industry structure, labor markets, economic growth, and trade across nations. This research will inform engagement with industry, government, and civil society to beneficially guide AI’s development.”
Stanford HAI is an interdisciplinary, global hub for AI thinkers, learners, researchers, developers, builders and users from academia, government and industry, as well as leaders and policymakers who want to understand and leverage AI's impact and potential. One of its focus areas is Human Impact: studying and forecasting the human and social impact of AI, including research on correcting gender and ethnic bias in AI algorithms and on the impact of AI on perceptions of humanhood. The Institute's other focus areas are Augmenting Human Capabilities, designing AI applications that augment human capabilities, and Intelligence, developing AI technologies inspired by the versatility and depth of human intelligence.
Humanity+
Formerly the World Transhumanist Association (WTA), Humanity+ is an international organisation advocating the ethical use of emerging technologies to enhance human capacities. Its aims are to support discussion and public awareness of emerging technologies, defend human rights, anticipate challenges, and encourage the development of emerging technologies that positively benefit society. The organisation runs conferences and groups; notable members include the philosophers Nick Bostrom and David Pearce.
The AI Institute
The AI Institute is a French company with offices in Paris and Boston; it runs a training programme in AI, along with workshops and business analysis.
Humane AI (EU)
Humane AI is developing human-centred AI design principles to base AI on European values and bring it closer to Europeans. Members of the consortium helped produce a research roadmap based in part on recommendations from existing EU policy and research papers, but with a distinctive focus on a new and original scientific approach to AI built on enhancing European expertise. They also published a report in 2018 looking at future directions in AI and how to ensure Europe remains at the forefront.
DC Labs joins the conversation
This is not an exhaustive list of those involved in AI Futures research and related work, but it is intended to give some indication of the global effort taking place to better understand the role of AI in society.
I will be blogging about the DC Labs AI Futures project as it progresses.