AI has the potential to be a radically transformative technology. Continued progress could bring profoundly important benefits, including major scientific advances and reductions in illness and poverty. However, this progress could also bring substantial risks.
Governments, technology companies, and other key institutions are facing increasingly difficult decisions about how to respond to the challenges posed by AI. We enable these institutions to make better decisions by producing relevant research and guidance. We help new researchers and practitioners develop the skills and expertise they need to enter the field by running fellowships and visitor programs. We also strengthen connections across the field by hosting educational events, with the goal of building a thriving global AI governance community.
You can read more about our mission and how we pursue it in our most recent annual report.
The central focus of our research is the threats that general-purpose AI systems may pose to security. We seek to understand the risks these systems present today, while also looking ahead to the more extreme risks they could pose in the future.
Although our central focus is on threats to security, we also support research on a broader array of risks and opportunities from AI. These include unemployment, inequality, loss of privacy, and unaccountable decision-making by companies. We believe that the governance solutions to many of the risks AI poses are intertwined.
Ultimately, most of our research aims to shed light on the following questions:
Responsible Development: How can general-purpose AI developers make responsible development and deployment decisions?
Regulation: How can governments use their regulatory toolboxes to ensure that AI developers and users behave responsibly?
International Governance: What role can international coordination play in reducing risks from AI?
Compute Governance: How can public and private decisions about access to compute shape AI outcomes?
Our researchers have provided knowledge and assistance to decision makers in government, industry, and civil society. Our alumni have gone on to policy roles in government; top AI labs, including DeepMind, OpenAI, and Anthropic; and think-tanks such as the Center for Security and Emerging Technology and RAND. Our initial research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our research developing the framework of “cooperative AI” led to the creation of a $15 million philanthropic foundation. We have made prominent contributions to the ongoing public debate over the risks that advanced AI may pose.
Our researchers have published in leading journals and conferences, including International Organization, NeurIPS, and Science. We have published op-eds in venues such as The Washington Post, War on the Rocks, and Lawfare. Our work has also been covered by publications such as The New York Times, MIT Technology Review, and the BBC.
The Centre for the Governance of AI (GovAI) was founded in 2018, with Allan Dafoe, then an Associate Professor in the International Politics of AI at the University of Oxford, as its Founding Director. GovAI was initially a part of Oxford’s Future of Humanity Institute, before becoming an independent non-profit in 2021. GovAI succeeded two earlier AI governance research groups: the Oxford-based Governance of AI Program (founded in 2017) and the Yale-based Global Politics of AI Research Group (founded in 2016).
GovAI is now led by Ben Garfinkel. Our team and affiliate community possess expertise in a wide variety of domains, including AI regulation, responsible AI development practices, compute governance, AI lab corporate governance, US-China relations, and AI progress forecasting. Our Board includes representatives from academia, philanthropy, and the policy community. Read more about our governance structure and approach to conflicts of interest here.