GovAI Annual Report 2019

The governance of AI is in my view the most important global issue of the coming decades. 2019 saw many developments in AI governance. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth.

This report provides a summary of our activities in 2019.

We now have a core team of 7 researchers and a network of 16 research affiliates and collaborators. This year we published a major report, nine academic publications, and four op-eds, and celebrated our first DPhil (Oxfordese for PhD) thesis and graduate! Our work covered many topics:

  • US public opinion about AI
  • The offense-defense balance of AI and scientific publishing
  • Export controls
  • AI standards
  • The technology life-cycle and domestic politics of AI
  • A proposal for how to distribute the long-term benefits from AI for the common good
  • The social implications of increased data efficiency
  • And others…

This, however, only scratches the surface of the problem, and we are excited to grow our team and our ambitions so that we can make further progress. We are fortunate in this respect to have received financial support from, among others, the Future of Life Institute, the Ethics and Governance of AI Initiative, and especially the Open Philanthropy Project. We are also fortunate to be part of the Future of Humanity Institute, which is dense with good ideas, brilliant people, and a truly long-term perspective. The University of Oxford has similarly been a rich intellectual environment, with increasingly productive connections with the Department of Politics and International Relations, the Department of Computer Science, and the new AI Ethics Institute.

As part of our growth ambitions for the field and GovAI, we are always looking to help new talent get into the field of AI governance, be that through our Governance of AI Fellowship, hiring researchers, finding collaborators, or hosting senior visitors. If you’re interested, visit www.governance.ai for updates on our latest opportunities, or consider reaching out to Markus Anderljung (markus.anderljung@philosophy.ox.ac.uk).

We look forward to seeing what we can all achieve in 2020.

Allan Dafoe
Director, Centre for the Governance of AI
Associate Professor and Senior Research Fellow
Future of Humanity Institute, University of Oxford

Research

Research from previous years is available here.

Major Reports and Academic Publications
  • US Public Opinion on Artificial Intelligence by Baobao Zhang and Allan Dafoe. In the report, we present the results from an extensive look at the American public’s attitudes toward AI and AI governance. We surveyed 2,000 Americans with the help of YouGov. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Featured in Bloomberg, Vox, Axios and the MIT Technology Review.
  • How Does the Offense-Defense Balance Scale? in Journal of Strategic Studies by Ben Garfinkel and Allan Dafoe. The article asks how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.
  • The Interests behind China’s Artificial Intelligence Dream by Jeffrey Ding in the edited volume “Artificial Intelligence, China, Russia and the Global Order”, published by Air University Press. This high-level overview of China’s AI dream places China’s AI strategy in the context of its past science and technology plans, outlines how AI development intersects with multiple areas of China’s national interests, and discusses the main barriers to China realizing its AI dream.
  • Jade Leung completed her DPhil thesis Who Will Govern Artificial Intelligence? Learning from the history of strategic politics in emerging technologies, which looks at how control over previous strategic general-purpose technologies – aerospace technology, biotechnology, and cryptography – changed over each technology’s lifecycle, and what this might teach us about how control over AI will shift over time.
  • The Vulnerable World Hypothesis in Global Policy by Nick Bostrom. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi‐anarchic default condition’. It was originally published as a working paper in 2018.

A number of our papers were accepted to the AAAI AIES conference (conference publication being a standard form of publishing in computer science), taking place in February 2020:

  • The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse of the Technology? by Toby Shevlane and Allan Dafoe. The existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favors defense, this article argues that the same cannot be assumed for AI research. It provides a theoretical framework for thinking about the offense-defense balance of scientific knowledge.
  • The Windfall Clause: Distributing the Benefits of AI for the Common Good by Cullen O’Keefe, Peter Cihon, Carrick Flynn, Ben Garfinkel, Jade Leung and Allan Dafoe. The Windfall Clause is a proposed mechanism by which AI developers make ex-ante commitments to distribute a substantial portion of their profits back to the global commons if they were to capture an extremely large share of the global economy by developing transformative AI.
  • U.S. Public Opinion on the Governance of Artificial Intelligence by Baobao Zhang and Allan Dafoe. In the paper, we present the results from an extensive survey of 2,000 Americans’ attitudes toward AI and AI governance. The results are available in full here.
  • Near term versus long term AI risk framings by Carina Prunkl and Jess Whittlestone (CSER/CFI). This article considers the extent to which there is a tension between focusing on near-term and long-term AI risks.
  • Should Artificial Intelligence Governance be Centralised? Design Lessons from History by Peter Cihon, Matthijs Maas and Luke Kemp (CSER). There is an urgent need for debate over how the international governance of artificial intelligence should be organised. Can it remain fragmented, or is there a need for a central international organisation? This paper draws on the history of other international regimes to identify the advantages and disadvantages of centralising AI governance.
  • Social and Governance Implications of Improved Data Efficiency by Aaron Tucker, Markus Anderljung, and Allan Dafoe. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency on, for example, market concentration, malicious use, privacy, and robustness.

Op-Eds & Other Public Work
  • Artificial Intelligence, Foresight, and the Offense-Defense Balance, War on the Rocks, by Ben Garfinkel and Allan Dafoe. AI may cause significant changes to the offense-defense balance in warfare. Changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities, and substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change. The article summarises How Does the Offense-Defense Balance Scale?, published by the same authors in the Journal of Strategic Studies.
  • Thinking about Risks from Artificial Intelligence: Accidents, Misuse and Structure, Lawfare by Remco Zwetsloot and Allan Dafoe. Dividing AI risks into misuse risks and accident risks has become a prevailing approach in the AI safety field. This piece argues that a third, perhaps more important, source of risk should be considered: structural risks. AI could shift political, social and economic structures in a direction that puts pressure on decision-makers — even well-intentioned and competent ones — to make costly or risky choices. Conversely, existing political, social and economic structures are important causes of risks from AI, including risks that might look initially like straightforward cases of accidents or misuse.
  • Public Opinion Lessons for AI Regulation, a Brookings report by Baobao Zhang. An overwhelming majority of the American public believes that artificial intelligence (AI) should be carefully managed. Nevertheless, the public does not agree on the proper regulation of AI applications, as illustrated by the three case studies in this report: facial recognition technology used by law enforcement, algorithms used by social media platforms, and lethal autonomous weapons.
  • Export Controls in the Age of AI, War on the Rocks, by Jade Leung, Allan Dafoe, and Sophie-Charlotte Fischer. Some US policymakers have expressed interest in using export controls as a way to maintain a US lead in AI development. History, this piece argues, suggests that export controls, if not wielded carefully, are a poor tool for today’s emerging dual-use technologies such as AI. At best, they are one tool in the policymakers’ toolbox, and a niche one at that.
  • GovAI (primarily Peter Cihon) led a joint submission with the Center for Long-Term Cybersecurity (UC Berkeley), the Future of Life Institute, and the Leverhulme Centre for the Future of Intelligence (Cambridge) in response to the US government’s RFI on Federal Engagement in Artificial Intelligence Standards.
  • A Politically Neutral Hub for Basic AI Research by Sophie-Charlotte Fischer. This piece argues that a politically neutral hub for basic AI research, committed to the responsible, inclusive, and peaceful development and use of new technologies, should be set up.
  • Ben Garfinkel has been researching AI risk arguments, as exemplified in his Reinterpreting AI and Compute, a number of internal documents (many of which are shared with OPP), his EAG London talk, and an upcoming interview on the 80,000 Hours Podcast.
  • ChinAI Newsletter. Jeff Ding continues to produce the ChinAI newsletter, which now has over 6,000 subscribers.

Technical Reports Published on Our Website
  • Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission by Cullen O’Keefe. Much AI governance research focuses on how we can make agreements or commitments now that will have a positive impact during or after a transition to a world of advanced or transformative artificial intelligence. However, such a transition may produce significant turbulence, potentially rendering pre-transition agreements ineffectual or even harmful. This Technical Report proposes tools from legal theory for designing agreements where such turbulence is expected.
  • Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development by Peter Cihon. AI standards work is ongoing at ISO and IEEE, two leading standards bodies. But these efforts risk not addressing important policy objectives, such as fostering a culture of responsible deployment and the use of safety specifications in fundamental research. Furthermore, leading AI research organizations that share concerns about such policy objectives are conspicuously absent from ongoing standardization efforts. This Technical Report summarises ongoing efforts to produce standards for AI, discusses what their effects might be, and makes recommendations for the AI governance and strategy community.

Select Publications by Our Research Affiliates

Public Engagement

Many more of our public appearances (e.g. talks, podcasts, interviews) can be found here. Below is a subset:

Team and Growth

The team has grown substantially. In 2019, we welcomed Toby Shevlane as a Researcher, Ben Garfinkel and Remco Zwetsloot as DPhil scholars, Hiski Haukkala as a policy expert, Ulrike Franke and Brian Tse as Policy Affiliates, and Carina Prunkl, Max Daniel, and Andrew Trask as Research Affiliates. 2019 also saw the launch of our GovAI Fellowship, which received over 250 applications and welcomed 5 Fellows in the summer. We will continue the Fellowship in 2020, with a Spring and a Summer cohort.

We continue to receive many applications and expressions of interest from researchers across the world who are eager to join our team. In 2020, we plan to continue our GovAI Fellowship programme, engage with PhD researchers, particularly in Oxford, and hire additional researchers.
