GovAI Annual Report 2020
A few words from the director:
In my view, the governance of AI will become one of the most important global issues. 2020 saw many continued developments in AI governance. It is heartening to see how rapidly this field continues to grow, and exciting to be part of that growth.
This report provides a summary of our activities in 2020.
We now have a core team of 9 researchers and a network of 21 affiliates and collaborators. We are excited to have welcomed Visiting Senior Researchers Joslyn Barnhart and Robert Trager. This year we published two major reports, 15 academic publications, an AI governance syllabus, and 5 op-eds/blog posts. Our work covered many topics:
- Theory of Impact for AI governance
- The Windfall Clause
- Cooperative AI
- Clarifying the logic of strategic assets
- National security and antitrust
- AI and corporate intellectual property strategies
- AI researcher responsibility and impact statements
- Historical economic growth trends
- AI and China
- Trustworthy AI development
- And more…
As I argued in AI Governance: Opportunity and Theory of Impact, we are highly uncertain about the technical and geopolitical nature of the problem, and so should acquire a diverse portfolio of expertise. Even so, our work covers only a small fraction of the problem space. We are excited about growing our team and have big ambitions for further progress. We would like to thank Open Philanthropy, the Future of Life Institute, and the European Research Council for their generous support. As part of the Future of Humanity Institute, we have been immersed in good ideas, brilliant people, and a truly long-term perspective. The University of Oxford, similarly, has been a rich intellectual environment, with increasingly productive connections to the Department of Politics and International Relations, the Department of Computer Science, and the new Ethics in AI Institute.
We are always looking to help new talent get into the field of AI governance, be that through our Governance of AI Fellowship (applications are expected to open in Spring 2021), hiring researchers, finding collaborators, or hosting senior visitors. If you are interested in working with us, visit www.governance.ai for updates on our latest opportunities, or consider reaching out to Markus Anderljung (markus.anderljung@philosophy.ox.ac.uk).
We look forward to seeing what we can all achieve in 2021.
Allan Dafoe
Director, Centre for the Governance of AI
Associate Professor and Senior Research Fellow
Future of Humanity Institute, University of Oxford
Research
You can find all our publications here. Our 2019 annual report is here; our 2018 report is here.
Major Reports and Academic Publications
- “Open Problems in Cooperative AI” (2020). Allan Dafoe, Edward Hughes (DeepMind), Yoram Bachrach (DeepMind), Teddy Collins (DeepMind & GovAI affiliate), Kevin R. McKee (DeepMind), Joel Z. Leibo (DeepMind), Kate Larson, and Thore Graepel (DeepMind). arXiv. (link)
Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at scales ranging from our daily routines—such as highway driving, scheduling meetings, and collaborative work—to our global challenges—such as arms control, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. As machines powered by artificial intelligence play an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. The authors see an opportunity for the field of artificial intelligence to focus explicit effort on this class of problems, which they term Cooperative AI. As part of this effort, we co-organized a NeurIPS workshop: www.cooperativeAI.com
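To make this class of problems concrete, here is a minimal sketch (our illustration using the classic Prisoner's Dilemma, not an example drawn from the paper) of the gap between individually rational play and joint welfare:

```python
# Toy cooperation problem: in a one-shot Prisoner's Dilemma, mutual defection
# is the unique Nash equilibrium, yet mutual cooperation yields strictly
# higher joint welfare. Payoffs are (row player, column player) for actions
# C (cooperate) and D (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(a1: str, a2: str) -> bool:
    """Check whether neither player can gain by unilaterally deviating."""
    u1, u2 = PAYOFFS[(a1, a2)]
    return (all(PAYOFFS[(d, a2)][0] <= u1 for d in "CD")
            and all(PAYOFFS[(a1, d)][1] <= u2 for d in "CD"))

for profile in PAYOFFS:
    u1, u2 = PAYOFFS[profile]
    print(profile, "joint welfare:", u1 + u2, "Nash:", is_nash(*profile))
# Only ("D", "D") is a Nash equilibrium, but ("C", "C") maximizes joint
# welfare (6 vs. 2). Equipping agents to reach and sustain ("C", "C") is
# an instance of the cooperation problems the paper describes.
```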
- “The Windfall Clause: Distributing the Benefits of AI for the Common Good” (2020). Cullen O’Keefe (OpenAI & GovAI affiliate), Peter Cihon (GitHub & GovAI affiliate), Carrick Flynn (CSET & GovAI affiliate), Ben Garfinkel, Jade Leung, and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (report) (article summary).
The Windfall Clause is a policy proposal for a mechanism by which AI developers commit ex ante to distributing a substantial share of their profits to the global commons, should they capture an extremely large part of the global economy by developing transformative AI. The project was run by GovAI and inspired the Partnership on AI to launch its Shared Prosperity Initiative.
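To illustrate how such a commitment might be structured, here is a toy sketch in Python. The report proposes a progressive "windfall function", but the thresholds and rates below are hypothetical, not the report's actual schedule.

```python
def windfall_obligation(profits: float, gross_world_product: float) -> float:
    """Toy progressive windfall function: the amount owed to the global
    commons rises with profits as a fraction of gross world product (GWP).
    Brackets and marginal rates are illustrative only."""
    # (threshold as a fraction of GWP, marginal rate applied above it)
    brackets = [(0.001, 0.01), (0.01, 0.20), (0.10, 0.50)]
    share = profits / gross_world_product
    owed = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        if share <= threshold:
            break
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else share
        owed += rate * (min(share, upper) - threshold) * gross_world_product
    return owed

# A firm earning $2tn against an (illustrative) $100tn GWP would owe 1% of
# profits in the 0.1%-1% bracket plus 20% of profits above the 1% threshold.
print(windfall_obligation(profits=2e12, gross_world_product=1e14))  # 2.09e11
```

The key design feature is that obligations are negligible at ordinary profit levels and bind only in genuine windfall scenarios, which is what makes the ex-ante commitment inexpensive to sign.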
- “The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse of the Technology?” (2020). Toby Shevlane and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
The existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this article argues that the same cannot be assumed for AI research. It provides a theoretical framework for thinking about the offense-defense balance of scientific knowledge.
- “U.S. Public Opinion on the Governance of Artificial Intelligence” (2020). Baobao Zhang and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
The paper presents results from a survey of 2,000 Americans' attitudes toward AI and AI governance. The full results were published in 2019 here.
- “Social and Governance Implications of Improved Data Efficiency” (2020). Aaron Tucker (Cornell University), Markus Anderljung, and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socioeconomic impact of increased data efficiency on, for example, market concentration, malicious use, privacy, and robustness.
- “Institutionalising Ethics in AI: Reflections on the NeurIPS Broader Impact Requirement” (Forthcoming). Carina Prunkl (Ethics in AI Institute & GovAI affiliate), Carolyn Ashurst, Markus Anderljung, Helena Webb (University of Oxford), Jan Leike (OpenAI), and Allan Dafoe. Nature Machine Intelligence.
Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this article, we reflect on a novel governance initiative by one of the world’s most prestigious AI conferences: NeurIPS.
- “Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter” (2020). Nathan Calvin (GovAI affiliate) and Jade Leung (GovAI affiliate). GovAI Working Paper. (link).
This working paper is a preliminary analysis of the legal rules, norms, and strategies governing AI-related intellectual property (IP). It analyzes the existing AI-related IP practices of select companies and governments, and provides some tentative predictions for how these strategies and dynamics may continue to evolve in the future.
- “How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents” (2020). Cullen O’Keefe (OpenAI & GovAI affiliate). GovAI Technical Report. (link).
Artificial Intelligence—like past general purpose technologies such as railways, the internet, and electricity—is likely to have significant effects on both national security and market structure. These market structure effects, as well as AI firms’ efforts to cooperate on AI safety and trustworthiness, may implicate antitrust in the coming decades. Meanwhile, as AI becomes increasingly seen as important to national security, such considerations may come to affect antitrust enforcement. By examining historical precedents, this paper sheds light on the possible interactions between traditional—that is, economic—antitrust considerations and national security in the United States.
- “The Logic of Strategic Assets” (Forthcoming). Jeffrey Ding and Allan Dafoe. Security Studies. (link).
This paper asks what makes an asset strategic, in the sense of warranting the attention of the highest levels of the state. Clarifying the logic of strategic assets could move policymakers away from especially unhelpful rivalrous industrial policies and illuminate the structural pressures that work against global economic liberalism. The paper applies this analysis to AI.
- “Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society” (2020). Carina Prunkl (Ethics in AI Institute & GovAI affiliate) and Jess Whittlestone (CFI). Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
This article considers the extent to which there is a tension between focusing on near-term and long-term AI risks.
- “Beyond Privacy Trade-offs with Structured Transparency” (2020). Andrew Trask (DeepMind & GovAI affiliate), Emma Bluemke (University of Oxford), Ben Garfinkel, Claudia Ghezzou Cuervas-Mons (Imperial College London), and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
Many socially valuable activities depend on sensitive information, such as medical research, public health policies, political coordination, and personalized digital services. This is often posed as an inherent privacy trade-off: we can benefit from data analysis or retain data privacy, but not both. Across several disciplines, a vast amount of effort has been directed toward overcoming this trade-off to enable productive uses of information without also enabling undesired misuse, a goal we term ‘structured transparency’. In this paper, we provide an overview of the frontier of research seeking to develop structured transparency. We offer a general theoretical framework and vocabulary, including characterizing the fundamental components — input privacy, output privacy, input verification, output verification, and flow governance — and fundamental problems of copying, bundling, and recursive oversight. We argue that these barriers are less fundamental than they often appear. We conclude with several illustrations of structured transparency — in open research, energy management, and credit scoring systems — and a discussion of the risks of misuse of these tools.
- “Public Policy and Superintelligent AI: A Vector Field Approach” (2020). Nick Bostrom, Allan Dafoe, and Carrick Flynn (CSET & GovAI affiliate). Ethics of Artificial Intelligence, Oxford University Press, ed. S. Matthew Liao. (link).
The chapter considers the speculative prospect of superintelligent AI and its normative implications for governance and global policy.
- “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims” (2020). Miles Brundage et al. arXiv. (link)
This report suggests various steps that different stakeholders in AI development can take to make it easier to verify claims about AI development, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. Implementing such mechanisms can help make progress on the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion. The mechanisms outlined in this report deal with questions that various parties involved in AI development might face. (Note: the work was led by researchers at OpenAI; of the report's 59 contributing authors, 3 were GovAI researchers and 6 were GovAI affiliates.)
Other Academic Publications
- “The Suffragist Peace” (2020). Joslyn N. Barnhart (UCSD), Robert F. Trager (UCLA), Elizabeth N. Saunders (Georgetown University) and Allan Dafoe. International Organization. (link)
Drawing on theory, a meta-analysis of survey experiments in international relations, and analysis of cross-national conflict data, the paper shows how features of women's preferences about the use of force translate into specific patterns of international conflict. When empowered by democratic institutions and suffrage, women's more pacific preferences generate a dyadic democratic peace (i.e., between democracies), as well as a monadic peace. The analysis supports the view that the enfranchisement of women is essential for the democratic peace. The authors summarised the results in Foreign Affairs.
- “Coercion and the Credibility of Assurances” (Forthcoming). Matthew Cebul (University of Michigan), Allan Dafoe, and Nuno Monteiro (Yale University). Journal of Politics. (link).
This paper offers a theoretical framework exploring the causes and consequences of assurance credibility, and provides empirical support for its claims through a nationally representative, scenario-based survey experiment that explores how US citizens respond to a hypothetical coercive dispute with China.
- “Coercion and Provocation” (Forthcoming). Allan Dafoe, Sophia Hatz (Uppsala University), and Baobao Zhang. The Journal of Conflict Resolution. (link).
In this paper, the authors review instances of apparent provocation in interstate relations and offer a theory based on the logic of reputation and honor. Using survey experiments, they systematically evaluate whether provocation exists and what may account for it, employing design-based causal inference techniques to test their key hypotheses.
- “The biosecurity benefits of genetic engineering attribution” (2020). Gregory Lewis … Jade Leung (GovAI affiliate), Allan Dafoe, et al. Nature Communications. (link).
A key security challenge in biotechnology involves attribution: determining, in the wake of a human-caused biological event, who was responsible. The article discusses a technique that could be developed into a powerful forensic tool to aid the attribution of outbreaks caused by genetically engineered pathogens.
Opinion Articles, Blog Posts, and Other Public Work
- “AI Governance: Opportunity and Theory of Impact” (2020). Allan Dafoe. Effective Altruism Forum. (link).
This piece describes the opportunity and theory of impact of work in the AI governance space from a longtermist perspective. The piece won an Effective Altruism Forum Prize and was the most upvoted post of September.
- “A Guide to Writing the NeurIPS Impact Statement” (2020). Carolyn Ashurst (Ethics in AI Institute), Markus Anderljung, Carina Prunkl, Jan Leike (OpenAI), Yarin Gal (University of Oxford, Department of Computer Science), Toby Shevlane, and Allan Dafoe. Blog post on Medium. (link).
This guide was written in light of NeurIPS — the premier conference in machine learning — introducing a requirement that all paper submissions include a statement of the “potential broader impact of their work, including its ethical aspects and future societal consequences.” The post has garnered over 14,000 views, more than the approximately 12,000 abstract submissions received by the conference.
- “Does Economic History Point Toward a Singularity?” (2020). Ben Garfinkel. Effective Altruism Forum. (link).
Over the next several centuries, is the economic growth rate likely to remain steady, radically increase, or decline back toward zero? This piece investigates the claim that historical data suggests growth may increase dramatically. Specifically, it looks at the hyperbolic growth hypothesis: the claim that, from at least the start of the Neolithic Revolution up until the 20th century, the economic growth rate has tended to rise in proportion with the size of the global economy. The piece received the Effective Altruism Forum Prize for best post in September.
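To see why the hypothesis implies a "singularity", a minimal formalization helps (our gloss; the notation is not taken from the post itself):

```latex
% Hyperbolic growth hypothesis: the growth rate is proportional to the
% size of the global economy Y, for some constant c > 0.
\frac{\dot{Y}}{Y} = cY
\quad\Longrightarrow\quad
\dot{Y} = cY^{2}
\quad\Longrightarrow\quad
Y(t) = \frac{Y_{0}}{1 - cY_{0}t},
% which diverges as t approaches t* = 1/(c Y_0): under sustained
% proportionality, output reaches an infinite value in finite time
% rather than growing exponentially forever.
```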
- “Ben Garfinkel on scrutinising classic AI risk arguments” (2020). Ben Garfinkel. 80,000 hours podcast. (link)
Longtermist arguments for working on AI risk originally focussed on catastrophic accidents. Ben Garfinkel makes the case that these arguments often rely on imprecisely defined abstractions (e.g. “optimisation power”, “goals”) and toy thought experiments, and that it is not clear these constitute a strong source of evidence. Nevertheless, working in AI governance or AI safety still seems very valuable.
- “China, its AI dream, and what we get wrong about both.” (2020). Jeffrey Ding. 80,000 hours podcast. (link)
Jeffrey Ding discusses his paper “Deciphering China’s AI Dream” and other topics, including analogies for thinking about AI influence, cultural clichés in the West and China, coordination with China on AI, and private companies vs. government research.
- Talk: “AI Social Responsibility” (2020). Allan Dafoe. AI Summit London. (link)
AI Social Responsibility is a framework for collectively committing to make responsible decisions in AI development. In this talk, Allan Dafoe outlines that framework and explains its relevance to current AI governance initiatives.
- “Consultation on the European Commission’s White Paper on Artificial Intelligence: a European approach to excellence and trust” (2020). Stefan Torges, Markus Anderljung, and the GovAI team. Submission of the Centre for the Governance of AI. (link)
The submission presents GovAI’s recommendations regarding the European Union’s AI strategy. Analysis and recommendations focus on the proposed “ecosystem of trust” and associated international efforts. We believe these measures can mitigate the risks that this technology poses to the safety and rights of Europeans.
- “Contact tracing apps can help stop coronavirus. But they can hurt privacy.” (2020). Toby Shevlane, Ben Garfinkel and Allan Dafoe. Washington Post. (link)
Contact tracing apps have reignited debates over the trade-off between privacy and security. Trade-offs can be minimised through technologies which allow “structured transparency”. These achieve both high levels of privacy and effectiveness through the careful design of information architectures — the social and technical arrangements that determine who can see what, when and how.
- “Women’s Suffrage and the Democratic Peace” (2020). Joslyn Barnhart, Robert Trager, Elizabeth Saunders (Georgetown), and Allan Dafoe. Foreign Affairs. (link)
Presenting the ideas from “The Suffragist Peace”.
- “Artificial Intelligence and China” (2020). Jeffrey Ding, Sophie-Charlotte Fischer, Brian Tse, and Chris Byrd. GovAI Syllabus. (link).
In recent years, China’s ambitious development of artificial intelligence (AI) has attracted much attention in policymaking and academic circles. This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development.
- “The Rapid Growth of the AI Governance Field” (2020). Allan Dafoe and Markus Anderljung. AI Governance in 2019 — A Year in Review: Observations from 50 Global Experts, ed. Li Hui & Brian Tse. (link)
Fifty experts from 44 institutions contributed to this report, including AI scientists, academic researchers, industry representatives, policy experts, and others.
- “The Case for Privacy Optimism” (2020). Ben Garfinkel. Blog post. (link).
This blog post argues that social privacy — from the prying eyes of e.g. family, friends, and neighbours — has increased over time, and may continue to do so in the future. While institutional privacy has decreased, this decline may be counteracted by the rise in social privacy.
Events
Webinars
- “COVID-19 and the Economics of AI” with Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz. Allan Dafoe and Anton Korinek were discussants.
- “The Design of Facebook’s Oversight Board” with Noah Feldman. Sophie-Charlotte Fischer and Gillian Hadfield were discussants.
- “Economic Growth in the Long Run: Artificial Intelligence Explosion or an Empty Planet?” with Ben Jones and Chad Jones. Anton Korinek, Rachael Ngai, and Phil Trammell were discussants.
- “Censorship’s Implications for Artificial Intelligence” with Margaret Roberts. Jeffrey Ding and Allan Dafoe were discussants.
- “Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics” with Carles Boix and Sir Tim Besley. Allan Dafoe was a discussant.
Workshops co-organized by GovAI
- Cooperative AI Workshop at the NeurIPS 2020 conference. Speakers included: James D. Fearon (Stanford), Gillian Hadfield (University of Toronto), William Isaac (DeepMind), Sarit Kraus (Bar-Ilan University), Peter Stone (Learning Agents Research Group), Kate Larson (University of Waterloo), Natasha Jaques (Google Brain), Jeffrey S. Rosenschein (Hebrew University), Mike Wooldridge (University of Oxford), Allan Dafoe, and Thore Graepel (DeepMind).
- Navigating the Broader Impacts of AI Research Workshop at the NeurIPS 2020 conference. Speakers: Hanna Wallach (Microsoft), Sarah Brown (University of Rhode Island), Heather Douglas (Michigan State University), Iason Gabriel (DeepMind, NeurIPS Ethics Advisor), Brent Hecht (Northwestern University, Microsoft), Rosie Campbell (Partnership on AI), Anna Lauren Hoffmann (University of Washington), Nyalleng Moorosi (Google AI), Vinay Prabhu (UnifyID), Jake Metcalf (Data & Society), Sherry Stanley (Amazon Mechanical Turk), Deborah Raji (Mozilla), Logan Koepke (Upturn), Cathy O’Neil (O’Neil Risk Consulting & Algorithmic Auditing), Tawana Petty (Stanford University), Cynthia Rudin (Duke University), Shawn Bushway (University at Albany), Miles Brundage (OpenAI & GovAI affiliate), Bryan McCann (formerly Salesforce), Colin Raffel (University of North Carolina at Chapel Hill, Google Brain), Natalie Schluter (Google Brain, IT University of Copenhagen), Zeerak Waseem (University of Sheffield), Ashley Casovan (AI Global), Timnit Gebru (Google), Shakir Mohamed (DeepMind), Aviv Ovadya (Thoughtful Technology Project), Solon Barocas (Microsoft), Josh Greenberg (Alfred P. Sloan Foundation), Liesbeth Venema (Nature), Ben Zevenbergen (Google), Lilly Irani (UC San Diego).
- We hosted a CNAS-FHI Workshop on AI and International Stability in January.
Selected publications by research affiliates
- “Economic Growth under Transformative AI: A guide to the vast range of possibilities for output growth, wages, and the labor share” (2021). Philip Trammell (GPI) and Anton Korinek (UVA and GovAI affiliate). Global Priorities Institute Working Paper. (link)
- “Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy” (2020). Tom Farrand, Fatemehsadat Mireshghallah (UCSD), Sahib Singh (Ford), Andrew Trask (DeepMind & GovAI affiliate). Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice. (link)
- “COVID-19 Infection Externalities: Trading Off Lives vs. Livelihoods” (2020). Zachary A. Bethune (University of Virginia) and Anton Korinek (University of Virginia & GovAI affiliate). NBER Working Paper. (link)
- “Nonpolar Europe? Examining the causes and drivers behind the decline of ordering agents in Europe” (2020). Hiski Haukkala (University of Tampere & GovAI affiliate). International Politics. (link)
- “All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation” (2020). Sarah E. Kreps (Cornell), Miles McCain (Stanford), and Miles Brundage (OpenAI & GovAI affiliate). SSRN. (link)
- “Messier than Oil: Assessing Data Advantage in Military AI” (2020). Husanjot Chahal (CSET), Ryan Fedasiuk (CSET), and Carrick Flynn (CSET & GovAI affiliate). CSET Issue Brief. (link)
- “The Chipmakers: U.S. Strengths and Priorities for the High-End Semiconductor Workforce” (2020). Will Hunt (CSET) and Remco Zwetsloot (CSET & GovAI affiliate). CSET Issue Brief. (link)
- “Antitrust-Compliant AI Industry Self-Regulation” (2020). Cullen O’Keefe (OpenAI & GovAI affiliate). Working Paper. (link)
- “Have Your Data and Use It Too: A Federal Initiative for Protecting Privacy while Advancing AI.” (2020). Roxanne Heston (CSET) and Helen Toner (CSET & GovAI affiliate). Day One Project. (link)
- “Americans’ Perceptions of Privacy and Surveillance in the COVID-19 Pandemic.” (2020). Baobao Zhang (Cornell & GovAI affiliate), Sarah Kreps (Cornell), Nina McMurry (WZB Berlin Social Science Center), and R. Miles McCain (Stanford University). PLoS ONE. Replication files. Coverage in Bloomberg and IEEE Spectrum; shared with the World Health Organization. (link)
Team and Growth
Our team has grown substantially. In 2020 we welcomed Robert Trager and Joslyn Barnhart as Visiting Senior Research Fellows and Eoghan Stafford as a Visiting Researcher. We ran another round of the GovAI Fellowship and welcomed 7 Fellows, with an acceptance rate of around 5%. Our management team also evolved, with Alexis Carlier joining as a Project Manager following Jade Leung’s departure.
We continue to receive many applications and expressions of interest from researchers around the world who are eager to join our team. In 2021, we plan to continue our GovAI Fellowship programme, engage with PhD researchers (primarily in Oxford), and hire additional researchers.