Announcing the GovAI Policy Team
The AI governance space needs more rigorous work on what influential actors (e.g. governments and AI labs) should do in the next few years to prepare the world for advanced AI.
We’re setting up a Policy Team at the Centre for the Governance of AI (GovAI) to help address this gap. The team will primarily focus on AI policy development from a long-run perspective. It will also spend some time advising on and advocating for recommendations, though we expect to lean heavily on other actors for that work. Our work will be most relevant to the governments of the US, UK, and EU, as well as AI labs.
We plan to focus on a handful of bets at a time. Initially, we are likely to pursue:
- Compute governance: Is compute a particularly useful governance node for AI? If so, how can this tool be used to meet various AI governance goals? Potential goals include monitoring capabilities, restricting access to them, and identifying high-risk systems so that they can be subjected to significant scrutiny.
- Corporate governance: What kinds of corporate governance measures should frontier labs adopt? Questions include: What can we learn from other industries to improve risk management practices? How can the board of directors most effectively oversee management? How should ethics boards be designed?
- AI regulation: What present-day AI regulation would be most helpful for managing risks from advanced AI systems? Example questions include: Should foundation models be a regulatory target? What features of AI systems should be mandated by AI regulation? How can we help create more adaptive and expert regulatory ecosystems?
We’ll try several approaches to AI policy development, such as:
- Back-chaining from desirable outcomes to concrete policy recommendations (e.g. how can we increase the chance of effective international treaties on AI in the future?);
- Considering what should be done today to prepare for some particular event (e.g. the US government makes an Apollo Program-level investment in AI);
- Articulating and evaluating intermediate policy goals (e.g. “ensure the world’s most powerful AI models receive external scrutiny by experts without causing diffusion of capabilities”);
- Analyzing what can and should be done with specific governance levers (e.g. the three bets outlined above);
- Evaluating existing policy recommendations (e.g. increasing high-skilled immigration to the US and UK);
- Providing concrete advice to decision-makers (e.g. giving input on the design of the US National AI Research Resource).
Over time, we plan to evaluate which bets and approaches are most fruitful and refine our focus accordingly.
The team currently consists of Jonas Schuett (specialisation: corporate governance), Lennart Heim (specialisation: compute governance), and myself (Markus Anderljung, team lead). We’ll also collaborate with the rest of GovAI and people at other organisations.
We’re looking to grow the team. We will be hiring Research Scholars on the Policy Track on a regular basis. We’re also planning to work with people through the GovAI 3-month Fellowship, and we are likely to open applications for Research Fellows in the near future (you can submit expressions of interest now). We’re happy for new staff to work out of Oxford (where most of GovAI is based), the Bay Area (where I am based), or remotely.
If you’d like to learn more, feel free to leave a comment below or reach out to me at markus.anderljung@governance.ai.