Regulatory Supervision of Frontier AI Developers

Artificial Intelligence (AI) systems have the potential to cause, directly or indirectly, immense harm. They may also immensely improve human welfare. The regulatory challenge is to secure the innovation benefits of AI while avoiding the risks it may pose.

This article brings together two streams of scholarship, advancing a primary claim in each.

One concerns regulatory supervision: the practice whereby governments grant regulatory staff (supervisors) both information-gathering powers and significant discretion. Supervisors wield real power, at times with limited accountability or oversight, and yet they often wield it effectively. The first challenge I seek to meet is to understand in which domains, and under what circumstances, supervision is an appropriate policy response. My answer is that supervision should be used only where it is necessary: where no other approach to regulation can achieve the regulatory objectives, where the importance of those objectives outstrips the risks posed by granting discretion to public servants, and where supervision itself could achieve them.

The second stream concerns what to do about the risks posed by frontier AI systems. My overall claim in this stream is that regulatory supervision is warranted for frontier AI developers (such as OpenAI, Anthropic, Google DeepMind, and Meta), because no other form of regulation can achieve the Goldilocks ambition of good AI policy: facilitating innovation while avoiding serious risk.
