Tort Law and Frontier AI Governance

What role can the tort system play in ensuring responsible AI development before and after regulatory regimes are established?

The development and deployment of highly capable, general-purpose frontier AI systems—such as GPT-4, Gemini, Llama 3, Claude 3, and beyond—will likely produce major societal benefits across many fields. As these systems grow more powerful, however, they are also likely to pose serious risks to public welfare, individual rights, and national security. Fortunately, frontier AI companies can take precautionary measures to mitigate these risks, such as conducting evaluations for dangerous capabilities and installing safeguards against misuse. Several companies have started to employ such measures, and industry best practices for safety are emerging.

It would be unwise, however, to rely entirely on industry and corporate self-regulation to promote the safety and security of frontier AI systems. Some frontier AI companies might employ insufficiently rigorous precautions, or refrain from taking significant safety measures altogether. Other companies might fail to invest the time and resources necessary to keep their safety practices up to date with the rapid pace at which AI capabilities are advancing. Given competitive pressures, moreover, the irresponsible practices of one frontier AI company might have a contagion effect, weakening other companies’ incentives to proceed responsibly as well.

The legal system thus has an important role to play in ensuring that frontier AI companies take reasonable care when developing and deploying highly powerful AI systems...

Read the full article from Lawfare.
