On Regulating Downstream AI Developers
Foundation models - models trained on broad data that can be adapted to a wide range of downstream tasks - can pose significant risks, ranging from intimate image abuse and cyberattacks to bioterrorism. To reduce these risks, policymakers are starting to impose obligations on the developers of these models. However, downstream developers - actors who fine-tune or otherwise modify foundation models - can create or amplify risks by improving a model's capabilities or compromising its safety features. This can render rules on upstream developers ineffective. One way to address this issue could be to impose direct obligations on downstream developers. However, since downstream developers are numerous, diverse, and rapidly growing in number, such direct regulation may be both practically challenging and stifling to innovation. A different approach would be to require upstream developers to mitigate downstream modification risks (e.g. by restricting what modifications can be made). Another approach would be to use alternative policy tools (e.g. clarifying how existing tort law applies to downstream developers or issuing voluntary guidance to help mitigate downstream modification risks). We expect that regulation requiring upstream developers to mitigate downstream modification risks will be necessary. Although further work is needed, regulation of downstream developers may also be warranted where they retain the ability to increase risk to an unacceptable level.