Response to the RFI Related to NIST's Assignments Under the Executive Order Concerning AI

The views expressed in this submission are those of the authors and do not represent the views of GovAI.

We welcome the opportunity to respond to the Request for Information (RFI) Related to NIST's Assignments Under Sections 4.1, 4.5, and 11 of the Executive Order Concerning AI. We offer the following submission for your consideration and look forward to future opportunities to provide additional input. Please note that our comments focus on “dual-use foundation models” as defined in Sec. 3(k) of E.O. 14110.


1. Developing Guidelines, Standards, and Best Practices for AI Safety and Security

NIST AI RMF companion resource for generative AI, Sec. 4.1(a)(i)(A)


In general, we recommend that NIST encourage developers of dual-use foundation models to adopt a range of practices, including:

  • Developing and publishing comprehensive safety policies that explain how they will avoid creating unacceptable risks.
  • Developing risk taxonomies and threat models of the most severe risks.
  • Conducting model evaluations and red-teaming exercises throughout the development lifecycle.
  • Proactively identifying specific evaluation results that—in the absence of further safeguards—would indicate that a model poses an unacceptable risk.
  • Producing (quantitative or semi-quantitative) risk estimates when making particularly high-stakes decisions, especially model release decisions.
  • Combining rules-based and risk-based approaches to decision-making (a minimal sketch of how these two approaches can be combined appears after this list).
  • Engaging in continuous post-deployment monitoring of models for signs of misuse, harm, and unexpected capabilities.
  • Reporting safety incidents to competent authorities.
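
To make the combination of rules-based and risk-based approaches concrete, the following is a minimal sketch (in Python) of how a developer might encode pre-registered evaluation thresholds alongside a coarse quantitative risk estimate. All benchmark names, scores, probabilities, and thresholds here are hypothetical illustrations, not recommended values or a prescribed methodology.

```python
# Minimal sketch: pre-registered capability thresholds (rules-based) plus a
# coarse expected-harm estimate (risk-based). All names and numbers below
# are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class EvalResult:
    name: str          # e.g., a misuse-relevant capability benchmark
    score: float       # model's measured score, in [0, 1]
    threshold: float   # "red line" agreed upon before the evaluation is run


def crosses_red_line(results: list[EvalResult]) -> bool:
    """Rules-based check: has any pre-registered threshold been crossed?"""
    return any(r.score >= r.threshold for r in results)


def expected_harm(p_attempt: float, p_success: float, harm: float) -> float:
    """Risk-based check: a coarse expected-harm estimate,
    E[harm] = P(misuse attempt) * P(success | attempt) * harm magnitude."""
    return p_attempt * p_success * harm


results = [
    EvalResult("bio-uplift-qa", score=0.12, threshold=0.50),
    EvalResult("cyber-offense-suite", score=0.55, threshold=0.50),
]

if crosses_red_line(results):
    print("Rules-based trigger: further safeguards needed before release.")
print(f"Illustrative expected-harm estimate: {expected_harm(0.01, 0.20, 1e6):.0f}")
```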

Guidance and benchmarks for evaluating and auditing AI capabilities, Sec. 4.1(a)(i)(C); and Guidelines for conducting AI red-teaming tests, Sec. 4.1(a)(ii)

  • Developers should not only conduct model evaluations and red-teaming tests, but also aim to advance the science of these methods (a simplified sketch of an automated red-teaming test follows this list).
  • Additionally, they should subject their models to external scrutiny, such as third-party model evaluations and red-teaming exercises.
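
As one illustration of what a reproducible, shareable red-teaming test can look like, the sketch below runs a fixed set of adversarial prompts against a model and tallies refusals. The `query_model` function, the refusal markers, and the prompt set are all hypothetical placeholders; real evaluations would require far more robust grading than keyword matching.

```python
# Minimal sketch of an automated red-teaming loop: run adversarial prompts
# against a model under test and log refusal vs. compliance. The model call
# and prompt set are hypothetical placeholders.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm unable")


def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under test."""
    return "I can't help with that."


def run_red_team(prompts: list[str]) -> dict[str, int]:
    """Tally refusals vs. (potential) compliance across a fixed prompt set."""
    outcomes = {"refused": 0, "complied": 0}
    for prompt in prompts:
        reply = query_model(prompt).lower()
        if any(marker in reply for marker in REFUSAL_MARKERS):
            outcomes["refused"] += 1
        else:
            outcomes["complied"] += 1  # flag these replies for human review
    return outcomes


adversarial_prompts = ["<harmful request 1>", "<harmful request 2>"]
print(run_red_team(adversarial_prompts))
```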

2. Reducing the Risk of Synthetic Content

Report on standards, tools, methods, and practices related to synthetic content, Sec. 4.5(a)

  • Developers should create and distribute tools and methods to identify (or otherwise address risks from) synthetic content; a simplified sketch of one statistical detection approach appears below.
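
One family of such tools is statistical watermarking. The sketch below shows only the detection side, under heavy simplifying assumptions: whitespace tokenization and a hash-based “green list” keyed on the previous token, loosely in the style of green-list watermarking schemes from the research literature. A complete scheme would also bias generation toward the green list at sampling time, which this sketch omits.

```python
# Minimal sketch of statistical watermark detection for generated text.
# The tokenizer (whitespace split), hash-based token partition, and the
# z-score threshold are all simplified, hypothetical choices.

import hashlib
import math


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign ~half of all tokens to a 'green list',
    keyed on the preceding token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def detect_watermark(text: str, z_threshold: float = 4.0) -> bool:
    """Flag text whose green-token fraction is implausibly high under the
    null hypothesis of unwatermarked text (green probability p = 0.5)."""
    tokens = text.split()
    if len(tokens) < 2:
        return False
    n = len(tokens) - 1  # number of (previous token, token) pairs
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    z = (greens - 0.5 * n) / math.sqrt(0.25 * n)  # normal approximation
    return z > z_threshold


print(detect_watermark("an example passage of generated text ..."))
```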

3. Advance Responsible Global Technical Standards for AI Development

Plan for global engagement on promoting and developing AI consensus standards, cooperation, and coordination, Sec. 11(b)


We recommend that NIST:

  • Engage with the growing number of AI Safety Institutes around the world.
  • Coordinate with European standard-setting processes.
  • Participate in international standard-setting processes.
