Generative AI Risk Management
Meta’s Open Loop program is excited to have launched its first policy prototyping research program in the United States, focused on testing the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0 to ensure that it is clear, implementable, and effective at helping companies identify and manage risks arising from generative AI.
This program gives participating companies the opportunity to learn about NIST's AI RMF, and to understand how it can be applied to managing risks associated with developing and deploying generative AI systems. At the same time, the program will gather evidence on current practices and provide valuable insights and feedback to NIST, which can inform future iterations of the RMF.
PROGRAM REPORTS
This report shares insights and recommendations from companies that have analyzed and begun to operationalize NIST’s draft guidance on generative AI risk management — “The Generative AI Profile” (NIST AI 600-1). It highlights key areas where these organizations would like to see further action from NIST, such as defining the AI value chain in terms of actors, roles, and responsibilities, and ensuring that international standards and requirements remain flexible enough for companies to make decisions based on the context of their generative AI use.
This report presents the findings and recommendations of the first phase of the Open Loop US program on Generative AI Risk Management, launched in November 2023 in partnership with Accenture. The first phase, which ran from January to April 2024 and involved 40 companies, focused on two topics that are key to generative AI risk management and of particular interest to NIST: AI red-teaming and synthetic content risk mitigation.
Leverage collaborative policy prototyping methodologies (testing proposed, hypothetical, or real policy guidance within a structured program) to enable cohort members to apply and provide feedback on the NIST AI RMF, informing its future iterations as a practical tool for managing AI-related risks.
Inform the practical application of the NIST AI RMF among a diverse group of developers and users of Generative AI products and services, by unlocking insights, showcasing best practices and lessons learned, and by pinpointing gaps and opportunities.
Facilitate exchanges of ideas and solutions among AI companies, experts, and policymakers to drive the evolution of responsible and accountable AI practices.
The participating companies
A diverse set of AI companies joined the program, including AI startups, AI risk and assurance companies, and established multinational enterprises across various industries. Individual participants represented a wide range of expertise, from senior-level decision-makers to individuals involved in the operational aspects of safety, compliance, and technology development.
Meta’s Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies.
Through experimental governance methods, Meta’s Open Loop members co-create policy prototypes and test new or existing approaches to policy, guidance frameworks, regulations, and laws. These multi-stakeholder efforts improve the quality of rulemaking processes by ensuring that new guidance and regulation aimed at emerging technology are effective and implementable.
Open Loop has been running theme-specific programs to operationalize trustworthy AI across multiple verticals, such as Transparency and Explainability in Singapore and Mexico, and Human-Centered AI with an emphasis on stakeholder engagement in India. Beyond AI, we are also testing a playbook to promote the adoption of Privacy Enhancing Technologies in Brazil and Uruguay.
Meta’s Open Loop program has partnered with Accenture and will work closely with other prominent industry players and organizations in the US. Our collaborative efforts extend to experts from international organizations, NIST, civil society organizations, academia, and more, each of whom contributes to the program’s comprehensive knowledge base and holistic approach. Through these strategic partnerships, we aim to collectively drive the advancement of AI risk management and foster a well-rounded understanding of responsible AI practices.
Should you have any questions, please feel free to get in touch.