Welcome to Open Loop
Meta’s Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies for AI and other emerging technologies.
Through experimental governance methods, Meta’s Open Loop members co-create policy prototypes and test new or existing AI policies, regulations, laws, or voluntary frameworks. These multi-stakeholder efforts support rulemaking processes and improve the quality of guidance and regulations on emerging technologies, ensuring that they are effective and feasible to implement.
We have launched Open Loop Sprints: a pioneering series of global workshops designed to address the complexities and harness the opportunities of open source AI. These workshops bring together policymakers, industry leaders, academics, and civil society representatives from around the world to collaboratively shape effective and responsible AI policies.
Meta’s Open Loop launched its first policy prototyping research program in the United States in late 2023, focused on testing the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0. This program gave participating companies the opportunity to learn about NIST's AI RMF and the subsequent “Generative AI Profile” (NIST AI 600-1), and to understand how this guidance can be applied to developing and deploying generative AI systems. At the same time, the program gathered evidence on current practices and provided valuable insights and feedback to NIST, which can inform future iterations of the RMF and the Generative AI Profile.
Meta’s Open Loop program is excited to have launched its first policy prototyping program in the United Kingdom, focused on testing the Competition and Markets Authority (CMA) AI Principles. The program assesses whether the principles are clear, implementable, and effective at guiding the ongoing development and use of AI foundation models, while protecting competition and consumers.
The EU AI Act program is Open Loop's largest policy prototyping initiative to date, engaging over 60 participants from more than 50 companies developing AI and ML products. The program was structured into three pillars, each assessing and scrutinizing key articles of the EU proposal.
The Open Loop Brazil program was launched in tandem with a twin policy prototyping program in Uruguay, with the aim of guiding and enabling companies in Brazil to leverage and apply privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
The Open Loop India program was a collaborative effort between Meta, ArtEZ University of the Arts and The Dialogue, to develop a stakeholder engagement framework that operationalizes the principle of human-centered AI.
This program aimed to develop and test a risk assessment framework, called ADIA (Automated Decision Impact Assessment), for AI applications deployed in Europe.
The Open Loop Uruguay program was launched in tandem with a twin policy prototyping program in Brazil, with the aim of guiding and enabling companies in Uruguay to leverage and select privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
OUR METHODOLOGY
A fresh take on policy innovation
Meta’s Open Loop leverages policy prototyping and human-centered design methods to test existing governance frameworks for emerging technologies, or to co-develop and evaluate new ones. The aim is to provide evidence-based input that can improve existing governance frameworks and inform lawmaking processes.
This report shares insights and recommendations from companies that have analyzed and started to operationalize NIST's draft guidance on generative AI risk management, the “Generative AI Profile” (NIST AI 600-1). It highlights key areas where these organizations would like to see further action from NIST, such as defining the AI value chain in terms of actors, roles, and responsibilities, and ensuring that international standards and requirements are flexible enough to let companies make decisions based on the context of their generative AI use.
This report presents the findings and recommendations from Open Loop's Privacy-Enhancing Technologies (PETs) program in Brazil, conducted from September 2022 to April 2023. It explores the challenges and opportunities in PETs adoption among Brazilian entities, offering insights into their familiarity with PETs, implementation barriers, and the effectiveness of our policy prototype. The report concludes with actionable policy recommendations to foster responsible PETs adoption in Brazil.
This report details the outcomes of Open Loop's Privacy-Enhancing Technologies (PETs) program in Uruguay, which ran from September 2022 to April 2023. It examines the landscape of PETs adoption in Uruguay, highlighting entities' experiences with our policy prototype, their current understanding of PETs, and the obstacles they face in implementation. The report culminates in evidence-based policy recommendations aimed at promoting PETs adoption in Uruguay's unique context.
This report presents the findings and recommendations of the Open Loop India program.
This report presents the findings and recommendations of the first phase of the Open Loop US program on Generative AI Risk Management, launched in November 2023 in partnership with Accenture. The first phase, which ran from January to April 2024 and involved 40 companies, focused on two topics that are key to generative AI risk management and of particular interest to NIST: AI red-teaming and synthetic content risk mitigation.
MEXICO | AUGUST 2023
This report unveils the outcomes and strategic insights from the Open Loop Mexico program on AI Transparency and Explainability. The initiative focused on crafting and testing a Public Policy Prototype on the Transparency and Explainability of Artificial Intelligence Systems, including Automated Decision-Making (ADM) systems.
EUROPE | JUNE 2023
The report presents the findings and recommendations of the first part of the Open Loop’s policy prototyping program on the European Artificial Intelligence Act (AIA), which was rolled out in Europe from June 2022 to July 2022, in partnership with Estonia’s Ministries of Economic Affairs and Communications and Justice, and Malta’s Digital Innovation Authority (MDIA).
EUROPE | JUNE 2023
This report, which is part of the Open Loop Program on the EU Artificial Intelligence Act (AIA), presents the findings of a policy prototyping exercise on risk management and transparency in the AIA. The objective of this Deep Dive was to assess the clarity and feasibility of selected requirements from the perspective of participating AI companies.
EUROPE | APRIL 2023
In this report, we explore the efficacy of the taxonomy of AI actors in the EU Artificial Intelligence Act (AIA) (e.g., provider, user, and importer), proposing an alternative to the taxonomy currently included in the proposal. This research is part of the Open Loop Program on the EU AIA.
EUROPE | APRIL 2023
This report, which is part of the Open Loop Program on the EU Artificial Intelligence Act (AIA), explores the AI regulatory sandbox provision described in Article 53 of the AIA.
More specifically, we explored the goals of the AI regulatory sandbox and the conditions necessary to achieve them.
EUROPE | NOVEMBER 2022
The report presents the findings and recommendations of the first part of the Open Loop’s policy prototyping program on the European Artificial Intelligence Act (AIA), which was rolled out in Europe from June 2022 to July 2022, in partnership with Estonia’s Ministries of Economic Affairs and Communications and Justice, and Malta’s Digital Innovation Authority (MDIA).
ASIA PACIFIC | JULY 2022
This report encapsulates the insights and strategic recommendations derived from the Open Loop’s policy prototyping program on AI Transparency and Explainability. Through a rigorous methodological approach, the program captured the nuanced experiences of participants as they implemented the policy prototype within their operations.
EUROPE | JANUARY 2021
This report presents the findings and recommendations of the Open Loop’s policy prototyping program on AI Impact Assessment, which was rolled out in Europe from September to November 2020.
Trust, Transparency & Control Labs
Bringing together policymakers, privacy experts, and product creators, and using design thinking to improve trust, transparency, and control in digital products.
GET INVOLVED
Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?
We want to hear from you!