Artificial Intelligence Act
The Open Loop EU AI Act program is the largest policy prototyping initiative to date, engaging over 60 participants from more than 50 companies developing AI and ML products. The program was structured into three pillars, each assessing and scrutinizing key articles of the EU proposal. The first pillar focused on operationalizing the "Requirements for AI Systems," aiming to understand how feasible it was for companies to implement these requirements. The second pillar revolved around regulatory sandboxes, assessing their attractiveness and effectiveness. The third pillar examined the efficacy and suitability of the provider-user paradigm and presented an alternative taxonomy of AI actors. All these efforts served the overarching goals of the program: providing a basis for consensus-based standards and guidelines; making the EU AI Act clearer, more operational, and technically feasible; and contributing evidence-based input to inform the negotiations and final drafting of the EU AI Act.
The Open Loop EU AI Act program focused on testing the effectiveness of the first version of the EU AI Act (2021). We engaged experts and companies to comprehensively test key provisions of the EU AI Act, with the goal of enhancing clarity, technical feasibility, and overall effectiveness.
PROGRAM REPORTS
EUROPE | JUNE 2023
The report presents the findings and recommendations of the first part of the Open Loop’s policy prototyping program on the European Artificial Intelligence Act (AIA), which was rolled out in Europe from June to July 2022, in partnership with Estonia’s Ministry of Economic Affairs and Communications, Estonia’s Ministry of Justice, and the Malta Digital Innovation Authority (MDIA).
EUROPE | JUNE 2023
This report, which is part of the Open Loop Program on the EU Artificial Intelligence Act (AIA), presents the findings of a policy prototyping exercise on risk management and transparency in the AIA. The objective of this Deep Dive was to assess the clarity and feasibility of selected requirements from the perspective of participating AI companies.
EUROPE | APRIL 2023
In this report, we explore the efficacy of the taxonomy of AI actors in the EU Artificial Intelligence Act (AIA) (e.g., provider, user, and importer), proposing an alternative for the taxonomy of AI actors currently included in the proposal. This research is part of the Open Loop Program of the EU AIA.
EUROPE | APRIL 2023
This report, which is part of the Open Loop Program on the EU Artificial Intelligence Act (AIA), explores the AI regulatory sandbox provision described in Article 53 of the AIA. More specifically, we explored the goals of the AI regulatory sandbox and the conditions necessary to achieve them.
EUROPE | NOVEMBER 2022
The Open Loop EU AI Act program, supported by Meta, led to valuable policy recommendations for improving the EU AI Act. The program produced five reports covering diverse aspects of AI governance and offering comprehensive insights.
REPORT 1
The first report presents the findings and recommendations of the first part of the policy prototyping program, which tested how feasible it was for companies to operationalize the AIA’s "Requirements for AI Systems."
REPORT 2
The second report introduces an alternative taxonomy for AI actors, providing a more accurate representation of the AI ecosystem.
REPORT 3
The third report delves into AI regulatory sandboxes, emphasizing clear goals, technical expertise, and transparency to ensure their success in fostering innovation and compliance.
REPORT 4
The fourth report analyzes risk and transparency, suggesting collaboration between stakeholders, ongoing monitoring, and human oversight to operationalize AI Act requirements. It also recommends a modular approach to provide instructions for AI system outputs and options for documenting AI system development.
REPORT 5
The fifth report highlights the importance of transparency obligations, proposing user-centric notification systems and providing users with more information about AI decision-making processes.
Main Findings & Recommendations
Through our Open Loop program on the EU AI Act, we have seen first-hand how industry and policymakers can cooperate to advance recommendations and proposals that aim to make regulation clear and feasible, and how such cooperation can support developers in the responsible design, development, and deployment of trustworthy AI systems. Participants delved into critical areas such as risk management, data quality, technical documentation, transparency obligations, and regulatory sandboxes, providing valuable insights and suggesting recommendations for refining the AI Act.
The majority of participants found the selected provisions clear and feasible, aligning with the legislator's goal of building and deploying trustworthy AI. However, participants also identified areas for improvement, highlighting the need for ongoing refinement to avoid hindering the uptake of AI in Europe. The program underscored the importance of precise AI categorization, the need to establish clear responsibilities along the AI value chain, and the need for detailed guidance on effective AI risk assessment, paving the way for more accountable AI governance; it also offered insights for the design of AI regulatory sandboxes.
The resulting policy recommendations underscored the necessity for a nuanced understanding of AI actors, effective risk assessment guidance, practical data quality requirements, clear technical documentation guidance, and targeted transparency measures.
Many of the key recommendations from the Open Loop program around clarifying definitions, providing implementation guidance, tailoring obligations by risk level, and establishing sandboxes were incorporated into the consolidated AI Act of 2024. The Act reflects several priorities outlined in the Open Loop reports.
Partners & Observers
For this Open Loop program, we partnered with the Malta Digital Innovation Authority and the Government of Estonia, with support from Considerati and Hyve Innovate.
Explore other programs
Competition in AI Foundation Models
Meta’s Open Loop program launched its first policy prototyping program in the United Kingdom, focused on testing the Competition and Markets Authority’s (CMA) AI Principles to ensure that they are clear, implementable, and effective at guiding the ongoing development and use of AI Foundation Models, while protecting competition and consumers.
Generative AI Risk Management
Meta’s Open Loop launched its first policy prototyping research program in the United States in late 2023, focused on testing the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0. This program gave participating companies the opportunity to learn about NIST's AI RMF and subsequent “Generative AI Profile” (NIST AI 600-1), and to understand how this guidance can be applied to developing and deploying generative AI systems. At the same time, the program gathered evidence on current practices and provided valuable insights and feedback to NIST, which can inform future iterations of the RMF and Gen AI profile.
Privacy-Enhancing Technologies
The Open Loop Brazil program was launched in tandem with a twin policy prototyping program in Uruguay, with the aim of guiding and enabling companies in Brazil to leverage and apply privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
Human-centric AI
The Open Loop India program was a collaborative effort between Meta, ArtEZ University of the Arts and The Dialogue, to develop a stakeholder engagement framework that operationalizes the principle of human-centered AI.
AI Impact Assessment
This program aimed to develop and test a risk assessment framework, called ADIA (Automated Decision Impact Assessment), for AI applications deployed in Europe.
Privacy-Enhancing Technologies
The Open Loop Uruguay program was launched in tandem with a twin policy prototyping program in Brazil, with the aim of guiding and enabling companies in Uruguay to select and leverage privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
GET INVOLVED
Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?
We want to hear from you!