European Union

Artificial Intelligence Act

The EU AI Act program is the largest policy prototyping initiative to date, engaging over 60 participants from more than 50 companies developing AI and ML products. The program was structured into three pillars, each assessing and scrutinizing key articles of the EU proposal. The first pillar focused on operationalizing the "Requirements for AI Systems," to understand how feasible it would be for companies to implement them. The second pillar examined regulatory sandboxes, assessing their attractiveness and effectiveness. The third pillar presented an alternative taxonomy of AI actors, examining the efficacy and suitability of the provider-user paradigm. All these efforts served the program's overarching goals: providing a basis for consensus-based standards and guidelines, making the EU AI Act clearer, more operational, and technically feasible, and contributing evidence-based input to inform the negotiations and final drafting of the EU AI Act.

The Open Loop EU AI Act program focused on testing the effectiveness of the first version of the EU AI Act (2021). We engaged experts and companies to comprehensively test key provisions of the EU AI Act, with the goal of enhancing clarity, technical feasibility, and overall effectiveness.

The Open Loop EU AI Act program, supported by Meta, led to valuable policy recommendations for improving the EU AI Act. The program produced 5 reports covering diverse aspects of AI governance and offering comprehensive insights.

REPORT 1


REPORT 2

The second report introduces an alternative taxonomy for AI actors, providing a more accurate representation of the AI ecosystem.

REPORT 3

The third report delves into AI regulatory sandboxes, emphasizing clear goals, technical expertise, and transparency to ensure their success in fostering innovation and compliance.

REPORT 4

The fourth report analyzes risk and transparency, suggesting collaboration between stakeholders, ongoing monitoring, and human oversight to operationalize AI Act requirements. It also recommends a modular approach to provide instructions for AI system outputs and options for documenting AI system development.

REPORT 5

The fifth report highlights the importance of transparency obligations, proposing user-centric notification systems and providing users with more information about AI decision-making processes.

Main Findings & Recommendations

Through our Open Loop program on the EU AI Act, we have seen first-hand how industry and policymakers can cooperate to advance recommendations and proposals that aim to ensure regulation is clear and feasible, and how such cooperation can support developers in the responsible design, development, and deployment of trustworthy AI systems. Participants delved into critical areas such as risk management, data quality, technical documentation, transparency obligations, and regulatory sandboxes, offering valuable insights and recommendations for refining the AI Act.
 
The majority of participants found the selected provisions clear and feasible, aligning with the legislator's goal of building and deploying trustworthy AI. However, participants also identified areas for improvement, highlighting the need for ongoing refinement to avoid hindrances and promote the uptake of AI in Europe. The program underscored the importance of precise AI categorization, the need to establish clear responsibilities along the AI value chain, and the need for detailed guidance on effective AI risk assessment, paving the way for more accountable AI governance; it also offered insights for the design of AI regulatory sandboxes.

The resulting policy recommendations underscored the necessity for a nuanced understanding of AI actors, effective risk assessment guidance, practical data quality requirements, clear technical documentation guidance, and targeted transparency measures. 

Many of the key recommendations from the Open Loop program around clarifying definitions, providing implementation guidance, tailoring obligations by risk level, and establishing sandboxes were incorporated into the consolidated AI Act of 2024. The Act reflects several priorities outlined in the Open Loop reports.

Partners & Observers

For this Open Loop program, we partnered with the Malta Digital Innovation Authority and the Government of Estonia, with support from Considerati and Hyve Innovate.

GET INVOLVED

Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?

We want to hear from you!
