Artificial Intelligence Act
Overview
On June 10, 2022, we launched the Open Loop program focused on the EU AI Act proposed by the European Commission on April 21, 2021. The program involved over 50 participating companies that tested key requirements of the proposal in order to make them clearer, more technically feasible, and more effective.
The policy prototyping program is structured into three distinct yet intertwined parts, each tackling specific themes and sections of the EU proposal:
- An inception phase, with 62 participants from over 50 AI startups and companies operating in the EU sharing their feedback on key articles of the EU proposal, commenting on their clarity, feasibility and cost-effectiveness. Inputs were gathered via the Open Loop Forum, an online platform for discussion around a variety of topics, organized into seven activities.
- A deep dive phase, where selected companies tested the implementation of some of the requirements listed in articles 9, 13, 14 and Annex IV of the AI Act, with a focus on the transparency measures to ensure the interpretability of the output and on the requirement for providers of high-risk AI systems to have a risk management system in place. As part of this phase, the program also tested the provision on transparency obligations for AI systems interacting with natural persons (article 52), the section on regulatory sandboxes (article 53) and the taxonomy of AI actors (article 3) presented in the AI Act.
- A co-creation phase, where participating companies, observers and partners from industry, governmental entities, regulatory authorities, academia, and other non-governmental organizations convene in workshops and policy jams to discuss the results of the first phases and contribute to the design of alternative taxonomies.
The outcomes of this policy prototype will result in a series of reports with actionable and evidence-based policy recommendations.
The program contributes to:
- Improving and refining key concepts, provisions and processes outlined in the AI Act, in particular those related to its technical and procedural requirements.
- Making the AI Act clearer, more operational and more technically feasible.
- Creating consensus-based standards and guidelines, through the development of common technical specifications and codes of conduct for compliance with the AI Act's technical requirements.
- Informing the negotiations and final drafting of the AI Act in a timely manner through evidence-based inputs.
Through a series of surveys, moderated discussions and interactive workshops, the program deploys qualitative and quantitative methods and involves industry partners, EU Institutions, governmental entities, regulatory authorities, academics, and other non-governmental organizations.
Publication
The report presents the findings and recommendations of the first part of the Open Loop’s policy prototyping program on the European Artificial Intelligence Act (AIA), which was rolled out in Europe from June 2022 to July 2022, in partnership with Estonia’s Ministry of Economic Affairs and Communications, Estonia’s Ministry of Justice, and the Malta Digital Innovation Authority (MDIA).
We enlisted 53 AI companies to participate in the Open Loop Forum (OLF), a dedicated online platform where they met to discuss topics and complete several research-related tasks.
The overall picture is that most of the AIA provisions addressed in this program are clear, feasible and may contribute to the overall goal of creating trustworthy AI. However, there are several areas in the AI Act where there is room for improvement, and some provisions that might even undermine another goal of the legislator: the uptake of AI in Europe.
Publication
In this report, we explore the efficacy of the taxonomy of AI actors in the EU Artificial Intelligence Act (AIA) (e.g., provider, user, and importer) and propose an alternative to the taxonomy currently included in the proposal. This research is part of the Open Loop program on the EU AIA.
The question we pose is whether the taxonomy of AI actors in the AIA is effective and, if not, what an alternative taxonomy would look like. Our hypothesis is that the current taxonomy of AI actors in the AIA does not accurately reflect the AI market, and this may lead to issues in assigning responsibilities for market actors and apportioning liability.
To test our hypothesis and address our research questions, we surveyed AI companies in our Open Loop Forum (OLF), conducted expert interviews, and performed desk research.
Based on the information gathered, we conclude that the existing taxonomy does not accurately reflect the actors in the AI ecosystem. In particular, roles such as the subject, third-party service providers, and data providers seem to be missing from the AIA’s text.
Publication
This report, which is part of the Open Loop Program on the EU Artificial Intelligence Act (AIA), explores the AI regulatory sandbox provision described in article 53 of the AIA.
More specifically, we explored the goals of the AI regulatory sandbox and the conditions necessary to achieve them. In particular, we sought answers to the following research questions (RQs):
- RQ1: What are the objectives of the EU AI regulatory sandbox?
- RQ2: What conditions are necessary to achieve the objectives of the EU AI regulatory sandbox?
- RQ3: Does article 53 of the AIA enable the necessary conditions for a successful EU AI regulatory sandbox?
- RQ4: Are there alternative governance mechanisms to achieve the objectives of the EU AI regulatory sandbox?
To answer the RQs, we collected data from three different sources: desk research, interviews with experts, and a Sandbox Policy Design Jam. This mixed-method approach allowed us to triangulate the data and address the four RQs from various perspectives.
Publication
As part of the Open Loop program on the Artificial Intelligence Act, we tested one of the requirements of Article 52(a) (on transparency obligations for AI systems interacting with individuals) of the proposed regulation to assess when and how individuals should be informed when they are interacting with an AI system.
We conducted an online survey with a sample of 469 participants from five European countries (Spain, France, Germany, the United Kingdom and Sweden). Participants were shown videos of two different AI-powered systems, a chatbot and a news app, with different styles of AI notification:
- no notification
- content-integrated notification
- notification banner
The survey showed that 30% of respondents did not notice the banner notification, and 49% failed to notice and comprehend the content-integrated notification. People with prior experience of AI systems were more likely to perceive and understand notifications. Participants’ understanding of a notification did not significantly affect their sense of control and trust in the tested AI applications.
Publication
This report, which is part of the Open Loop Program on the EU Artificial Intelligence Act (AIA), presents the findings of a policy prototyping exercise on risk management and transparency in the AIA.
The objective of this Deep Dive was to assess the clarity and feasibility of selected requirements from the perspective of participating AI companies.
Valuable insights gathered from participating companies have highlighted areas that require improvement, which are discussed in two parts:
- (i) transparency and human oversight, and
- (ii) risk management requirements in the AIA.
Partners
For this Open Loop program, we partnered with the Malta Digital Innovation Authority and the Government of Estonia, with support from Considerati and Hyve Innovate.
Participating companies
Observers
Members of European Parliament
- Eva Maydell, ITRE Rapporteur of the AI Act (BG, EPP)
- Ivan Štefanec, President of SME Europe (SK, EPP)
International Institutions / Governmental Authorities
- Karine Perset, Head of AI Unit and OECD.AI, Organisation for Economic Co-operation and Development (OECD)
- Andras Hlacs, AI Policy Analyst, OECD.AI, Organisation for Economic Co-operation and Development (OECD)
- Richard Nevinson, Head of Digital Economy, Information Commissioner’s Office (ICO)
- Henrik Trasberg, Legal Advisor on AI and New Technologies, Ministry of Justice of Estonia
- Ott Velsberg, Government Chief Data Officer, Estonian Ministry of Economic Affairs and Communications
- Kenneth Brincat, CEO, Malta Digital Innovation Authority (MT)
- Alessandro Fusacchia, Member of the Italian Parliament, Coordinator of Parliamentary Intergroup on AI
- Luca Carabetta, Member of the Italian Parliament
- Zümrüt Müftüoğlu, Expert, Digital Transformation Office of the Presidency of the Republic of Türkiye
- Işıl Selen Denemeç, Head of Legal Department, Digital Transformation Office of the Presidency of the Republic of Türkiye
Academia / Think Tanks
- Andrea Bertolini, EURA Centre, Scuola Superiore Sant’Anna Pisa
- Eduard Fosch, Leiden University
- Evert Stamhuis, DIGOV Centre, Rotterdam University
- Fabiana di Porto, University of Salento and LUISS University
- Giovanni Sartor, University of Bologna and European University Institute
- Johann Laux, Oxford Internet Institute
- Klaus Heine, Professor, DIGOV Centre, Rotterdam University
- Virginia Dignum, Umeå University
- Nicolaos Voros, University of the Peloponnese
- Joshua Ellul, University of Malta and former Chairman, Malta Digital Innovation Authority
- David Osimo, Lisbon Council
- Risto Uuk, Future of Life Institute