Let’s Unlock

Get inspired to innovate policy by following the progress of other program participants

An opportunity to inform regulation

What is Open Loop up to right now?

Explore the Open Loop programs from across the globe

Europe

AI Impact Assessment

Overview

We partnered with 10 AI businesses based in, or with key operations across, Europe and the EU to co-create and test an AI risk assessment framework on different AI applications. We called that assessment the Automated Decision Impact Assessment (ADIA).

Participating companies were asked to select an in-house AI/ML application that affects people and to simulate applying the ADIA framework to that application. We adopted a four-week, design-sprint-inspired prototyping methodology, focusing on the participants’ journey through the risk assessment and exploring the implications for policy understanding, policy effectiveness, and policy costs.

The results of this initial policy prototyping program clearly demonstrate that performing such assessments in practice gives companies a valuable tool for identifying and mitigating risks from AI and automated decision-making (ADM) systems.

Recommendations

  • Focus on procedure instead of prescription as a way to determine high-risk AI applications
  • Provide specific and detailed guidance on how to implement an ADIA process, and release it alongside the law
  • Be as specific as possible in defining the risks within regulatory scope
  • Improve documentation of risk assessment and decision-making processes by justifying the selection of mitigation measures
  • Develop a sound taxonomy of the different AI actors involved in risk assessment
  • Specify, as much as possible, the set of values that may be impacted by AI/ADM, and provide guidance on how those values may be in tension with one another
  • Don’t reinvent the wheel; combine new risk assessment processes with established ones to improve the overall approach
  • Leverage a procedural risk assessment approach to determine the right set of regulatory requirements to apply to organisations deploying AI applications, instead of applying all of them by default

Publication

AI Impact Assessment: A Policy Prototyping Experiment

This report presents the findings and recommendations of Open Loop’s policy prototyping program on AI impact assessment, which ran in Europe from September to November 2020.

As the report outlines, the results of Open Loop’s first policy prototyping experiment were very promising. Based on feedback from the companies we collaborated with, our prototype version of a law requiring AI risk assessments, combined with a playbook for how to implement it, gave participants a valuable tool for identifying and mitigating risks from their AI applications that they might not otherwise have addressed.

The experiences of our partners highlighted how this sort of risk assessment can inform a more flexible, practicable, and innovative approach to assessing and managing AI risks than more prescriptive policy approaches.

Partners

  • Facebook
  • Considerati (partner for methodology & content)

Participants

  • Unbabel
  • Rogervoice
  • Riatlas
  • Reface
  • Keepler
  • NAIX Technology
  • Irida Labs
  • Feedzai
  • Evo Pricing
  • Allegro

Americas

AI Transparency & Explainability – Mexico

Overview

Open Loop is also launching a policy prototyping program in collaboration with our regional partner C Minds and the Inter-American Development Bank (IDB), with the support of Mexico’s National Institute for Transparency, Access to Information and Personal Data Protection (INAI). The program works with a group of Mexican companies that use AI as part of their product or service to test the prototype, before issuing public policy recommendations to INAI as input for a governance framework for transparency and explainability in AI systems, among other documents.

We expect to partner with 14 companies for the testing and will again adopt a multi-month prototyping approach; we are currently finalizing the best methodological fit.

More information is coming soon on the Open Loop hub; in the meantime, take a look at the program details we published together with our regional partner C Minds here.

Partners

  • Facebook
  • C Minds
  • IDB
  • IDB Lab

Participants

Asia Pacific

AI Transparency & Explainability – Singapore

Overview

We are currently collaborating with Singapore’s Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), alongside Facebook and local partners, to test specific concepts, processes, and guidance on AI explainability and transparency.

Twelve AI companies from the Asia-Pacific region are participating in this six-month prototyping program. Over the course of the program, the companies develop AI explainability solutions for their products and services in accordance with guidance from the policy prototype, and share insights about that process. The program is implemented through a series of dynamic scenarios built around and personalized for each participating company.

The program aims to test and improve Singapore’s AI governance frameworks (the Model AI Governance Framework and the Implementation and Self-Assessment Guide for Organisations (ISAGO)) in the field of AI/ML explainability and to contribute to their wider adoption, while also providing practical insights into how companies can develop and deploy AI systems with regard to explainability.

Partners

  • Facebook
  • Basis.AI
  • AI Singapore
  • Singapore’s Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC)

Participants

  • Bukalapak (Indonesia)
  • Deloitte (Singapore)
  • Evercomm (Singapore)
  • Facebook (US)
  • Halosis (Indonesia)
  • Ngee Ann Polytechnic (Singapore)
  • Nodeflux (Indonesia)
  • Qiscus (Singapore)
  • Qsearch (Taiwan)
  • Trabble (Singapore)
  • Travelflan (Hong Kong)
  • Traveloka (Indonesia)