Responsible AI Principles through Stakeholder Engagement
The Open Loop India program was a collaborative effort between Meta, ArtEZ University of the Arts, and The Dialogue to develop a stakeholder engagement framework that operationalizes the principle of human-centered AI. Twelve Indian companies that use AI tested the prototype framework and provided feedback. The overarching goal was to provide evidence on how stakeholder engagement across the AI lifecycle can inform company implementation of human-centered AI.
Deployment Period | January - August 2023
This report presents the findings and recommendations of the Open Loop India program. Through desk research, interviews, surveys, and workshops, the policy prototyping program investigated the role of stakeholder engagement across the AI lifecycle in informing company implementation of human-centered AI.
PROGRAM DETAILS
Main Findings & Recommendations
The program yielded several notable recommendations for policymakers, companies, and investors to support stakeholder engagement across the AI lifecycle and inform company implementation of human-centered AI. Recommendations to policymakers include:
Developing guidance on stakeholder engagement for AI actors, focusing on the entire AI lifecycle. Guidance should be voluntary, accounting for the nuances across sectors, AI use cases, and risk profiles.
Promoting interoperability and synergies by designing guidance for AI stakeholder engagement that integrates with other key AI risk management frameworks and standards.
Catalyzing capacity building and knowledge sharing by establishing dedicated innovation funds and non-monetary benefits for companies that proactively demonstrate a commitment to stakeholder engagement, and by building a vibrant ecosystem for continuous learning.
Ensuring accountability and enabling the ecosystem by leading by example on stakeholder engagement in public AI initiatives.
Partners & Observers
The Open Loop India program was a collaborative effort between Meta, ArtEZ University of the Arts, and The Dialogue.
Explore other programs
Competition in AI Foundation Models
Meta’s Open Loop program is excited to have launched its first policy prototyping program in the United Kingdom, focused on testing the Competition and Markets Authority (CMA) AI Principles to ensure that they are clear, implementable, and effective at guiding the ongoing development and use of AI Foundation Models while protecting competition and consumers.
Generative AI Risk Management
Meta’s Open Loop program is excited to be launching its first policy prototyping program in the United States, focused on the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0. The program will give consortium participants the opportunity to explore how the NIST AI RMF could help manage risks when developing and deploying generative AI systems. At the same time, the program will provide valuable insights and feedback to NIST as it works on future iterations of the RMF.
Artificial Intelligence Act
The EU AI Act program is the largest policy prototyping initiative to date, engaging over 60 participants from more than 50 companies developing AI and ML products. The program was structured into three pillars, each assessing and scrutinizing key articles of the EU proposal.
Privacy-Enhancing Technologies
The Open Loop Brazil program was launched in tandem with a twin policy prototyping program in Uruguay, with the aim of guiding and enabling companies in Brazil to leverage and apply privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
AI Impact Assessment
This program aimed to develop and test a risk assessment framework, called ADIA (Automated Decision Impact Assessment), for AI applications deployed in Europe.
Privacy-Enhancing Technologies
The Open Loop Uruguay program was launched in tandem with a twin policy prototyping program in Brazil, with the aim of guiding and enabling companies in Uruguay to leverage and select privacy-enhancing technologies (PETs) to help de-identify data and mitigate privacy-related risks.
GET INVOLVED
Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?
We want to hear from you!