Privacy-Enhancing Technologies
The Open Loop Uruguay program was launched in tandem with a twin policy prototyping program in Brazil, with the aim of guiding and enabling companies in Uruguay to select and leverage privacy-enhancing technologies (PETs) that help de-identify data and mitigate privacy-related risks. In this initiative, ten organizations in Uruguay tested a prototype PETs Playbook designed to help organizations connect data protection expectations with the selection of suitable PETs.
Deployment Period | September 2022 - April 2023
Read the report now!
This report presents the findings and recommendations of the Open Loop Uruguay program. Through desk research, interviews, surveys, and workshops, the policy prototyping program investigated how organizations in Uruguay can select and leverage PETs to de-identify data and mitigate privacy-related risks.
PROGRAM DETAILS
Main Findings & Recommendations
The program produced several notable recommendations to guide and enable companies in Uruguay to select and leverage privacy-enhancing technologies, including:
A flexible, risk-based approach to anonymization
Measuring the level of risk should be a fact-specific assessment, focused on whether parties who might realistically gain access to the data could re-identify it (an illustrative sketch of such an assessment follows these recommendations).
Processing data
Policymakers should clarify that entities can process data for the purpose of reducing the risk of identifiability.
Advancing multi-stakeholder dialogues
Not only could these conversations help build entities’ capacity to deploy PETs, but they could also advance a shared understanding of PETs.
Direct investment in PETs research and development
Policymakers could also fund R&D into open-source PETs implementations, which could be more readily used off-the-shelf by small and medium entities.
Regulatory sandboxes
Policymakers are encouraged to explore the above topics more thoroughly through regulatory sandboxes.
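To make the risk-based recommendation above more concrete, the Python sketch below shows one simple, hypothetical way to assess re-identification risk: counting how many records share the same combination of quasi-identifiers that an outside party could realistically know. The sketch is not part of the program materials; the function name, the sample fields (age and postcode), and the k < 5 threshold are all illustrative assumptions.

```python
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """Estimate a simple singling-out risk for a tabular dataset.

    Groups records by their quasi-identifier values and reports how many
    records sit in small groups, i.e. records that a party with access to
    those attributes could plausibly re-identify.
    """
    def key(record):
        return tuple(record[attr] for attr in quasi_identifiers)

    # Count how many records share each combination of quasi-identifier values.
    groups = Counter(key(r) for r in records)

    # A record in a group of size k is "hidden among" k records; small k means
    # higher re-identification risk. The k < 5 threshold is purely illustrative.
    at_risk = sum(1 for r in records if groups[key(r)] < 5)
    return {"min_k": min(groups.values()), "share_at_risk": at_risk / len(records)}

# Hypothetical example: age and postcode are attributes an outside party might
# realistically know about the individuals in the dataset.
data = [
    {"age": 34, "postcode": "11200", "diagnosis": "A"},
    {"age": 34, "postcode": "11200", "diagnosis": "B"},
    {"age": 71, "postcode": "11800", "diagnosis": "C"},
]
print(reidentification_risk(data, ["age", "postcode"]))
# -> {'min_k': 1, 'share_at_risk': 1.0}
```

In practice, which attributes count as quasi-identifiers and what group size is acceptable would follow from the fact-specific assessment the recommendation describes, not from a fixed rule like the one sketched here.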
Partners & Observers
This program was led by Meta’s Open Loop team and the Eon Resilience Lab of C Minds, in collaboration with the Agencia de Gobierno Electrónico y Sociedad de la Información y del Conocimiento (AGESIC), the Unidad Reguladora y de Control de Datos Personales (URCDP) of Uruguay, and the Inter-American Development Bank.
Explore other programs
Competition in AI Foundation Models
Meta’s Open Loop program is excited to have launched its first policy prototyping program in the United Kingdom, which is focused on testing the Competition and Markets Authority (CMA) AI Principles to ensure that they are clear, implementable and effective at guiding the ongoing development and use of AI Foundation Models, while protecting competition and consumers.
Generative AI Risk Management
Meta’s Open Loop program is excited to be launching its first policy prototyping program in the United States, which is focused on the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0. The program will give consortium participants the opportunity to explore how the NIST AI RMF could help manage risks while developing and deploying Generative AI systems. At the same time, the program will seek to provide valuable insights and feedback to NIST as they work on future iterations of the RMF.
Artificial Intelligence Act
The EU AI Act program is the largest policy prototyping initiative to date, engaging over 60 participants from more than 50 companies developing AI and ML products. The program was structured into three pillars, each focusing on and scrutinizing key articles of the EU proposal.
Human-centric AI
The Open Loop India program was a collaborative effort among Meta, ArtEZ University of the Arts, and The Dialogue to develop a stakeholder engagement framework that operationalizes the principle of human-centered AI.
AI Impact Assessment
This program aimed to develop and test a risk assessment framework, called ADIA (Automated Decision Impact Assessment), for AI applications deployed in Europe.
GET INVOLVED
Do you have innovative ideas on how to govern emerging technologies?
Do you want to co-develop and test new policy ideas?
We want to hear from you!