AI Transparency & Explainability


Open Loop launched a policy prototyping program in collaboration with our regional partner C Minds’ Eon Resilience Lab and the Inter-American Development Bank (IDB), with the support of Mexico’s National Institute for Transparency, Access to Information and Personal Data Protection (INAI). The program works with a group of Mexican companies that use AI as part of their products or services to test the prototype before issuing public policy recommendations to the INAI, which will serve as input for a governance framework for transparency and explainability in AI systems, among other documents.

We are partnering with 10 companies for the testing and will again adopt a multi-month prototyping approach; we are currently finalizing the best methodological fit.

More information will be coming soon on our Open Loop hub; in the meantime, take a look at the further program details we published together with C Minds’ Eon Resilience Lab here.


In the case of Mexico, the “Public Policy Prototype on the Transparency and Explainability of Artificial Intelligence Systems” (hereinafter, AI systems will be referred to as AI/ADM systems, so as to also cover Automated Decision-Making (ADM) systems and maintain technological neutrality in light of possible future developments) was carried out by Meta and C Minds’ Eon Resilience Lab, in collaboration with the Inter-American Development Bank (IDB), through its fAIr LAC initiative, and with the support of Mexico’s National Institute of Transparency, Access to Information and Personal Data Protection (INAI), as well as with industry and subject-matter experts.

The purpose of this program was to design a governance framework and a practical manual (playbook) that outlines the principles of transparency and explainability (T&E). These documents (policy prototype) were tested by Mexican companies that utilize AI/ADM systems to provide goods or services. The overall policy aim was to strengthen responsible AI in Mexico, focusing on T&E.

This exercise aimed to ensure that people know when they are interacting with an AI/ADM system and understand its limitations and capabilities, as well as how it achieves specific results.
