AI Transparency & Explainability (Singapore)


Overview

We worked with 12 companies across the APAC region to co-develop and test a policy prototype on AI transparency & explainability (T&E) based on Singapore’s Model AI Governance Framework (MF) as well as its Implementation and Self-Assessment Guide for Organizations (ISAGO).

Our methodology captured participants' experiences of receiving, handling, and following the policy prototype, thereby testing its clarity, effectiveness, and actionability. In particular, we asked participants to use the policy prototype to build and deploy AI explainability solutions in practice, in the context of their specific products and services.

As a result, we learned about the tensions and challenges participants encountered in this technical endeavor, capturing four important trade-offs: T&E vs. security; T&E vs. effectiveness/accuracy; T&E vs. disclosure of potential IP issues; and T&E vs. meaningfulness and actual understanding. When tasked with designing an interface for their AI explainability solution, participants also shared important technical, policy, and usability considerations, which we document in this report.

Recommendations

  • Get practical: develop best practices for assessing the added value of XAI for companies and estimating its implementation cost
  • Connect the dots: create new, or leverage existing, toolkits, certifications, and education and training modules to ensure the practical implementation of XAI policy goals
  • Test and experiment: demonstrate the value and realize the potential of policy experimentation
  • Get personal: make XAI policy guidance more personalized and context-relevant
  • Get creative together: explore new interactive ways to co-create and disseminate policy, and increase public-private collaboration

Publication

This report presents the findings and recommendations of Open Loop's policy prototyping program on AI Transparency and Explainability, which was rolled out in the Asia-Pacific region from April 2020 to March 2021 in partnership with Singapore's Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC).

We designed and deployed the program to achieve the following goals:

  • Test Singapore’s AI governance framework and accompanying guide (MF and ISAGO) in the field of AI T&E, with a focus on AI explainability, for policy clarity, effectiveness, and actionability.
  • Make recommendations to improve specific XAI elements of Singapore’s AI governance framework and accompanying guide, and contribute to their wider adoption.
  • Provide clarity and guidance on how companies can develop explanations for how their specific products and services leverage AI/ML to produce decisions, recommendations or predictions (XAI solutions).
  • Showcase best XAI practices and offer evidence-based recommendations for AI T&E in the APAC region.

Partners

Meta AI

AI Singapore

AIcadium

IMDA

TTC Labs

Craig Walker

Participants

Bukalapak

Deloitte Singapore

Evercomm

Halosis

Meta AI

Ngee Ann Polytechnic

Nodeflux

Qiscus

Qsearch

Trabble

TravelFlan

Traveloka