Singapore launches AI Verify, world's first AI Governance Testing Framework and Toolkit
May 26, 2022
Artificial Intelligence (“AI”) is revolutionising the way we consume, the way work is done and the way things are built, and the productivity gains have been extraordinary. But it also poses significant public policy challenges: a lack of transparency in decision making, skewed results from potentially poor-quality algorithms, and the “black box” effect, where the path of reasoning is obscured or completely unknown. AI also has dystopian potential, skewing results against minorities for example; that has been a problem with facial recognition technology and with predictive analytics in insurance and criminal investigations. All of these matters concern the public. Yet there is a dearth of regulation, for the good reason that legislatures are not sure how to regulate properly without harming the positive potential of AI.
Singapore's Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC) have launched AI Verify, an AI governance testing framework and toolkit. It is ostensibly designed to allow companies to demonstrate responsible AI, and it is a voluntary scheme. It is certainly a step in the right direction.
The press release by the Infocomm Media Development Authority, “Singapore launches world’s first AI testing framework and toolkit to promote transparency; Invites companies to pilot and contribute to international standards development”, provides:
Singapore launches A.I. Verify – the world’s first AI Governance Testing Framework and Toolkit for companies who want to demonstrate responsible AI in an objective and verifiable manner. This was announced by Singapore’s Minister for Communications and Information Mrs Josephine Teo at the World Economic Forum Annual Meeting in Davos. A.I. Verify – currently a Minimum Viable Product (MVP) – aims to promote transparency between companies and their stakeholders through a combination of technical tests and process checks.
Globally, testing for the trustworthiness of AI systems is an emergent space. As more companies use AI in their products and services, fostering the public's trust in AI technologies remains key to unlocking the transformative opportunities of AI.
Singapore remains at the forefront of international discourse on AI ethics
The launch of A.I. Verify follows Singapore’s launch of the Model AI Governance Framework (second edition) in Davos in 2020, and the National AI Strategy in November 2019. Having provided practical detailed guidance to industry on implementing responsible AI, A.I. Verify is Singapore’s next step in helping companies be more transparent about their AI products and services, to build trust with their stakeholders. A.I. Verify is developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC).
Objective and verifiable testing process for industry
Developers and owners can verify the claimed performance of their AI systems against a set of principles through standardised tests. A.I. Verify packages a set of open-source testing solutions, including a set of process checks, into a Toolkit for convenient self-assessment. The Toolkit will generate reports for developers, management and business partners, covering major areas affecting AI performance. The approach is to provide transparency about what the AI model claims to do vis-à-vis the test results, and covers areas such as:
- Transparency on the use of AI to achieve a stated outcome;
- Understanding how the AI model reaches a decision;
- Whether the decisions predicted by the AI show unintended bias;
- Safety and resilience of the AI system; and
- Accountability and oversight of AI systems.
The Pilot Testing Framework and Toolkit:
- Allows AI system developers/owners to conduct self-testing – to maintain commercial requirements while providing a common basis to declare results.
- Does not define ethical standards. It validates AI system developer’s/owner’s claims about the approach, use, and verified performance of their AI systems.
- Does not, however, guarantee that any AI system tested under this Pilot Framework will be free from risks or biases, or that it is completely safe.
Minister for Communications and Information Mrs Josephine Teo said, “A.I. Verify is another step forward in Singapore’s AI development. In developing the world’s first product to demonstrate responsible AI in an objective and verifiable manner, we aim to help businesses become more transparent to their stakeholders in their use of AI. This will, in turn, promote greater public trust towards the use of AI. We invite industry partners from all around the world to join us in this pilot and contribute to building international standards in AI governance.”
Also commenting on the launch, Mr Chia Song Hwee, Deputy CEO, Temasek International and member of Singapore’s Advisory Council on the Ethical Use of AI and Data said, “I would like to congratulate IMDA for taking responsible AI to the next milestone with the launch of this testing framework and toolkit. Rapid digitisation has led to a proliferation of data and improved algorithms. As companies across sectors continue to innovate, this toolkit will enable them to turn concepts of responsible and trustworthy AI into practical applications that will benefit all stakeholders, from business owners to end users.”
Building interoperability of trustworthy AI with partners and industry
The MVP was developed under the guidance of the Advisory Council on the Ethical Use of AI and Data, and 10 companies from different sectors and of different scales have already tested it and/or provided feedback. These companies are AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (Part of Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI, and X0PA.AI.
In addition, Singapore is engaging other like-minded countries and partners to enhance the interoperability of AI governance frameworks and to develop international standards on AI, such as through Singapore’s participation in ISO/IEC JTC1/SC 42 on Artificial Intelligence. IMDA is also working together with the U.S. Department of Commerce to build interoperable AI governance frameworks.
Beyond the pilot stage of the MVP, Singapore aims to work with AI system owners/developers globally to collate and build industry benchmarks. This enables Singapore to continue contributing to the development of international standards on AI governance.
Organisations invited to participate in pilot
As AI governance and testing is nascent, Singapore welcomes organisations to participate in piloting the MVP. Companies participating in the pilot will have the unique opportunity to:
- Gain early access to the MVP and use it to conduct self-testing on their AI systems/models;
- Use MVP-generated reports to demonstrate transparency and build trust with their stakeholders; and
- Help shape an internationally applicable MVP to reflect industry needs and contribute to international standards development.
The key AI ethics principles are transparency, explainability, repeatability/reproducibility, safety, security, robustness, fairness, data governance, accountability, human agency & oversight, and inclusive growth, societal & environmental well-being.
The aim is to build trust with consumers by demonstrating:
- Transparency on Use of AI & AI systems. By disclosing to individuals that AI is used in the system, individuals will become aware and can make an informed choice of whether to use the AI-enabled system.
- Understanding how an AI model reaches a decision. This allows individuals to know the factors contributing to the AI model’s output, which can be a decision or a recommendation. Individuals will also know that the AI model’s output will be consistent and performs at the level of claimed accuracy given similar conditions.
- Ensuring safety and resilience of the AI system. Individuals know that the AI system will not cause harm, is reliable and will perform according to its intended purpose even when encountering unexpected inputs.
- Ensuring fairness i.e., no unintended discrimination. Individuals know that the data used to train the AI model is sufficiently representative, and that the AI system does not unintentionally discriminate.
- Ensuring proper management and oversight of the AI system. Individuals know that there is human accountability and control in the development and/or deployment of AI systems, and that the AI system is for the good of humans and society.
The following principles are assessed through technical and process checks:
- Explainability – Assessed through a combination of technical tests and process checks. Technical tests are conducted to identify the factors contributing to the AI model’s output. Process checks include verifying documentary evidence of the considerations given to the choice of model, such as rationale, risk assessments and trade-offs.
- Robustness – Assessed through a combination of technical tests and process checks. Technical tests attempt to assess whether a model performs as expected even when provided with unexpected inputs. Process checks include verifying documentary evidence of a review of factors that may affect the performance of the AI model, including adversarial attacks.
- Fairness (mitigation of unintended discrimination) – Assessed through a combination of technical tests and process checks. Technical tests check that an AI model is not biased on protected or sensitive attributes specified by the AI system owner, by checking the model output against the ground truth. Process checks include verifying documentary evidence of a strategy for selecting fairness metrics aligned with the desired outcomes of the AI system’s intended application, and that the definitions of sensitive attributes are consistent with legislation and corporate values. (A minimal sketch of these kinds of technical tests appears after this list.)
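To make the technical tests described above more concrete, here is a minimal, hypothetical sketch in Python using scikit-learn and synthetic data. The perturbation scheme, the sensitive attribute and the use of per-group accuracy as a fairness measure are illustrative assumptions; this is not AI Verify's actual test code.

```python
# A minimal, illustrative sketch of the three kinds of technical tests
# described above (explainability, robustness, fairness). All data, the
# perturbation scheme and the chosen metrics are assumptions for
# illustration; this is not AI Verify's actual test code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score

# Train a simple binary classifier on synthetic tabular data.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
sensitive = np.random.RandomState(0).randint(0, 2, size=1000)  # hypothetical protected attribute
model = RandomForestClassifier(random_state=0).fit(X, y)

# Explainability: identify factors contributing to the model's output.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print("Feature importances:", result.importances_mean.round(3))

# Robustness: does performance hold up under unexpected (perturbed) inputs?
noisy = X + np.random.RandomState(1).normal(0, 0.5, X.shape)
print("Clean accuracy:    ", accuracy_score(y, model.predict(X)))
print("Perturbed accuracy:", accuracy_score(y, model.predict(noisy)))

# Fairness: compare model output against ground truth per sensitive group.
for group in (0, 1):
    mask = sensitive == group
    print(f"Group {group} accuracy:", accuracy_score(y[mask], model.predict(X[mask])))
```

In a real assessment, the sensitive attributes, fairness metrics and acceptable performance gaps would be specified by the AI system owner, as the framework describes.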
The following principles are assessed through process checks:
- Transparency – Assessed through process checks of documentary evidence (e.g., company policy and communication collaterals) of providing appropriate information to individuals who may be impacted by the AI system. Provided that IP, safety and system integrity are not compromised, this information includes the use of AI in the system, its intended use, its limitations and its risk assessment.
- Repeatability/Reproducibility – Assessed through process checks of documentary evidence including evidence of AI model provenance, data provenance and use of versioning tools.
- Safety – Assessed through process checks of documentary evidence of materiality assessment and risk assessment, including how known risks of the AI system have been identified and mitigated.
- Accountability – Assessed through process checks of documentary evidence, including evidence of clear internal governance mechanisms for proper management oversight of the AI system’s development and deployment.
- Human agency and oversight – Assessed through process checks of documentary evidence that the AI system is designed in a way that will not reduce humans' ability to make decisions or to take control of the system. This includes defining the role of humans in the oversight and control of the AI system, such as human-in-the-loop, human-over-the-loop, or human-out-of-the-loop.
The framework has the following components (a hypothetical sketch of how they might be recorded in practice follows this list):
- Definitions of AI ethics principles. The Testing Framework provides definitions for each of the AI ethics principles.
- Testable criteria. For every principle, a set of testable criteria will be ascribed. Testable criteria are a combination of technical and non-technical (e.g., processes and organisational structure) factors contributing to the achievement of the desired outcomes of that governance principle.
- Testing process. Testing processes are actionable steps to be carried out in order to ascertain if each testable criterion has been satisfied. The testing processes could be quantitative such as statistical tests and technical tests. They can also be qualitative such as producing documented evidence during process checks.
- Metrics. These are well-defined quantitative or qualitative parameters that can be measured, or the presence of evidence can be demonstrated for each testable criterion.
- Thresholds (where applicable). As AI technologies are rapidly evolving, thresholds that define acceptable values or benchmarks for the selected metrics (whether defined by industry or by regulators) often do not exist. Hence, thresholds are not available in the current version of the Testing Framework. However, the aim is to collate and develop meaningful, context-specific metrics and thresholds as industry tests AI systems against the Testing Framework.
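As an illustration of how these components fit together, the following is a hypothetical Python sketch of a single testable-criterion record. The field names and example values are assumptions for illustration, not the framework's actual schema.

```python
# Hypothetical record tying together the framework's components for one
# principle. Field names and example values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestableCriterion:
    principle: str                      # which AI ethics principle it serves
    criterion: str                      # the testable criterion itself
    testing_process: str                # technical test and/or process check
    metric: str                         # measurable parameter or evidence
    threshold: Optional[float] = None   # not defined in the current framework

example = TestableCriterion(
    principle="Fairness",
    criterion="Model output shows no unintended bias on sensitive attributes",
    testing_process="Technical test: compare outputs to ground truth per group",
    metric="Per-group accuracy difference",
    threshold=None,  # thresholds are to be developed as industry testing matures
)
```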
The Toolkit is designed to:
- provide a user interface to guide users step by step through the testing process, including a guided fairness tree that points users to the fairness metrics relevant for their use case (a hypothetical sketch of such a tree appears after this list);
- support certain binary classification and regression models that use tabular data, such as decision trees and random forest algorithms;
- produce a basic summary report to help system developers and owners interpret the results of the tests;
- be deployed in the user’s environment; it is packaged into a Docker container, which allows for easy deployment.
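To illustrate what a guided fairness tree might do, here is a hypothetical Python sketch that maps simple answers about a use case to a candidate fairness metric. The questions, branching logic and metric names are assumptions for illustration and do not reproduce the Toolkit's actual fairness tree.

```python
# Hypothetical guided fairness tree: map answers about a binary
# classification use case to a candidate fairness metric. The branching
# logic and metric names are illustrative assumptions only.
def suggest_fairness_metric(punitive: bool, intervention_costly: bool) -> str:
    if punitive:
        # Harm comes from wrongly flagging people, so focus on false positives.
        return "false positive rate parity"
    if intervention_costly:
        # Limited assistive resources, so focus on who is correctly selected.
        return "equal opportunity (true positive rate parity)"
    # Otherwise compare selection rates across groups.
    return "demographic parity"

# Example: a fraud-flagging model that can wrongly block customers.
print(suggest_fairness_metric(punitive=True, intervention_costly=False))
```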