The proposed standard was recently discussed at an ISO plenary session with more than 35 national bodies and over 250 AI experts from around the world.
At a glance
Who: Singapore IMDA; Enterprise Singapore (EnterpriseSG).
What: Singapore has put forward a new international standard for the testing methodology for generative AI systems, aimed at strengthening the foundation for trustworthy AI testing.
Why: With the rapid development and pervasiveness of AI across ecosystems, it is crucial that there are globally recognised AI standards to ensure AI is used in a reliable and safe way.
Where: The standard was discussed at the 17th ISO/IEC JTC 1/SC 42 plenary meeting, held in Singapore on 20-24 April.
Singapore has put forward a new international standard for the testing methodology for generative AI (genAI) systems, aimed at strengthening the foundation for trustworthy AI testing.
This is the first international standard of its kind for the testing of genAI systems and was discussed at the 17th ISO/IEC JTC 1/SC 42 plenary meeting, recently held in Singapore.
Co-organised by the Infocomm Media Development Authority (IMDA) and Enterprise Singapore (EnterpriseSG), and hosted in the ASEAN region for the first time, the twice-yearly plenary gathered more than 35 national bodies and over 250 AI experts from around the world, including the US, UK, China, Japan, Germany, France, and the Republic of Korea.
With the rapid development and pervasiveness of AI across ecosystems, it is crucial that there are globally recognised AI standards to ensure AI is used in a reliable and safe way. Specifically, Singapore has put forth a new ISO/IEC 42119-8 standard with a focus on benchmarking and red teaming (where ethical hackers simulate cyberattacks) methodologies for genAI systems with standardised testing approaches.
Overall, it establishes an important framework for AI testing that enhances the reproducibility and comparability of results. This, in turn, aims to drive assurance and overall trust in AI systems and enable safer, more reliable adoption by AI deployers and users.
ISO/IEC 42119-8 builds on IMDA’s past work in developing domestic testing frameworks, including its AI Verify Toolkit and the Starter Kit for Testing of LLM-Based Applications for Safety and Reliability, and in the nascent field of assurance through the Global AI Assurance Sandbox.
It is part of Singapore’s broader commitment to international AI standards, as seen in the national adoption and accreditation programme of ISO/IEC 42001 led by EnterpriseSG, and the contribution of real-world use cases to support ISO/IEC TR 24030’s documentation of AI applications in practice. Together, these efforts aim to lay the groundwork for trustworthy AI implementation.
IMDA and EnterpriseSG also hosted a series of capacity-building initiatives on the sidelines of the plenary meeting.
Singapore is committed to developing a trusted AI ecosystem through its AI Safety Institute (AISI), leadership in the ASEAN Working Group on AI Governance (WG-AI), and its role as lead ASEAN member for the AI Priority Area under the ASEAN Digital Trade Standards and Conformance Working Group (DTSCWG). This close cooperation advances its shared goal of ensuring AI is built and deployed in a trustworthy manner for individuals and enterprises alike.