The Model AI Governance Framework for Agentic AI provides guidance on how to deploy agents responsibly, recommending technical and non-technical measures.

At a glance
Who: Singapore IMDA.
What: Singapore IMDA has launched the Model AI Governance Framework for Agentic AI to provide guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable.
Why: To support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner. This aligns with Singapore’s practical and balanced approach to AI governance, where guardrails are put in place, while providing space for innovation.
When: The framework was announced at the World Economic Forum Annual Meeting in Davos last week. IMDA intends this as a living document, and welcomes all feedback from interested parties to refine the framework.
Singapore has launched a model artificial intelligence (AI) governance framework to help enterprises deploy agentic AI responsibly. It announced the framework at the World Economic Forum Annual Meeting in Davos last week.
Developed by the Infocomm Media Development Authority (IMDA), the framework for reliable and safe agentic AI deployment builds on the governance foundations of the Model AI Governance Framework (MGF), which was introduced in 2020.
The Model AI Governance Framework for Agentic AI provides guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable.
According to IMDA, initiatives such as the MGF for Agentic AI support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner. This is in line with Singapore’s practical and balanced approach to AI governance, where guardrails are put in place, while providing space for innovation.
Unlike traditional and generative AI, AI agents are designed to reason and take actions to complete tasks on behalf of users. This allows organisations to automate repetitive tasks, such as those related to customer service and enterprise productivity, and to drive sectoral transformation by freeing up employees’ time for higher-value activities.
However, as AI agents may have access to sensitive data and the ability to make changes to their environment, for example by updating a customer database or making a payment, their use introduces potential new risks, such as unauthorised or erroneous actions.
IMDA claims the MGF for Agentic AI offers a structured overview of the risks of agentic AI and emerging best practices in managing these risks. It is targeted at organisations looking to deploy agentic AI, whether through developing AI agents in-house or using third-party agentic solutions.
In developing the framework, IMDA incorporated feedback from both government agencies and private sector organisations.
“As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI,” said April Chin, co-chief executive officer, Resaro, an AI assurance company specialising in independent, third-party testing of mission-critical AI systems.
“The framework establishes critical foundations for AI agent assurance. For example, it helps organisations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails.”
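To make the idea of an agentic guardrail concrete, the sketch below shows one way a deployer might gate an agent’s tool calls: an allowlist bounds what the agent may do, and high-impact actions require explicit human sign-off, in line with the framework’s emphasis on human accountability. This is a minimal illustration written in Python, not code from the MGF itself; every name and function in it is hypothetical.

# Illustrative sketch of an "agentic guardrail" (hypothetical, not from the MGF):
# the agent may only invoke allowlisted tools, and high-impact actions
# require explicit human approval before they run.

HIGH_IMPACT_ACTIONS = {"make_payment", "update_customer_record"}
ALLOWED_ACTIONS = HIGH_IMPACT_ACTIONS | {"search_knowledge_base"}

def human_approves(action: str, args: dict) -> bool:
    """Stand-in for a human-in-the-loop review step (e.g. an approval UI)."""
    answer = input(f"Approve {action} with {args}? [y/N] ")
    return answer.strip().lower() == "y"

def guarded_tool_call(action: str, args: dict, tools: dict):
    # Boundary check: the agent may only use pre-approved tools.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"'{action}' is outside the agent's boundary")
    # Human accountability: high-impact actions need explicit sign-off.
    if action in HIGH_IMPACT_ACTIONS and not human_approves(action, args):
        return {"status": "rejected", "reason": "declined by human approver"}
    # Audit trail: record every permitted action for post-hoc review.
    print(f"AUDIT: {action} called with {args}")
    return tools[action](**args)

In practice, the same pattern can be enforced at the platform layer rather than in application code, so that agent boundaries, approval steps and audit logs apply uniformly across deployments.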
The framework provides organisations with guidance on the technical and non-technical measures they need to put in place to deploy agents responsibly, across four dimensions.
IMDA intends this to be a living document and welcomes feedback from interested parties to refine the framework, as well as submissions of case studies that demonstrate how agentic AI can be responsibly deployed. Building on the “Starter Kit Testing of LLM-Based Applications for Safety and Reliability”, IMDA is also developing guidelines for testing agentic AI applications.