Artificial intelligence has moved from a competitive differentiator to a baseline operational tool for Singapore enterprises. Law firms use large language models (LLMs) to draft contracts. Banks deploy AI-driven fraud detection. Government-linked companies run AI chatbots that handle sensitive citizen data. The productivity gains are real. So are the security risks — and most organisations are governing neither effectively.
Singapore's Model AI Governance Framework (MAIGF), first published by IMDA and PDPC in 2019 and updated since, provides a solid policy foundation. But policy intent and operational security are different things. This guide bridges that gap: what the framework requires, what attackers actually do to AI systems, and how to build controls that satisfy both your CISO and your regulator.
Who this is for: CISOs, IT governance leads, and compliance officers at Singapore enterprises deploying AI tools — whether built in-house, vendor-provided, or SaaS-based LLM integrations. Relevant to MAS-regulated entities, CSA Cyber Trust Mark applicants, and any organisation subject to the PDPA.
Why AI Governance Is a Security Issue, Not Just a Policy Issue
The instinct to treat AI governance as a compliance checkbox — a policy document reviewed annually — is dangerous. AI systems introduce attack surfaces that traditional security frameworks were never designed to address:
- Prompt injection: Attackers embed instructions inside user inputs, document uploads, or external data feeds, causing AI systems to ignore their original instructions and take unintended actions.
- Training data poisoning: If your AI model is fine-tuned on proprietary data, attackers who can influence that data can introduce backdoors or biases.
- Model exfiltration: LLMs can be probed systematically to extract proprietary training data — including personally identifiable information (PII) — through carefully crafted queries.
- Agentic task exploitation: AI agents that can call APIs, write code, or send emails become force multipliers for attackers if their tool access is not properly scoped and monitored.
- Supply chain risk: Most enterprises use AI via APIs (OpenAI, Anthropic, Google Gemini) or SaaS wrappers. Governance must extend to these third parties.
These are not theoretical risks. In 2025 and 2026, researchers demonstrated prompt injection attacks against enterprise AI assistants used by major financial institutions. The AI Safety Institute — now operating across multiple countries — has catalogued dozens of real-world AI security incidents. Singapore's CSA has explicitly called out AI as an emerging threat vector in its Singapore Cyber Landscape reports.
What Singapore's Model AI Governance Framework Actually Requires
The MAIGF is structured around two core principles: decisions made by AI must be explainable and AI deployment must be human-centric. It maps these to four key areas:
| MAIGF Area | What It Covers | Security Implication |
|---|---|---|
| Internal Governance | AI ownership, risk appetite, oversight structures | AI system inventory, owner accountability, risk classification |
| Human Involvement | Human-in-the-loop for high-stakes decisions | Controls on autonomous AI actions; agentic AI containment |
| Operations Management | Data quality, model performance, minimising bias | Training data security, model drift monitoring, input validation |
| Stakeholder Interaction | Transparency to customers, third-party AI use | Third-party AI risk assessment, vendor due diligence, disclosure |
For MAS-regulated entities, the MAIGF intersects with MAS Technology Risk Management (TRM) Guidelines — particularly around algorithm risk management (Section 9.2), model risk governance, and third-party technology risk. Institutions must document their AI models, assess their risk, and maintain oversight processes. Non-compliance is not a theoretical concern: MAS has taken supervisory action against institutions with inadequate model risk governance.
For Cyber Trust Mark applicants, AI systems that process personal data or make security-relevant decisions must be included in your asset inventory and risk assessment scope. The CTM's control domains around access management, data protection, and incident response all apply to AI deployments.
A Practical AI Security Governance Framework for Singapore Enterprises
The following framework is designed to be actionable — controls you can implement, not aspirational statements. It is structured across five domains:
1. AI Asset Inventory and Risk Classification
You cannot govern what you have not catalogued. Many organisations discover they have dozens of unofficial AI deployments — teams using ChatGPT for customer emails, developers using GitHub Copilot, operations staff uploading sensitive data to AI summarisers. Shadow AI is the first governance failure to address.
- Inventory requirement: Maintain a register of all AI systems — including SaaS tools with AI features, API-connected LLMs, and internal models. Include vendor name, data processed, business function, and owner.
- Risk classification: Classify each AI system by the sensitivity of data it touches (PDPA-relevant, MAS-regulated, internal-only) and the severity of decisions it influences (autonomous actions vs. human-reviewed recommendations).
- High-risk threshold: Any AI system that can take autonomous actions (send communications, execute transactions, modify data) or processes sensitive personal data should be treated as high-risk and subject to enhanced controls.
2. Prompt Security and Input Validation
Prompt injection is to AI what SQL injection was to databases in the 2000s: a fundamental input validation failure with serious consequences. Every enterprise AI deployment that accepts user input — or processes external content like documents, emails, or web pages — is potentially vulnerable.
- Input sanitisation: Validate and sanitise inputs before passing them to AI models. This is especially critical for agentic AI systems that use the output of one AI call as input to another.
- System prompt protection: Never expose your system prompt to users. Treat it as sensitive configuration. Use prompt injection detection tools — purpose-built classifiers like PromptDome's Shield Engine can detect injection attempts with high accuracy before they reach your model.
- Contextual separation: Maintain clear separation between trusted instructions (system prompts) and untrusted inputs (user data, external documents). Architecturally, this means structured message formatting and avoiding string concatenation patterns that blur the boundary.
- Output monitoring: Monitor AI outputs for anomalies — unexpected data disclosure, off-topic content, signs that the model has been instructed to ignore its original purpose.
3. Data Governance for AI
AI systems are data-hungry. Effective AI governance requires applying your existing data governance policies — classification, retention, access controls, residency — to AI contexts where they were never originally designed to operate.
- Data minimisation for AI: Only provide AI systems with the data they need to perform their function. Don't feed your entire CRM into an AI assistant if it only needs to answer FAQs. PDPA's data minimisation principle applies directly.
- Data residency: If using cloud-based LLM APIs, understand where your data is processed and stored. Singapore's PDPA cross-border transfer rules apply if personal data leaves Singapore. Most major LLM providers (OpenAI, Anthropic, Google) offer data residency options for enterprise accounts — require them.
- Training data security: If fine-tuning models on your data, apply the same access controls to training datasets as to the original data sources. A training dataset containing customer PII is itself a sensitive asset.
- Right to erasure: PDPA grants individuals the right to request data deletion. Establish processes for identifying and removing personal data from AI training datasets and, where technically feasible, from model weights.
4. Access Control and Agentic AI Containment
Agentic AI — systems that can autonomously call APIs, browse the web, write and execute code, or send communications — dramatically expands the attack surface. An AI agent compromised by prompt injection and given broad tool access can cause significant damage before any human notices.
- Least-privilege for AI agents: Apply the same least-privilege principle to AI agents as to human users. An AI agent that drafts emails does not need access to your financial systems. Scope tool permissions to the minimum required.
- Human-in-the-loop for high-risk actions: Require human approval before AI agents execute irreversible or high-value actions — sending external communications, modifying production data, executing financial transactions. This is both a security control and a MAIGF requirement.
- Session isolation: Treat each AI agent session as potentially compromised. Avoid persistent agent sessions with accumulated permissions. Use ephemeral contexts where possible.
- Audit logging: Log all AI agent actions with sufficient detail to reconstruct what the agent did, what tools it called, and what data it accessed. This is essential for both incident response and regulatory compliance under MAS TRM.
5. Third-Party AI Risk Management
Most AI deployments involve at least one third party — the LLM provider, a SaaS vendor with embedded AI features, or a systems integrator who built the solution. Your third-party risk management process must extend to cover these relationships.
- AI-specific due diligence: Extend your vendor questionnaires to cover AI-specific topics: model training data sources, security testing methodology, incident disclosure commitments, and data handling practices.
- Contractual protections: Ensure contracts with AI vendors include: data processing agreements compliant with PDPA, explicit data residency commitments, security incident notification obligations, and rights to audit.
- Model provenance: For high-risk applications, understand which underlying model your vendor is using. Open-source models fine-tuned on unknown data carry different risks than commercial models with published safety evaluations.
- Concentration risk: Many Singapore enterprises are building critical workflows on a single LLM provider. Assess your dependency and consider fallback options.
MAS-Specific Considerations for AI Governance
Singapore's financial sector faces additional AI governance requirements through MAS's regulatory framework. MAS Notice 655 (Cyber Hygiene), the TRM Guidelines, and various MAS Notices on technology governance collectively create obligations that AI deployments must satisfy.
Key MAS-specific requirements for AI governance include:
- Algorithm risk management (TRM 9.2): FIs must assess the risk of algorithms used in business processes, including AI/ML models. This requires model documentation, validation, and ongoing monitoring — not just pre-deployment testing.
- System resilience: AI systems used in critical business functions must meet MAS availability requirements. This includes AI-powered fraud detection, credit decisioning, and customer service automation.
- Outsourcing rules: Using cloud-based LLM APIs for material business functions may trigger MAS outsourcing notification requirements. Review MAS Guidelines on Outsourcing (2016) and the cloud guidelines.
- Model explainability for regulated decisions: AI models making credit decisions, fraud flags, or AML alerts must be explainable to regulators and, in some cases, to customers. Black-box models without interpretability mechanisms create supervisory risk.
If your institution is subject to MAS oversight, we recommend a dedicated AI governance review as part of your annual MAS TRM gap assessment.
AI Governance and the Cyber Trust Mark
The Cyber Trust Mark (CTM) — Singapore's premier cybersecurity certification for enterprises — is technology-neutral in its control requirements but technology-inclusive in their application. AI systems must be included in your CTM scope.
Specific CTM control domains with direct AI governance implications:
- Asset Management: AI systems (models, training datasets, AI APIs) must be inventoried and classified. Shadow AI is a CTM compliance gap.
- Access Control: Access to AI systems, training data, and model management interfaces must be controlled and audited. AI agent tool access must be governed.
- Supplier Relationship Management: LLM API providers and AI SaaS vendors must be included in your supplier risk management programme.
- Incident Management: AI security incidents — prompt injection, model outputs exposing sensitive data, AI-assisted social engineering attacks — must be covered by your incident response plan.
- Security Testing: VAPT scope should include AI system security testing — prompt injection testing, access control validation, and data exposure assessment for AI interfaces.
Five Common AI Governance Mistakes Singapore Enterprises Make
- Treating AI governance as a policy exercise: Producing an AI governance policy without implementing technical controls is compliance theatre. The policy must be backed by access controls, monitoring, and testing.
- Excluding AI from VAPT scope: Most penetration testing engagements do not include AI system security testing. Prompt injection, data exfiltration through LLMs, and agentic AI abuse are not tested by traditional VAPT methodologies. Require AI-specific testing from your VAPT provider.
- Giving AI agents excessive permissions: The fastest path to an AI security incident is deploying an AI agent with broad API access and no human oversight on its actions. Least-privilege applies to AI.
- Ignoring data residency for cloud LLMs: Sending Singapore customer personal data to offshore LLM APIs without proper data processing agreements and PDPA cross-border transfer compliance is a regulatory gap many organisations have not addressed.
- No AI incident response playbook: When a prompt injection attack causes your AI system to exfiltrate customer data or send unauthorised communications, who responds? Most organisations have not extended their incident response plans to cover AI-specific scenarios.
Getting Started: A 90-Day AI Governance Roadmap
For organisations starting their AI governance journey, we recommend this phased approach:
- Days 1–30 — Discover: Conduct an AI asset discovery exercise. Survey all business units for AI tool usage — including approved and shadow deployments. Build your AI system inventory. Classify each system by risk. Identify the top three highest-risk AI deployments for immediate attention.
- Days 31–60 — Assess: For high-risk AI systems, conduct a security assessment covering: access controls, data flows, prompt injection exposure, third-party risk posture, and alignment with MAIGF/MAS TRM requirements. Identify gaps and prioritise remediation.
- Days 61–90 — Govern: Implement foundational controls: AI system ownership assignments, data governance policies for AI, vendor due diligence updates, and prompt injection detection for customer-facing AI interfaces. Establish an AI steering committee or assign AI governance accountability to an existing governance body. Draft your AI incident response playbook.
After the initial 90 days, AI governance becomes an ongoing programme — quarterly risk reviews, annual security testing, and continuous monitoring of AI system behaviours.
Conclusion: AI Governance Is Security Governance
The organisations that will avoid AI security incidents in 2026 and beyond are not those with the most sophisticated AI systems — they are those that treat AI governance as an extension of their existing security governance, not a separate exercise.
Singapore's regulatory environment — PDPA, MAS TRM, Cyber Trust Mark, the MAIGF — provides a strong framework. The gap is in execution: translating policy requirements into technical controls, extending existing security programmes to cover AI attack surfaces, and building the operational capability to detect and respond to AI-specific incidents.
Infinite Cybersecurity provides AI governance assessments, AI security testing, and advisory services tailored to Singapore's regulatory environment. If you are deploying AI in a regulated context or applying for the Cyber Trust Mark, contact our team to discuss your AI governance posture.
Assess Your AI Security Governance Posture
Get a structured assessment of your AI deployments against Singapore's regulatory requirements — MAIGF, MAS TRM, PDPA, and Cyber Trust Mark. Identify gaps before they become incidents.
Talk to Our Team