What You Need to Know from Gartner’s 2025 TRiSM Report on AI Risk and Governance

08/05/2025 · 7 min read

AI is no longer a technology companies are just experimenting with. Generative AI (GenAI) is now embedded in real workflows, customer experiences, and business decisions. With this new power, however, comes serious responsibility to ensure a safe and trustworthy experience. Enter TRiSM, short for Trust, Risk, and Security Management.

Gartner defines TRiSM in its Market Guide for AI Trust, Risk, and Security Management as a framework to help organizations build and run AI systems that aren’t just innovative, but safe, ethical, and resilient, too. And, if you’re using any form of AI – from an off-the-shelf chatbot to a custom-built large language model (LLM) – TRiSM matters to you and your organization. 

What is TRiSM, and Why Does It Matter?

TRiSM is Gartner’s holistic framework for managing the entire lifecycle of AI models and agents, ensuring they’re used responsibly, securely, and with proper oversight. TRiSM enables organizations to build and scale AI initiatives from policy creation to runtime monitoring without compromising data integrity, security, or user trust.  

The 2025 report highlights the growing complexity of GenAI risk and the urgent need for real-time enforcement, cross-functional coordination, and purpose-built tools that scale effective controls to keep pace with organizational demand for GenAI. Read on as we distill the report’s most important findings into an actionable TL;DR, complete with guidance on where to begin.

Key Findings from the TRiSM Report

Gartner’s latest research outlines a shifting risk landscape as GenAI adoption accelerates: 

  • Data compromise, third-party risks, and inaccurate or harmful outputs top the list of AI-related concerns. These risks cut across both commercial application programming interfaces (APIs) and homegrown models.
  • Internal data oversharing is now a greater threat than external or malicious actors. Employees unintentionally entering confidential or regulated data into GenAI systems pose a rising security risk.
  • TRiSM is essential across all AI types, not just internally built LLMs. It also applies to API-connected models, embedded AI features, and increasingly autonomous agentic systems.
  • Demand for GenAI-specific TRiSM tools is growing sharply. This demand pushes vendors to expand their capabilities in areas such as runtime enforcement and prompt auditing.
  • Despite growing awareness, many organizations remain stuck in the “governance on paper” phase. They often lack the operational controls needed for real-world protection. 

What’s New in TRiSM for 2025

While many core TRiSM principles remain consistent, Gartner’s 2025 report reflects a noticeable shift in urgency, focus, and scope, particularly in light of how quickly GenAI usage has expanded over the past year. Here’s what’s changed: 

  • Runtime enforcement is no longer optional. In 2024, many organizations were focused on policies and development-time controls. The 2025 report stresses that without real-time monitoring and automated guardrails, those policies have little impact in production.
  • Agent-based AI systems are now on the radar. This year’s report highlights the unique risks tied to autonomous and agentic AI — systems that make decisions, take actions, or persist across sessions. Managing and observing these systems requires new layers of inspection and accountability.
  • The policy-to-practice gap is growing. While more enterprises have formal AI governance strategies in place, very few have successfully operationalized them. Gartner calls out this growing disconnect and warns that frameworks without execution leave organizations exposed.
  • Vendor innovation is accelerating. Unlike 2024, when TRiSM tooling was more conceptual or fragmented, we’re now seeing a surge of commercial products purpose-built for AI risk management. This includes prompt auditing, LLM firewalls, and AI observability platforms. 

In short, Gartner’s 2025 TRiSM update pushes organizations to move from planning to implementation, and to treat AI systems like the evolving, high-risk assets they are. 

Why Traditional Controls Aren’t Enough for GenAI

While most organizations have solid foundations in identity management, data protection, and security operations, GenAI introduces an entirely new layer of complexity. The same controls that worked for cloud apps or structured data aren’t equipped to handle non-deterministic, opaque, and often external AI systems.

Here’s why existing toolsets fall short: 

  • GenAI outputs are unpredictable. Even well-trained LLMs can produce hallucinated content, biased responses, or inadvertently surface sensitive information from their training data.
  • APIs to third-party model providers like OpenAI, Anthropic, or Google extend your attack surface and your responsibility. The moment users input sensitive content into a prompt, data privacy and compliance risks spike.
  • Traditional data loss prevention (DLP) and identity and access management (IAM) tools weren’t built for context-aware AI usage. They can’t interpret prompt content, track retrieval-augmented generation (RAG) queries, or determine whether an output violates policy.
  • Governance doesn’t stop at deployment. Organizations need both pre-deployment checks and runtime inspection to catch inappropriate usage, drift, or abuse as it happens. 

The Four Core Pillars of TRiSM

Gartner outlines four essential pillars for operationalizing TRiSM. These pillars form a tactical blueprint for organizations seeking to secure and scale AI responsibly.

1. AI Governance

AI governance starts with visibility. Organizations must maintain a clear and current picture of where and how AI is used. 

  • Create and maintain an inventory of all AI assets — models, APIs, plugins, and agents.
  • Document use cases, inputs, outputs, and known risks associated with each deployment.
  • Implement risk scoring and policy-based approvals for higher-risk scenarios.
  • Ensure auditability and traceability from model input to output, especially for regulated workflows. 

This foundational layer enables risk-informed decisions and ensures oversight is embedded from day one. 
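
To make the inventory idea concrete, here is a minimal sketch of an AI asset registry with a crude additive risk score and a policy-approval gate. The asset fields, risk factors, and threshold are illustrative assumptions, not part of Gartner’s framework; a real program would tie scoring to its own risk taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One entry in the AI asset inventory (hypothetical schema)."""
    name: str
    kind: str                 # "model", "api", "plugin", or "agent"
    use_case: str
    handles_pii: bool = False
    customer_facing: bool = False
    autonomous: bool = False
    known_risks: list = field(default_factory=list)

    def risk_score(self) -> int:
        # Crude additive score: higher means more oversight needed.
        return (3 * self.handles_pii
                + 2 * self.customer_facing
                + 2 * self.autonomous
                + len(self.known_risks))

APPROVAL_THRESHOLD = 3  # assumed cutoff: scores at or above need sign-off

def needs_policy_approval(asset: AIAsset) -> bool:
    return asset.risk_score() >= APPROVAL_THRESHOLD

inventory = [
    AIAsset("support-chatbot", "api", "customer Q&A",
            handles_pii=True, customer_facing=True),
    AIAsset("code-helper", "plugin", "internal dev assistance"),
]

for asset in inventory:
    print(asset.name, asset.risk_score(), needs_policy_approval(asset))
```

Even a toy registry like this forces the documentation of use cases and risks per deployment, which is the point of the governance pillar.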

2. Runtime Inspection & Enforcement

Governance doesn’t end once AI systems go live. Production environments introduce new dynamics that require continuous oversight. 

  • Monitor AI activity in real time, including prompt and output behavior.
  • Set up automated controls like redaction, prompt blocking, and anomaly detection.
  • Identify drift or misuse quickly, especially as models evolve or are repurposed.
  • Enable alerting and escalation for violations, abuse, or policy exceptions. 

This layer provides the operational safeguards needed to maintain trust while AI systems are actively used. 
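
The redaction and prompt-blocking controls above can be sketched as a simple runtime guard that sits between users and the model. The regex patterns and blocklist terms are simplified assumptions for illustration; production deployments rely on dedicated DLP or LLM-firewall tooling rather than hand-rolled rules.

```python
import re

# Illustrative PII patterns (far from exhaustive).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
# Hypothetical policy blocklist terms.
BLOCKLIST = {"internal-only", "project-nightfall"}

def guard_prompt(prompt: str) -> tuple[str, bool]:
    """Return (sanitized_prompt, allowed).

    Blocks prompts containing flagged terms; redacts obvious PII
    patterns from everything else before it reaches the LLM.
    """
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "", False  # block and escalate to a reviewer
    sanitized = EMAIL.sub("[EMAIL]", prompt)
    sanitized = SSN.sub("[SSN]", sanitized)
    return sanitized, True

sanitized, ok = guard_prompt("Summarize the ticket from jane@example.com")
print(ok, sanitized)  # allowed, with the email address redacted
```

The same interception point is where logging for drift detection and alerting on policy exceptions would naturally live.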

3. Information Governance

AI systems are only as trustworthy as the data they interact with. Information governance ensures sensitive content is identified, protected, and used appropriately throughout the AI lifecycle. 

This layer proactively addresses privacy, regulatory, and intellectual property risks. 
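
One common tactic here is labeling content by sensitivity before it ever reaches an AI system, for example, before documents are added to a RAG index. The marker keywords and labels below are illustrative assumptions; real information governance uses classification engines, not keyword lists.

```python
# Hypothetical sensitivity markers for demonstration only.
SENSITIVE_MARKERS = ("confidential", "salary", "medical", "ssn")

def classify(doc: str) -> str:
    """Assign a coarse sensitivity label to a document."""
    text = doc.lower()
    return "restricted" if any(m in text for m in SENSITIVE_MARKERS) else "general"

def ingest(docs: list[str]) -> list[str]:
    # Only "general" documents enter the broadly accessible index;
    # "restricted" content is held back for access-controlled handling.
    return [d for d in docs if classify(d) == "general"]
```

Labeling at ingestion means retrieval can later filter on sensitivity, rather than discovering leaks in model outputs after the fact.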

4. Infrastructure and Stack Controls

The technical foundation of AI systems – models, APIs, and compute environments – must be secured just like any other critical business infrastructure. 

  • Protect API keys, model weights, and plugin configurations from unauthorized access.
  • Use container security, confidential computing, and runtime isolation to harden execution environments.
  • Apply zero-trust principles to all AI-related infrastructure, whether on-premises, hybrid, or cloud-native.
  • Ensure the entire stack supports traceability, telemetry, and audit readiness. 

This layer reduces the attack surface and supports secure AI operations at scale. 
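
As one small example of the first bullet, API keys should come from the environment or a secrets manager, never from source code, and the application should fail fast when they are missing. The variable name `MODEL_API_KEY` is an assumption for illustration.

```python
import os

def load_api_key(var: str = "MODEL_API_KEY") -> str:
    """Load a model API key from the environment; fail fast if absent."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; refusing to start")
    return key
```

Failing at startup keeps a misconfigured deployment from silently running without the credential controls the rest of the stack assumes.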

Getting Started with TRiSM

The TRiSM framework may sound complex, but getting started doesn’t have to be. For most organizations, the right first step is understanding where AI is already being used, both formally and informally, and how that use intersects with existing data governance and security practices. 

Here’s a practical path to build momentum: 

  1. Begin with discovery. Map out where GenAI tools, APIs, or agents are currently in use across business units or departments (whether officially sanctioned or not).
  2. Prioritize your riskiest use cases. Focus initial efforts on AI-powered workflows that handle sensitive data, generate customer-facing content, or automate decision-making.
  3. Connect cross-functional teams. TRiSM isn’t a solo project. Align stakeholders from security, IT, legal, compliance, and operations to ensure policies are enforceable and not just theoretical.
  4. Choose solutions that layer into your existing stack. Look for modular, standards-based, and flexible tools to work alongside your current governance and security infrastructure. 
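
The discovery step above can start as simply as comparing observed egress domains (from proxy or firewall logs) against known GenAI endpoints to surface shadow AI usage. The domain list below is a small, non-exhaustive assumption; real discovery tooling covers far more providers and signals.

```python
# Non-exhaustive sample of well-known GenAI API endpoints.
KNOWN_GENAI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_genai_traffic(egress_domains: list[str]) -> list[str]:
    """Return the GenAI endpoints seen in observed egress traffic."""
    return sorted(set(egress_domains) & KNOWN_GENAI_DOMAINS)

hits = find_genai_traffic(["api.openai.com", "example.com", "api.openai.com"])
print(hits)  # → ['api.openai.com']
```

Even this crude match gives governance teams a starting inventory of which departments are already calling GenAI services, sanctioned or not.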

Treat TRiSM as an extension – not a replacement – of your broader risk and compliance strategy. Many of the controls are familiar; they just need to be adapted to fit a faster, more ambiguous, and more fluid AI landscape. 

Building Trustworthy AI Requires Action Now

The message from Gartner is clear: GenAI introduces new types of risk at a scale and speed that organizations cannot afford to ignore. As AI becomes more embedded in everyday workflows, trust and security must be built into the foundation, not added later.

That process starts with visibility, accountability, and policy, but it doesn’t end there. Governance that lives only in documents or policy decks won’t hold up in production environments. Real trust requires enforcement: runtime monitoring, contextual data controls, and infrastructure-level protections.

Organizations that act today by establishing clear guardrails, assigning ownership, and enforcing AI policies precisely will be better positioned to move fast, scale responsibly, and earn lasting stakeholder trust. 

About the Author

Timothy Boettcher

Timothy Boettcher is a recognized leader in digital transformation, workplace modernization, and AI-driven productivity. A Microsoft MVP for M365 Copilot, he specializes in information management, data governance, and automation, helping organizations tame the ‘Wild West’ of unstructured data and maximize the value of their collaborative automation platforms. With over 20 years of experience across Australia, Singapore, Japan, and the U.S., Timothy has led modernization initiatives in both the Public and Private Sectors. He shares his expertise through weekly training videos, published articles, and speaking engagements at KMWorld, ARMA NOVA, AIIM, 365EduCon, and more. A dynamic speaker, Timothy delivers actionable insights on Microsoft Copilot, SharePoint, Teams, and Microsoft 365, helping organizations leverage AI and cloud collaboration to drive efficiency, enhance governance, and maximize ROI.
 
Connect with me here: https://timothyb.com.au/