If you have developers shipping AI applications, you have probably heard of MCP — the Model Context Protocol. You may not yet have thought through what it means for your security posture.
This post explains MCP in plain language, describes the risks it introduces, and outlines what governance looks like for teams that want to use it safely.
What MCP is, in one paragraph
MCP is an open protocol that lets AI models call external tools. Instead of hard-coding tool integrations into each AI application, developers expose capabilities through MCP servers — standardised endpoints that a compatible AI client can discover and call. A single MCP server might expose tools to read a file system, query a database, send an email, or call a third-party API. The AI model asks; the MCP server acts.
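Concretely, MCP runs over JSON-RPC 2.0, and the MCP specification defines a `tools/call` method for invoking a server's tools. The sketch below shows roughly what one request/response pair looks like on the wire; the tool name and arguments are invented for illustration.

```python
import json

# A simplified sketch of the JSON-RPC 2.0 messages MCP uses.
# The method name ("tools/call") follows the MCP specification;
# the tool name and arguments here are invented for illustration.

# The client asks the server to invoke one of its exposed tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",  # hypothetical tool
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# The server runs the tool and returns a structured result.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "42"}],
        "isError": False,
    },
}

print(json.dumps(request, indent=2))
```

Note what is absent from the protocol exchange itself: nothing in the request says which agent is calling, what it is allowed to call, or whether anyone approved the action. That context has to come from a layer around the protocol.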
This is genuinely useful. It makes AI applications more composable and easier to build. It also dramatically expands the action surface, and what happens on that surface is governed by whatever the AI model decides to do.
Why MCP introduces security risk
The risk is not in the protocol itself. MCP is well-specified and designed with developer ergonomics in mind. The risk is in how it is deployed — and what governance, if any, exists between the model and the tools it can call.
Three specific risks:
1. Unscoped tool access
MCP servers expose tools. An AI client that connects to an MCP server can, by default, call any tool that server exposes. If the server exposes 40 tools — including some that modify records, send messages, or access sensitive data — the connected AI client has access to all 40.
Most AI deployments do not scope which tools a specific agent or user session is allowed to call. They deploy the server, connect the client, and assume the model will use good judgement. Good judgement is not an access control policy.
The governance requirement: Tool-level access control. Each agent or session should only be able to call the specific tools it needs — not every tool exposed by every registered server.
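The core of tool-level access control is a per-agent allowlist checked before every call. A minimal sketch, with invented agent and tool names:

```python
# Sketch of per-agent tool scoping. The policy table, agent names,
# and tool names are invented; the point is that authorisation
# happens per tool, not per server, and defaults to deny.

TOOL_POLICY = {
    "billing-agent": {"query_invoices", "read_customer"},
    "support-agent": {"read_customer", "create_ticket"},
}

def authorise(agent: str, tool: str) -> bool:
    """Allow a call only if the tool is in the agent's allowlist."""
    return tool in TOOL_POLICY.get(agent, set())

assert authorise("billing-agent", "query_invoices")
assert not authorise("billing-agent", "send_email")     # exposed, not scoped
assert not authorise("unknown-agent", "read_customer")  # default deny
```

The design choice that matters is the last line: an agent with no policy entry gets nothing, rather than everything.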
2. Long-lived direct client trust
Many MCP deployments grant clients long-lived trust. Once a client authenticates to an MCP server, it stays authenticated for the session — and sometimes well beyond it. If that client is compromised, or if the AI model is manipulated into making calls it should not, long-lived trust means the blast radius is large.
The safer architecture issues short-lived session tokens that expire at the end of the agent task. No long-lived direct trust. No standing access that can be exploited after the fact.
The governance requirement: Session-scoped credentials. MCP access should be provisioned for the duration of one agent task, not for the lifetime of a connection.
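One way to implement this is to mint a token per agent task, bind it to a tool scope and an expiry, and revoke it on first failed check. A minimal sketch, assuming an in-memory session store (a real deployment would use signed tokens, e.g. JWTs):

```python
import secrets
import time

# Sketch of session-scoped credentials. All names are illustrative;
# the shape is: scope + expiry minted per task, nothing standing.

SESSIONS: dict[str, dict] = {}

def mint_session(agent: str, tools: set[str], ttl_seconds: int = 300) -> str:
    """Issue a credential valid only for this task's tools and duration."""
    token = secrets.token_urlsafe(16)
    SESSIONS[token] = {
        "agent": agent,
        "tools": tools,
        "expires_at": time.monotonic() + ttl_seconds,
    }
    return token

def check(token: str, tool: str) -> bool:
    """Validate the token, dropping it immediately once expired."""
    session = SESSIONS.get(token)
    if session is None or time.monotonic() >= session["expires_at"]:
        SESSIONS.pop(token, None)  # expired: no lingering entry to exploit
        return False
    return tool in session["tools"]
```

Because the credential dies with the task, a compromised client holds nothing of value once the session ends.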
3. No audit trail for tool calls
When a user sends a message to an AI model, you might log the conversation. When the AI model calls an MCP tool, the call, its arguments, and its outcome are often not logged in any structured, auditable way.
This matters in practice. If an AI agent modifies a record, sends a message, or retrieves sensitive data via an MCP tool call — and you do not have a structured log of that action — you cannot reconstruct what happened during an audit. You cannot answer the question “what did this agent access and when?”
The governance requirement: Structured, trace-linked audit records for every MCP tool call — including the tool name, arguments fingerprint, the agent that initiated it, the session context, and whether human approval was obtained.
What MCP governance looks like in practice
For teams deploying MCP-connected AI applications, governance has three layers:
Layer 1: Server registration and tool inventory
Every MCP server used in your AI environment should be registered. This registration should include:
- Which MCP server is approved for use
- Which tools that server exposes
- Which agents or roles are permitted to call which tools
This is a standard concept in software operations — application registration, service allowlisting, API inventory — applied to MCP.
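The registry itself can be plain data. A minimal sketch, with invented server names, tools, and roles; the shape is what matters: approved servers, their tool inventory, and permitted callers per tool.

```python
# Sketch of an MCP server registry. Everything named here is
# illustrative; the structure mirrors the three registration items:
# approved servers, exposed tools, permitted callers.

REGISTRY = {
    "crm-mcp": {
        "approved": True,
        "tools": {
            "read_customer": {"callers": {"support-agent", "billing-agent"}},
            "update_record": {"callers": {"billing-agent"}},
        },
    },
    "email-mcp": {
        "approved": False,  # known to exist, not yet cleared for use
        "tools": {},
    },
}

def may_call(agent: str, server: str, tool: str) -> bool:
    """Unregistered server, unapproved server, or unlisted caller: deny."""
    entry = REGISTRY.get(server)
    if entry is None or not entry["approved"]:
        return False
    spec = entry["tools"].get(tool)
    return spec is not None and agent in spec["callers"]
```

The same structure doubles as the answer to an auditor's first two questions: which servers are approved, and what do they expose.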
Layer 2: Session-scoped access provisioning
When an agent session starts, it receives a scoped credential that permits access to the specific tools it needs for that task. When the session ends, the credential expires. No standing access. No long-lived client trust.
This is the difference between giving a contractor a building badge that works on specific days for the duration of the project, versus giving them a permanent access card.
Layer 3: Pre-execution approval for high-risk calls
Some tool calls should not execute without human review. Sending an external message, modifying a production record, deleting data — these are calls that warrant a pause-and-confirm step before execution.
A governance layer that can intercept an MCP tool call, present it to a reviewer in redacted form, and either proceed or abort based on that decision is the equivalent of a change management approval workflow — applied at the AI runtime layer.
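The interception itself can be a thin wrapper around tool execution. A minimal sketch, assuming an invented high-risk list, a key-only redaction, and a reviewer callback standing in for whatever approval UI a real deployment uses:

```python
# Sketch of a pre-execution approval gate. The high-risk set, the
# redaction strategy, and the reviewer callback are all illustrative.

HIGH_RISK = {"send_email", "delete_record", "update_record"}

def redact(arguments: dict) -> dict:
    """Show the reviewer the argument keys only, never raw values."""
    return {key: "<redacted>" for key in arguments}

def gated_call(tool: str, arguments: dict, execute, ask_reviewer) -> dict:
    """Intercept high-risk calls; proceed or abort on the reviewer's decision."""
    if tool in HIGH_RISK:
        if not ask_reviewer(tool, redact(arguments)):
            return {"status": "aborted", "reason": "reviewer declined"}
    return {"status": "ok", "result": execute(tool, arguments)}
```

Low-risk calls pass straight through, so the gate adds friction only where the blast radius justifies it.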
Questions to ask your AI platform team
If your organisation is using or planning to use MCP-connected AI applications, these are the governance questions worth asking:
- What MCP servers are registered in our environment? Can you produce a complete list?
- What tools does each server expose? Have you inventoried them?
- Which agents or users can call which tools? Is this access-controlled, or is it open to any connected client?
- Are tool calls logged? Can you produce an audit record for any given tool invocation?
- Are high-risk tool calls gated on human approval? Or do they execute automatically?
- How are client credentials scoped? Are they session-scoped and expiring, or long-lived?
If the answers are not readily available, your MCP deployment is running without a governance layer.
Why this matters now
MCP adoption is accelerating. Major AI platforms support it. Developers are building against it. The tooling ecosystem is expanding rapidly.
The governance gap is not hypothetical — it is the standard state of most early MCP deployments, for the same reason the governance gap around LLM usage in general is standard: the developers moved fast, the security tooling is still maturing, and the buyer conversation has not yet caught up with the technical reality.
For regulated industries — financial services, legal, healthcare — the question is not whether MCP governance is required. It is whether you get ahead of it before an audit or incident forces the conversation.
How Qadar governs MCP
Qadar’s Shield Control includes MCP governance capabilities:
- Server registration — define which MCP servers are approved for use in your environment
- Tool inventory — enumerate the tools each server exposes and the permitted callers
- Session-scoped access — issue short-lived runtime sessions per agent task; no long-lived direct client trust
- Pre-execution approval gates — intercept high-risk tool calls before execution and route to a human reviewer
- Trace-linked audit trail — every MCP tool call is logged with actor, session context, arguments fingerprint, and approval reference
Want to see what a governed MCP deployment looks like? Book a walkthrough.