Most AI governance today is reactive. Something happens — data is sent to a model, an agent takes an action, an AI-driven decision is executed — and then someone reviews it. Maybe. Eventually. If the logging was good enough to reconstruct what happened.
For low-risk AI interactions, that is often acceptable. An employee asking ChatGPT to rephrase a paragraph of internal text does not need pre-approval. The risk is low and the cost of governance friction outweighs the benefit.
But for high-risk AI actions — sending client data to an external model, executing a financial decision based on AI output, allowing an AI agent to modify production records — reactive governance is not governance at all. It is post-incident review.
This article explains why pre-execution approval gates are the critical missing piece in most AI governance programmes and how to implement them without creating bottlenecks.
What makes an AI action “high-risk”
Not every AI interaction requires the same level of oversight. A practical risk framework classifies actions by their potential impact:
Low risk — autonomous execution. Read-only AI operations, internal content drafting, data summarisation from non-sensitive sources. These can run without approval, logged for audit purposes.
Medium risk — monitored execution. AI operations involving internal data, code generation, or content that will be reviewed before external use. These run with enhanced logging and periodic review, but not per-interaction approval.
High risk — approval required. AI operations that involve any of:
- Personal data or client-confidential data sent to external models
- AI-driven decisions that have financial, legal, or operational consequences
- Agent actions that modify production data, send external communications, or execute transactions
- Interactions with models or tools outside the organisation’s approved list
Critical risk — blocked without exception. AI operations that cannot be mitigated to acceptable risk under any circumstances — for example, processing data covered by specific contractual prohibitions or executing autonomous actions in safety-critical systems.
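As a sketch, the four tiers can be encoded as a small policy function. The tier names, the signal flags (`personal_data`, `modifies_production`, and so on), and the ordering are illustrative assumptions, not part of any standard:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "autonomous"        # execute, log for audit
    MEDIUM = "monitored"      # execute with enhanced logging
    HIGH = "approval"         # pause and queue for review
    CRITICAL = "blocked"      # never execute

def classify_action(action: dict) -> RiskTier:
    """Classify an AI action against the tiers above.

    Rules are checked from most to least severe, so an action that
    matches several tiers lands in the strictest one. All flag names
    on the action dict are illustrative placeholders.
    """
    if action.get("contract_prohibited") or action.get("safety_critical"):
        return RiskTier.CRITICAL
    if (action.get("personal_data")
            or action.get("consequential_decision")
            or action.get("modifies_production")
            or not action.get("model_approved", True)):
        return RiskTier.HIGH
    if action.get("internal_data") or action.get("generates_code"):
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Client data sent to an unapproved external model: approval required.
tier = classify_action({"personal_data": True, "model_approved": False})
```

Evaluating from strictest to most permissive tier matters: an action involving both internal data and personal data must resolve to high risk, not medium.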
The specific boundaries between tiers will vary by organisation, industry, and regulatory environment. The point is to have boundaries at all, and to enforce them technically rather than relying on employee judgment alone.
Why post-hoc review is not enough
Post-hoc review — looking at logs after the fact — is valuable for compliance monitoring and incident investigation. It is not sufficient for high-risk governance, for three reasons:
The damage is already done. If client data was sent to an unapproved model, reviewing the log does not un-send it. If an AI agent deleted production records, the audit trail does not undelete them. For actions with irreversible consequences, the only effective governance happens before execution.
Detection is unreliable. Post-hoc review depends on someone noticing the problem. In a high-volume AI environment, the signal-to-noise ratio is low. A single problematic interaction in thousands of daily AI requests is easy to miss — especially if the logging does not surface the risk clearly.
It does not satisfy regulatory requirements. GDPR Article 22, the EU AI Act’s high-risk system provisions, and sector-specific regulations (MaRisk, DORA, SOX) expect controls that prevent harmful outcomes, not just detect them after the fact. An auditor asking “what controls prevent your AI from making unsupervised decisions with customer data?” is not satisfied by “we review logs weekly.”
How pre-execution approval gates work
A pre-execution approval gate intercepts an AI action before it executes, presents it to a reviewer, and proceeds or blocks based on the reviewer’s decision. The flow looks like this:
1. The AI action is initiated. An employee submits a prompt, an agent proposes a tool call, or an AI-driven workflow reaches a decision point.
2. The policy engine evaluates the action. The governance layer classifies the action based on the data involved, the operation type, and the policy rules configured for this context. If the action falls below the approval threshold, it proceeds automatically with logging.
3. High-risk actions are paused. If the action exceeds the approval threshold, execution is suspended. The action is queued for review. The AI session waits for a decision.
4. The reviewer sees a redacted view. The approval request is presented to the designated reviewer — a manager, compliance officer, or security lead — in a form that shows what is being requested without exposing sensitive data unnecessarily. The reviewer sees the action type, data categories involved, the policy rule that triggered the gate, and enough context to make an informed decision.
5. The reviewer decides. Approve (proceed with execution), reject (block the action), or modify (adjust the action before proceeding). The decision is logged with the reviewer’s identity, timestamp, and reasoning.
6. Execution resumes or stops. If approved, the action executes. If rejected, the session is notified and can handle the rejection gracefully. In both cases, the audit trail records the complete sequence: action proposed → policy evaluation → approval requested → decision made → outcome.
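The six steps above can be sketched as a single gate function. The `execute` and `review` callables, the field names in the redacted view, and the in-memory audit log are all illustrative assumptions; a real implementation would queue requests, enforce SLAs, and persist the trail:

```python
audit_log: list = []  # illustrative in-memory audit trail

def gate(action: dict, execute, review):
    """Pre-execution approval gate.

    Evaluates the action, pauses high-risk ones for human review,
    and records every step: proposed -> evaluated -> approval
    requested -> decision -> outcome. `review` stands in for the
    human reviewer and returns "approve" or "reject".
    """
    audit_log.append(("proposed", action["type"]))
    if not action.get("high_risk"):                 # step 2: below threshold
        result = execute(action)
        audit_log.append(("auto_executed", action["type"]))
        return result
    # Steps 3-4: suspend execution and build a redacted view -
    # action type and data categories, never the raw payload.
    redacted = {"action_type": action["type"],
                "data_categories": action.get("data_categories", []),
                "triggered_rule": action.get("rule", "default")}
    audit_log.append(("approval_requested", redacted))
    decision = review(redacted)                     # step 5: human decides
    audit_log.append(("decision", decision))
    if decision == "approve":                       # step 6: resume or stop
        result = execute(action)
        audit_log.append(("executed", action["type"]))
        return result
    audit_log.append(("blocked", action["type"]))
    return None                                     # session handles rejection
```

Note that the reviewer callable receives only the redacted view, never the full action: the gate enforces data minimisation on the approval path itself.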
Designing approval workflows that do not create bottlenecks
The most common objection to pre-execution approval is that it slows things down. And it does — that is the point. For high-risk actions, a brief pause is the cost of governance. But poor implementation can turn a necessary pause into an unworkable bottleneck.
Here is how to avoid that.
Set approval thresholds carefully. If every AI interaction requires approval, the workflow is unusable. If nothing requires approval, the governance is performative. Use the risk classification framework above: only high-risk and critical-risk actions trigger the gate. Low-risk and medium-risk actions proceed with logging.
Route to the right reviewer. Not every approval needs to go to the CISO. Route based on context: data classification approvals go to a data protection lead, financial decisions go to a finance manager, client data actions go to the engagement partner. Match the reviewer to the risk domain.
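In code, that routing is little more than a lookup with a safe fallback. The domain and role names here are placeholders for your own org chart:

```python
# Illustrative routing table: risk domain -> reviewer role.
REVIEWER_BY_DOMAIN = {
    "data_classification": "data_protection_lead",
    "financial_decision": "finance_manager",
    "client_data": "engagement_partner",
}

def route_approval(domain: str) -> str:
    """Pick the reviewer role for an approval request.

    Unknown domains fall back to the security lead rather than
    leaving a request unrouted and silently stuck in the queue.
    """
    return REVIEWER_BY_DOMAIN.get(domain, "security_lead")
```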
Set response time expectations. Define SLAs for approval response. A 15-minute SLA for most actions, escalation after 30 minutes, and auto-block after 60 minutes gives reviewers a clear window without leaving actions queued indefinitely.
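A minimal sketch of that SLA ladder, assuming the example 15/30/60-minute thresholds above:

```python
def sla_state(minutes_waiting: float) -> str:
    """Map time-in-queue to an SLA state.

    Thresholds follow the 15/30/60-minute example and should be
    tuned per organisation. Expired requests fail closed: the
    action is auto-blocked, never auto-approved.
    """
    if minutes_waiting < 15:
        return "pending"        # within the primary reviewer's window
    if minutes_waiting < 30:
        return "reminder"       # nudge the primary reviewer
    if minutes_waiting < 60:
        return "escalated"      # route to the delegate or next tier
    return "auto_blocked"       # fail closed on timeout
```

Failing closed is the important design choice: a timed-out approval must block the action, because auto-approving on silence would quietly turn the gate back into post-hoc review.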
Make the approval interface simple. The reviewer should be able to understand the request and make a decision in under two minutes. If the approval interface requires reading raw data, consulting documentation, or understanding technical implementation details, the workflow will fail.
Allow delegation. Reviewers need to delegate when they are unavailable. Build delegation chains so approvals are not blocked by a single person’s calendar.
Approval workflows for AI agents
AI agents introduce a specific variant of this challenge. Agents make decisions and take actions autonomously — that is their purpose. But autonomous action without governance is a risk most organisations are not prepared for.
For agent workflows, approval gates apply at the tool-call level:
- Read-only tool calls (retrieve, summarise, analyse) can typically execute autonomously
- Write-capable tool calls (send, create, update, delete) should trigger approval for any action that affects production data, external communications, or financial systems
- Multi-step workflows may need approval at specific checkpoints rather than for every individual action — review the plan, approve the sequence, monitor execution
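These tool-call rules can be sketched as a gating predicate. The verb sets, target labels, and the `trust_score` input (a stand-in for whatever run-history metric the organisation maintains, with an illustrative 0.8 threshold) are all assumptions:

```python
READ_ONLY = {"retrieve", "search", "summarise", "analyse"}
WRITE_VERBS = {"send", "create", "update", "delete"}

def needs_approval(tool_call: dict, trust_score: float) -> bool:
    """Decide whether an agent tool call must pause for approval.

    `trust_score` (0.0-1.0) is a placeholder for the agent's earned
    trust from prior run history. Tool names are assumed to start
    with their verb, e.g. "delete_record".
    """
    verb = tool_call["name"].split("_")[0]
    if verb in READ_ONLY:
        return False            # read-only calls run autonomously
    if verb not in WRITE_VERBS:
        return True             # unknown verbs: fail closed, always gate
    targets = set(tool_call.get("targets", ()))
    if targets & {"production", "external", "financial"}:
        return True             # always gated, regardless of trust
    # Other writes: gate until the agent has earned sufficient trust.
    return trust_score < 0.8
```

This also captures the earned-trust principle below: the same draft-update call is gated for a new agent but autonomous for one with an established track record.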
The key principle: an agent's level of autonomy should be proportional to the trust it has earned through prior run history and to the risk the action carries. New agents with unproven behaviour should face tighter approval requirements than established agents with a track record.
What this means for your organisation
If your organisation uses AI in workflows that process sensitive data or drive consequential decisions, you need to answer one question: what happens between the AI deciding to act and the action executing?
If the answer is “nothing — the action just executes,” you have a governance gap that no amount of post-hoc logging can close.
Pre-execution approval gates close that gap. They add a moment of human judgment at the point where it matters most — before the irreversible happens. Done well, they add proportionate friction to high-risk actions without slowing down everyday use.
Qadar provides pre-execution approval gates for AI interactions and agent workflows — built into Shield Control, configured in minutes. See how it works.