Most organisations that write AI usage policies end up with one of two documents: a two-page statement of intent that nobody reads, or a 40-page compliance artefact that nobody follows.
Neither achieves anything. The employees who would benefit from guidance don’t have it. The compliance team can’t point to evidence of enforcement. And the organisation remains exposed.
This guide is for operations leaders, GCs, and CISOs who want to build something more useful — a policy that is proportionate, enforceable, and actually shapes how people work.
Why most AI policies fail
Before writing a new policy, it’s worth understanding why most existing ones don’t work.
They’re written around the worst case. Policies drafted by legal and compliance teams often try to anticipate every possible misuse scenario. The result is language so cautious that it prohibits things employees need to do. Rules that broad are impractical to enforce and feel unreasonable, so people ignore them entirely, including the reasonable parts.
They treat all AI use as equivalent. Using a consumer chatbot to draft an internal meeting agenda carries a completely different risk profile from using an AI agent to process customer data. Policies that apply uniform restrictions to both either over-restrict low-risk use or under-restrict high-risk use.
Enforcement is an afterthought. A policy is only as good as its enforcement mechanism. Policies that rely on employees self-certifying compliance, or on occasional spot checks, do not produce consistent behaviour. The gap between what the policy says and what people actually do becomes a permanent feature of your risk posture.
They’re outdated before they’re published. AI capability is moving faster than annual policy review cycles. A policy written around GPT-4 and Copilot may not address the AI agents, multimodal models, or specialised business tools that arrive six months later.
The four components of a policy that works
1. A clear scope definition
Start by answering: what does “AI” mean in this policy? The term is too broad to be useful on its own.
Be specific. Your policy should cover:
- Frontier models accessed through consumer interfaces (ChatGPT, Claude.ai, Gemini, Copilot)
- API-based model access from internal scripts, tools, and automation
- AI-enabled SaaS — third-party tools with AI features that process company data
- AI agents — autonomous or semi-autonomous workflows that use AI to take actions
Define what is in scope and what is explicitly out of scope. Internal models deployed on your own infrastructure with appropriate controls may warrant different treatment from external API calls.
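One way to keep that boundary unambiguous is to record it as structured data alongside the policy text, so reviewers and tooling work from the same definition. A minimal sketch (the structure and the out-of-scope entry are illustrative assumptions, not part of any standard):

```python
# Illustrative scope declaration; the category names mirror the policy text,
# and the out-of-scope entry is an assumption for the sketch.
AI_POLICY_SCOPE = {
    "in_scope": [
        "consumer_interfaces",  # ChatGPT, Claude.ai, Gemini, Copilot
        "api_access",           # internal scripts, tools, and automation
        "ai_enabled_saas",      # third-party tools that process company data
        "ai_agents",            # autonomous or semi-autonomous workflows
    ],
    "out_of_scope": [
        "self_hosted_models",   # internal models on your own infrastructure,
                                # governed by separate infrastructure controls
    ],
}
```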
2. A tiered risk framework
Not all AI use carries the same risk. A tiered framework maps use cases to controls without applying maximum restriction everywhere.
Tier 1 — Open use: AI for personal productivity, drafting, research, or internal communication where no regulated or confidential data is involved. Permitted with standard acceptable use obligations (no sensitive data, output review before external use).
Tier 2 — Controlled use: AI for tasks involving customer data, internal strategy, financial information, or regulated content. Permitted with specific controls: approved models or platforms only, output review required, logging enabled.
Tier 3 — Restricted use: AI for high-stakes decision-making, regulated outputs (legal documents, financial advice, medical information), or autonomous agent actions. Permitted only with explicit sign-off, enhanced monitoring, and documented accountability.
Tier 4 — Prohibited: AI use that cannot be mitigated to acceptable risk — for example, using consumer AI to process data covered by specific contractual prohibitions, or deploying agents with access to production systems without human-in-the-loop controls.
This framework lets you say yes to most things (which drives adoption of the governed path) while applying appropriate friction to the use cases that actually carry material risk.
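A sketch of how this tiering might be encoded so that tooling can apply it consistently. The tier names follow the framework above; the data categories, the mapping, and the rule that unknown categories default to Controlled are assumptions for illustration:

```python
from enum import IntEnum

class Tier(IntEnum):
    OPEN = 1        # personal productivity, no regulated or confidential data
    CONTROLLED = 2  # customer data, strategy, financial or regulated content
    RESTRICTED = 3  # high-stakes decisions, regulated outputs, agent actions
    PROHIBITED = 4  # risk cannot be mitigated to an acceptable level

# Illustrative mapping from data categories involved in a use case to the
# minimum tier that applies. Categories and thresholds are assumptions.
DATA_CATEGORY_TIER = {
    "public": Tier.OPEN,
    "internal": Tier.CONTROLLED,
    "customer_pii": Tier.CONTROLLED,
    "financial": Tier.CONTROLLED,
    "regulated_output": Tier.RESTRICTED,
    "contractually_prohibited": Tier.PROHIBITED,
}

def classify_use_case(data_categories: list[str], autonomous_agent: bool) -> Tier:
    """Return the governing tier: the strictest tier triggered by any input."""
    # Unknown categories default to Controlled rather than Open.
    tier = max((DATA_CATEGORY_TIER.get(c, Tier.CONTROLLED) for c in data_categories),
               default=Tier.OPEN)
    # Autonomous agent actions are at least Tier 3 under this framework.
    if autonomous_agent:
        tier = max(tier, Tier.RESTRICTED)
    return tier

# Example: an agent workflow touching customer PII lands in Restricted.
print(classify_use_case(["customer_pii"], autonomous_agent=True))  # Tier.RESTRICTED
```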
3. A catalogue of approved tools
Your policy needs a positive list, not just a negative one. Employees adopt shadow AI when they can’t find a faster approved alternative. Maintain and publish a list of vetted AI tools and models your teams can use for common use cases.
Include:
- The tool name and approved version/tier
- What it’s approved for
- What it’s not approved for
- Data handling classification (what categories of data can be processed)
- Whether it is covered by a DPA or enterprise agreement
Review this list quarterly. When a team asks about a new tool, run it through your vetting process within two weeks — not two quarters. Slow vetting is a primary driver of shadow AI.
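A catalogue maintained as prose tends to drift. A minimal sketch of the same fields as structured data, assuming a simple internal registry (the field names and the example entry are illustrative, not a tool recommendation):

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    approved_version: str           # approved version or subscription tier
    approved_for: list[str]         # use cases the tool is vetted for
    not_approved_for: list[str]     # explicitly out-of-scope use cases
    data_classification: list[str]  # data categories it may process
    enterprise_agreement: bool      # covered by a DPA or enterprise contract
    last_reviewed: str              # ISO date of the last quarterly review

# Illustrative entry; the tool details are an assumption, not a recommendation.
EXAMPLE_ENTRY = ApprovedTool(
    name="ExampleChat Enterprise",
    approved_version="Enterprise plan only",
    approved_for=["drafting", "internal research", "meeting summaries"],
    not_approved_for=["customer PII processing", "legal or financial advice"],
    data_classification=["public", "internal"],
    enterprise_agreement=True,
    last_reviewed="2025-01-15",
)
```

Stored this way, the list can feed an internal portal or the vetting workflow itself, and the quarterly review becomes a diff rather than a rewrite.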
4. Enforcement that operates automatically
This is where most policies break down. Manual compliance checks and self-certification do not produce consistent behaviour. A policy that is meant to control AI use needs enforcement mechanisms that operate at the point of use, not after the fact.
Effective enforcement for AI policy looks like:
- Technical controls at the prompt layer using Shield Web. Rules that filter or block categories of sensitive data before they reach external models. These operate regardless of employee intent and don’t require anyone to remember the policy.
- Access controls for AI agents. Specifying programmatically what models an agent workflow can invoke, what data it can access, and what actions it can take. Not a document — a technical permission model.
- Logging and audit. A complete record of AI interactions that lets you verify policy compliance, respond to incidents, and satisfy regulatory audit requirements.
- Approved tooling that makes compliance easy. When the secure path is also the convenient path, adoption follows.
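As a rough illustration of what a prompt-layer control does (this is not the Shield Web product or its API; the patterns, tier rule, and redaction behaviour are assumptions for the sketch), a filter sits between the user and the external model and redacts or blocks sensitive content before it leaves the organisation:

```python
import re

# Illustrative patterns; a real deployment would use the organisation's own
# data classification rules, not these assumptions.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}\b"),
}

def enforce_prompt_policy(prompt: str, tier: int) -> str:
    """Redact sensitive matches; block the request outright for Tier 3+ use."""
    detected = [label for label, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    if detected and tier >= 3:
        raise PermissionError(f"Blocked by AI policy: {', '.join(detected)} detected")
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt  # now safe to forward to the external model

# Example: the email address is redacted before the prompt leaves the network.
print(enforce_prompt_policy("Summarise this thread from jane@example.com", tier=2))
```

The same idea extends to the agent permission model: an allow-list of models, data sources, and actions that the agent runtime checks before each call, enforced in code rather than described in a document.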
Common pitfalls
Writing the policy before mapping actual use. Most organisations don’t know what AI their employees are actually using before they write a policy. The result is a policy that addresses hypothetical risk while missing actual exposure. Spend two weeks on discovery before you start drafting.
Treating the policy as a one-time project. AI governance is an ongoing programme. Assign an owner. Set a review cadence. Track exceptions and use them to update the framework.
Leaving agents out. Many policies still focus exclusively on interactive AI (chatbots, copilots) and say nothing about AI agents. This is an increasingly significant gap. Agents that operate autonomously, access data, and take actions need explicit governance.
Making it impossible to ask for help. Employees encounter situations the policy doesn’t cover. If asking compliance or security for guidance feels punitive or slow, they’ll make a decision on their own — usually the wrong one. A clear, fast escalation path makes the policy more useful, not less.
A note on the enforcement gap
The honest reality is that most organisations’ AI policies are aspirational. They describe what should happen, not what does happen. That gap — between written policy and actual behaviour — is the source of most AI-related compliance risk.
Closing it requires technical enforcement through Shield Control, not just better communication. That is not a criticism of policy teams; it reflects the nature of AI use, which is fast, distributed, and invisible to most monitoring tools. The only durable enforcement happens at the infrastructure layer.
Qadar enforces your AI policy automatically — without slowing your team down. See how it works