How to build an approved AI tools list without a dedicated security team

Shadow AI grows when employees can't find approved alternatives. Here's how lean teams can build and maintain an AI allowlist that actually reduces risk — without a full security function.

  • approved AI tools
  • AI allowlisting
  • shadow AI
  • AI governance
  • lean teams

Shadow AI does not start with malicious intent. It starts with a gap: employees need AI tools to do their work, IT has not approved anything, and the path of least resistance is a personal ChatGPT account.

The most effective countermeasure is not a ban. It is an approved alternatives list — a short, maintained catalogue of AI tools your team can use, with clear guidance on what each tool is approved for and what data it can process.

This sounds simple. In practice, most organisations either skip it entirely or produce a list that is so restrictive or outdated that employees ignore it within weeks.

Here is how to build one that works, even without a dedicated AI security function.

Why a tools list matters more than a policy document

An AI usage policy tells people what they should not do. An approved tools list tells them what they can do. The second is more useful for three reasons:

1. It reduces decision fatigue. An employee who wants to use AI to summarise meeting notes should not have to interpret a 20-page policy to decide whether it is permitted. A tools list that says “use [approved tool] for internal content drafting — no client data” gives them a clear, immediate answer.

2. It channels behaviour toward governed paths. Every tool on your approved list should be one where you have data processing agreements, access controls, and logging in place. When employees use those tools instead of personal alternatives, your risk posture improves automatically.

3. It gives you a basis for enforcement. If you can say “these tools are approved, and everything else requires approval,” you have a clear line. Without that line, enforcement becomes arbitrary — and arbitrary enforcement destroys trust.

What belongs on the list

A practical approved AI tools list has four elements for each entry:

The tool and approved tier. Name the tool and, if applicable, the plan or tier your organisation has contracted. “ChatGPT Enterprise” is a different approval from “ChatGPT Free” — the data handling terms are different.

Approved use cases. Be specific. “Internal content drafting, code review, data analysis on non-confidential datasets” is useful. “General productivity” is not — it leaves too much room for interpretation.

Prohibited use cases. Equally specific. “Do not use for client-facing deliverables without review. Do not upload files containing personal data. Do not paste source code from [specific repositories].”

Data classification. What categories of data can this tool process? Internal-only data? Anonymised data? No personal data? No client data? This is the single most important field on the list and the one most organisations leave vague.
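
Putting the four fields together as a structured record keeps the list machine-readable, so the same file can drive both the published catalogue and any automated enforcement. A minimal sketch in Python; the tool name, tiers, and field values below are illustrative assumptions, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    """One entry on the approved AI tools list."""
    name: str                   # tool and contracted tier
    approved_uses: list[str]    # specific permitted use cases
    prohibited_uses: list[str]  # equally specific forbidden uses
    data_classes: list[str]     # data categories the tool may process

# Illustrative entry; the values are examples, not an endorsement.
entry = ApprovedTool(
    name="ChatGPT Enterprise",
    approved_uses=[
        "internal content drafting",
        "code review",
        "data analysis on non-confidential datasets",
    ],
    prohibited_uses=[
        "client-facing deliverables without review",
        "uploading files containing personal data",
    ],
    data_classes=["internal-only", "anonymised"],
)
```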

How to build the initial list

Step 1: Discover what is already in use

Before approving tools, find out what your team is actually using. There are two ways to do this:

Survey-based discovery. Ask team leads what AI tools their teams use and what they use them for. This is fast but incomplete — people underreport usage, especially when they suspect the answer might lead to a ban.

Technical discovery. Use browser-layer or network-layer monitoring to identify AI tool traffic. This gives you an accurate picture of which tools are accessed, how frequently, and by which teams. It also reveals tools you did not know existed in your environment.

The combination of both approaches gives you a realistic starting inventory.
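
As a rough illustration of the technical side, the sketch below counts requests to known AI tool domains in a CSV proxy log. The domain map, the log format, and the 'host' column are all assumptions; in practice you would query whatever your proxy, DNS, or browser-layer monitoring actually records:

```python
import csv
from collections import Counter

# Illustrative domain map; extend with the tools seen in your environment.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def discover_ai_usage(log_path: str) -> Counter:
    """Count requests per AI tool in a CSV proxy log with a 'host' column."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row.get("host", ""))
            if tool:
                usage[tool] += 1
    return usage
```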

Step 2: Evaluate each tool against three criteria

For each discovered tool, answer:

Does this tool have acceptable data handling terms? Review the provider’s terms of service, privacy policy, and — if available — data processing agreement. The key questions: does the provider train on your data? Where is data processed and stored? What retention policies apply? Can you get a DPA that satisfies your regulatory obligations?

Can we enforce appropriate access controls? Can you control who in your organisation uses this tool? Can you distinguish between corporate and personal accounts? Can you apply different policies to different user groups?

Can we log usage? For compliance and incident response, you need a record of AI interactions. Does the tool provide audit logs? If not, can your security infrastructure log interactions at the browser or network layer?

Tools that pass all three go on the approved list. Tools that fail one or more go on a “not yet approved” list with a clear explanation of what would need to change.
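
To make those verdicts repeatable, it helps to record the three answers per tool and derive the placement mechanically. A minimal sketch, assuming yes/no answers captured during review (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class ToolEvaluation:
    tool: str
    acceptable_data_terms: bool  # DPA available, no training on your data
    enforceable_access: bool     # corporate accounts, per-group policies
    loggable: bool               # audit logs, or browser/network logging

    def verdict(self) -> str:
        failures = [label for ok, label in [
            (self.acceptable_data_terms, "data handling terms"),
            (self.enforceable_access, "access controls"),
            (self.loggable, "usage logging"),
        ] if not ok]
        if not failures:
            return "approved"
        return "not yet approved: " + ", ".join(failures)

# Example: one failing criterion puts the tool on the second list.
print(ToolEvaluation("SomeTool", True, True, False).verdict())
# -> not yet approved: usage logging
```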

Step 3: Set a review cadence

AI tools change rapidly. New capabilities, new data handling terms, new providers. Set a quarterly review cycle for the approved list. Between reviews, maintain a request process: when an employee or team wants to use a tool that is not on the list, they can submit it for evaluation.

The review process should be fast. Two weeks from request to decision is a reasonable target. If your vetting process takes months, employees will not wait — they will use the tool anyway, and your list becomes irrelevant.
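
Even a tiny script over your request tracker can keep that target honest. A sketch, assuming each pending request records its submission date:

```python
from datetime import date, timedelta

SLA = timedelta(days=14)  # two weeks from request to decision

def overdue_requests(requests: list[dict], today: date) -> list[str]:
    """Return tools whose evaluation requests have exceeded the SLA.

    Each request is assumed to look like {'tool': str, 'submitted': date}.
    """
    return [r["tool"] for r in requests if today - r["submitted"] > SLA]
```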

Enforcement without a security team

The approved list is only useful if it shapes behaviour. For lean teams without a dedicated security function, enforcement needs to be largely automated.

Browser-layer controls can enforce the approved list at the point of use. When an employee accesses an unapproved AI tool, the control layer can warn them, suggest an approved alternative, or block the interaction depending on your policy configuration.
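
The decision logic behind that control point is simple enough to sketch. This is a conceptual illustration, not any particular product's implementation; the host lists and the suggested alternative are assumptions:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"
    BLOCK = "block"

APPROVED_HOSTS = {"chat.openai.com"}             # illustrative
ALTERNATIVES = {"claude.ai": "chat.openai.com"}  # illustrative

def check_navigation(host: str, mode: Action) -> tuple[Action, str]:
    """Decide what happens when a user opens an AI tool in the browser.

    'mode' is the configured policy for unapproved tools: WARN or BLOCK.
    """
    if host in APPROVED_HOSTS:
        return Action.ALLOW, ""
    alt = ALTERNATIVES.get(host, "a tool from the approved list")
    verb = "blocked" if mode is Action.BLOCK else "not approved"
    return mode, f"{host} is {verb}; try {alt}"
```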

Prompt inspection ensures that even approved tools are used within data handling boundaries. An employee using an approved tool to draft internal content is fine. The same employee pasting client financial data into the same tool should trigger a policy intervention.
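
In its simplest form, prompt inspection is pattern matching on the text before it leaves the browser. A sketch with two illustrative patterns; real DLP rules would be tuned and tested against your own data classifications:

```python
import re

# Illustrative patterns only; production rules need tuning and testing.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

# A non-empty result should trigger a policy intervention.
assert inspect_prompt("card 4111 1111 1111 1111") == ["payment card number"]
```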

Usage reporting gives operations leaders visibility into whether the approved list is working. If 60% of AI usage is still happening through unapproved tools, the list needs to be updated — either adding tools that employees actually need or improving the governed alternatives.
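
That share is a one-line computation once discovery logs and the approved list live in the same place. A sketch, assuming each logged interaction carries a 'tool' field:

```python
def approved_share(events: list[dict], approved: set[str]) -> float:
    """Fraction of AI interactions that went through approved tools."""
    if not events:
        return 0.0
    return sum(e["tool"] in approved for e in events) / len(events)

events = [{"tool": "ChatGPT Enterprise"}, {"tool": "UnknownBot"}]
print(approved_share(events, {"ChatGPT Enterprise"}))  # 0.5
```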

Common mistakes

Making the list too short. If your approved list has one tool and your team needs five different capabilities, four gaps remain for shadow AI to fill. Cover the use cases your team actually has.

Not explaining the “why.” Employees who understand why a tool is not approved (e.g., “their terms allow training on your data” or “no DPA available”) are more likely to comply than those who see an unexplained prohibition.

Letting the list go stale. A list from six months ago that does not include the tools teams adopted last quarter is worse than no list at all — it signals that the governance programme is not keeping up.

Treating all unapproved use as a violation. Sometimes employees discover tools that should be on the approved list. A request process that treats “I found something useful” as a positive signal, not a policy breach, keeps the channel open.

Getting started

If you do not have an approved AI tools list today, start with three steps:

  1. Run a discovery exercise — survey plus technical monitoring — to learn what is actually in use
  2. Evaluate the top five tools against data handling, access control, and logging criteria
  3. Publish the initial list with clear use case guidance and a request process for additions

You will not cover every tool or use case in the first iteration. That is fine. A maintained list that covers 80% of your team’s AI usage is dramatically better than no list at all.


Qadar discovers every AI tool your team uses through Shield Web and enforces your approved list with Shield Control — no dedicated security team required. See how it works.

Get a live walkthrough of your AI exposure.

Every request is reviewed against your AI surface, control gaps, and rollout goals before the first call.

  • Scoped to your stack, workflows, and risk posture
  • Pilot-first rollout — no platform rip-and-replace required
  • Response from the Qadar team within 48 hours
