AI security terms, defined for implementation teams.
Shared terminology for security, compliance, and AI teams building with Qadar AI Shield.
-
Agentic AI Risk
Agentic AI risk refers to the potential for autonomous AI systems to take unauthorized or harmful actions without direct human review.
Read definition -> -
AI Agent Security
AI agent security is the practice of controlling what autonomous AI systems can do, see, and act on at runtime. Learn how to govern agentic workflows.
Read definition -> -
AI Governance
AI governance defines the rules and controls for responsible AI use. Learn about AI policy, compliance frameworks, and auditable AI systems.
Read definition -> -
LLM Security
LLM security is the practice of protecting large language models and the systems that use them from manipulation, data theft, and abuse.
Read definition -> -
Prompt Injection
Prompt injection is an AI security threat where malicious instructions are embedded in content processed by an LLM. Learn how to prevent prompt injection attacks.
Read definition ->