LLM Security

LLM security is the set of technical controls and practices used to protect large language models (LLMs) and the applications that rely on them from adversarial attacks, data leakage, and unauthorized behavior. It spans model-level safety training, input/output filtering, and runtime policy enforcement.
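As a minimal illustration of the input/output filtering layer mentioned above, the following Python sketch screens prompts against simple injection heuristics and redacts sensitive spans from responses. The pattern lists and function names are hypothetical; production systems typically pair heuristics like these with trained classifiers and runtime policy engines.

```python
import re

# Hypothetical deny-list of common prompt-injection phrases (assumption:
# real deployments would use a much larger, continuously updated set).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

# Hypothetical patterns for data that should not leave the application.
LEAK_PATTERNS = [
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email address
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                              # card-like digit run
]

def filter_input(prompt: str) -> str:
    """Reject prompts that match known injection heuristics."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt rejected by input filter")
    return prompt

def filter_output(completion: str) -> str:
    """Redact sensitive spans from a model response before returning it."""
    for pattern in LEAK_PATTERNS:
        completion = pattern.sub("[REDACTED]", completion)
    return completion

if __name__ == "__main__":
    filter_input("Summarize this article for me.")                      # passes
    print(filter_output("Contact alice@example.com for details."))      # email redacted
```

Filtering of this kind sits outside the model, so it complements rather than replaces model-level safety training: the input filter blocks obvious manipulation attempts before they reach the model, and the output filter acts as a last line of defense against data leakage.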