Security
Insights on AI security, LLM safety, prompt injection prevention, and secure development practices.
The Terminology Problem Causing Security Teams Real Risks
Jailbreaks target the model's safety training; prompt injection hijacks application trust boundaries. Conflating them leads to defenses that miss your actual threat surface.
Anatomy of an Indirect Prompt Injection
A vanilla JavaScript solution for detecting when streaming responses from Large Language Models have completed.