LLM Backdoors in Plain Sight: The Threat of Poisoned Templates
OWASP Global AppSec 2025
Washington, DC · November 6, 2025
About This Talk
This presentation explores supply chain vulnerabilities in AI systems, demonstrating how malicious actors can poison open-source LLM templates to execute backdoor attacks. The research reveals critical security gaps in the AI development ecosystem and provides actionable recommendations for organizations deploying LLM-based applications.
Key Topics
- Understanding AI supply chain attack vectors
- How poisoned templates can compromise LLM applications
- Detection and mitigation strategies
- Best practices for secure AI development
Also Presented At
This research was also presented at BSidesTLV 2025 in Tel-Aviv, Israel on December 11, 2025.