Application Security in the Agentic Era – Building Safe AI Applications

As organizations rush to adopt AI-driven systems and agentic architectures, traditional application security paradigms are being challenged, and often outpaced, by new, AI-specific threats. In this talk, we’ll explore the emerging attack surface introduced by intelligent agents and generative AI applications, including prompt injection, embedding poisoning, data exfiltration through model outputs, and model manipulation through indirect prompt chaining.

You’ll learn how these attacks exploit the very mechanisms that make AI powerful, contextual reasoning, dynamic orchestration, and natural language interfaces, and why conventional security controls are no longer enough.

We’ll then shift from awareness to action, presenting proven mitigation patterns, architectural safeguards, and secure development practices for building resilient AI systems. Topics include the design of trust boundaries in multi-agent systems, prompt sanitization and isolation strategies, secure retrieval-augmented generation (RAG), and runtime threat detection for AI pipelines. By the end of this session, attendees will gain a deep understanding of how to secure the new AI application stack and the frameworks needed to confidently build safe, trustworthy agentic systems in the era of autonomous intelligence.