
ARTIFICIAL INTELLIGENCE. REAL RISK.
AI is rewriting the rules of business and security.
As adoption accelerates, so do the risks.
Join us for our webinar on October 23
at 2pm UK• 9am EST
LLM Prompt Injection Attacks in the Wild.
As large language models reshape business operations, they’re also creating fresh pathways for attackers.
This session dives into one of the most dangerous emerging threats: prompt injection. Learn how real-world attackers are exploiting AI systems, and what you can do to stop them.
What you’ll learn
- Why LLM prompt injection is one of the most urgent security risks in AI adoption.
- How attackers manipulate prompts to exfiltrate sensitive data and bypass enterprise controls.
- Breakdown of OWASP LLM01:2025: real-world risks you can’t ignore.
- Practical defences: from input validation to red team simulation.
- Live Q&A: bring your AI security concerns—we’ll tackle them live.
Speakers
James Dale, Head of Adversarial Simulation, CovertSwarm
Ibai Castells, Senior Hive Member, CovertSwarm
Pablo Sanchez, Senior Hive Member, CovertSwarm
Register now
Can’t make it live? Register and we’ll send you the full recording.