Atlas AI: Local LLM inside Burp Suite
Atlas AI adds LLM-powered analysis to Burp Suite without sending data to the cloud. Built for offensive security teams who need full local control.
Burp Suite is a core tool for web application testing, but its AI integrations introduce serious data exposure risks. When you use features like “Ask AI,” Burp sends request and response data to external providers, typically via PortSwigger’s OpenAI integration.
For security teams working under NDA or within regulated environments, that’s unacceptable. Sensitive content like tokens, credentials, and business logic should remain local. Routing this through third-party APIs creates an unnecessary and avoidable risk surface.
Atlas AI removes that risk.
It provides an AI assistant that runs locally or within your own infrastructure. You choose the model, manage the endpoint, and control the data flow at every step. No cloud lock-in, no background telemetry, and no external calls—unless you explicitly configure them.
Atlas AI was built by offensive security practitioners to support real-world engagements. It adds the intelligence of modern language models to Burp Suite without compromising the operational integrity of your tests.
Atlas AI integrates directly into Burp Suite as a manually installed extension. It supports both the Community and Professional editions and needs no official BApp Store listing, which avoids any dependency on Burp's cloud model and lets you customise your own deployment.
Once installed, Atlas AI connects to a local or private LLM endpoint. You can point it to models running on tools like Ollama, LM Studio, or vLLM. It also works with internal cloud infrastructure such as AWS-hosted models. All that’s required is an accessible API and an auth key.
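Before pointing Atlas AI at a backend, it is worth confirming the endpoint actually responds. Below is a minimal sketch, assuming an OpenAI-compatible chat completions route (which Ollama, LM Studio, and vLLM can all expose); the URL, key, and model name are placeholders for your own deployment:

```python
import requests

# Illustrative values; adjust to your deployment.
# Ollama default:    http://localhost:11434/v1
# LM Studio default: http://localhost:1234/v1
# vLLM default:      http://localhost:8000/v1
BASE_URL = "http://localhost:11434/v1"
API_KEY = "placeholder"  # Ollama ignores the key; private endpoints will need a real one

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama3",  # whichever model you serve locally
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this returns a completion, the same base URL and key can be dropped into the extension's configuration.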
Atlas AI passes HTTP request and response data to the model in JSON format. Each function is mapped to a dedicated system prompt, whether it’s for analysing a response or generating an attack vector. The LLM returns structured output that Burp can parse and display immediately.
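The exact wire format is internal to the extension, but conceptually the exchange looks like the sketch below. Every field name here is hypothetical, chosen for illustration, not Atlas AI's actual schema:

```python
import json

# Hypothetical payload: a captured HTTP exchange plus a task-specific
# system prompt, serialised as JSON for the model.
payload = {
    "system_prompt": (
        "You are analysing an HTTP response for security issues. "
        'Reply only with JSON: {"findings": [...], "severity": "..."}'
    ),
    "http_request": "GET /account?id=1337 HTTP/1.1\r\nHost: target.example\r\n\r\n",
    "http_response": "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n{...}",
}
print(json.dumps(payload, indent=2))  # what the model receives

# Because the prompt demands JSON, the reply can be parsed and rendered directly.
model_reply = '{"findings": ["possible IDOR via the id parameter"], "severity": "high"}'
print(json.loads(model_reply)["findings"])
```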
All processing happens where you define it: a local GPU-powered machine, a hardened virtual appliance, or an internal LLM cluster. Traffic never leaves the infrastructure you control.
Atlas AI was designed to operate like an extension of your team, not an extension of a vendor’s cloud.
Atlas AI embeds LLM analysis directly into your HTTP workflow. It adds right-click analysis tools and passive support for Burp Scanner findings without changing how you already work.
Core Functions:
- Right-click analysis of HTTP requests and responses
- Attack vector generation for captured requests
- Passive support for Burp Scanner findings
Prompt-Based Intelligence
Each feature is backed by a system prompt tailored to its task: request parsing, response analysis, or attack vector generation. These prompts can be modified to fit your methodology or adapted for specific client contexts.
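As an illustration of that customisation, a replacement response-analysis prompt might pin the model to your reporting format. The wording below is an example, not the shipped prompt:

```python
# Example override: a response-analysis prompt tuned to one engagement.
# Illustrative wording; substitute your own methodology and scope.
RESPONSE_ANALYSIS_PROMPT = """\
You are assisting a penetration test under an agreed scope.
Analyse only the supplied HTTP response; do not speculate beyond it.
Report in JSON with keys: "observation", "owasp_category", "confidence".
Flag exposed secrets, verbose error messages, and missing security headers.
"""
```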
Atlas AI is model-agnostic: it works with any language model that exposes an API endpoint and handles structured JSON input and output, whether that model is locally hosted or isolated in your private cloud.
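In practice, model-agnosticism means swapping backends is configuration, not code. A sketch of what that separation might look like; every URL and model name below is illustrative:

```python
from dataclasses import dataclass

@dataclass
class LLMBackend:
    """Anything with an OpenAI-compatible chat endpoint will do."""
    base_url: str
    model: str
    api_key: str = ""

# Swap targets without touching the analysis logic. Values are examples only.
OLLAMA = LLMBackend("http://localhost:11434/v1", "llama3")
VLLM = LLMBackend("http://llm.internal:8000/v1", "mistral-7b-instruct")
AWS = LLMBackend("https://llm-gateway.corp.internal/v1", "internal-model",
                 api_key="REDACTED")
```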
Tested With:
- Ollama
- LM Studio
- vLLM
- AWS-hosted internal models
Performance Notes:
Atlas AI is still evolving. While the current feature set covers core analysis and exploitation workflows, there are planned enhancements to increase flexibility, precision, and integration.
Upcoming Features:
Long-Term Vision:
The goal is to enable semi-autonomous analysis where Atlas AI can independently identify, triage, and escalate vulnerabilities based on live traffic or scanner output. This requires continued advances in context management and reduced model latency, but the foundation is in place.
Atlas AI is open source and manually installed. There’s no dependency on Burp’s plugin store, and no external setup required beyond your chosen LLM backend.
Installation Steps:
1. Download or build the Atlas AI extension from the repository
2. Load it in Burp via the Extensions tab (Add)
3. Configure your LLM endpoint and API key in the extension settings
4. Start testing
Actionable intelligence delivered entirely within your environment.