
Atlas AI: Local LLM inside Burp Suite

Atlas AI adds LLM-powered analysis to Burp Suite without sending data to the cloud. Built for offensive security teams who need full local control.

[Screenshot: Atlas AI running in Burp Suite, performing local LLM-based request analysis with no cloud data transfer]

Pentest data should never leave your machine. 

Burp Suite is a core tool for web application testing, but its AI integrations introduce serious data exposure risks. When you use features like “Ask AI,” Burp sends request and response data to external providers, typically via PortSwigger’s OpenAI integration.

For security teams working under NDA or within regulated environments, that’s unacceptable. Sensitive content like tokens, credentials, and business logic should remain local. Routing this through third-party APIs creates an unnecessary and avoidable risk surface.

Atlas AI removes that risk.

It provides an AI assistant that runs locally or within your own infrastructure. You choose the model, manage the endpoint, and control the data flow at every step. No cloud lock-in, no background telemetry, and no external calls—unless you explicitly configure them.

Atlas AI was built by offensive security practitioners to support real-world engagements. It adds the intelligence of modern language models to Burp Suite without compromising the operational integrity of your tests.

 

How it works

A local AI plugin that adds real intelligence to your Burp workflow without exposing your data. 

Atlas AI integrates directly into Burp Suite as a manually installed plugin. It supports both the Community and Professional editions and needs no BApp Store listing, which keeps it free of any dependency on Burp's cloud model and lets you customise your own deployment.

Once installed, Atlas AI connects to a local or private LLM endpoint. You can point it to models running on tools like Ollama, LM Studio, or vLLM. It also works with internal cloud infrastructure such as AWS-hosted models. All that’s required is an accessible API and an auth key.
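Ollama, LM Studio, and vLLM all expose OpenAI-compatible HTTP APIs out of the box (by default at http://localhost:11434/v1, http://localhost:1234/v1, and http://localhost:8000/v1 respectively). As a minimal sketch, assuming one of those defaults, you can confirm the endpoint is reachable and list its models before pointing the plugin at it:

    # Minimal sketch: verify an OpenAI-compatible endpoint before configuring
    # Atlas AI. The base URL and key are placeholders for your own deployment.
    import requests

    BASE_URL = "http://localhost:11434/v1"  # Ollama default; adjust for LM Studio/vLLM
    API_KEY = "changeme"                    # many local backends accept any value here

    resp = requests.get(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    )
    resp.raise_for_status()
    for model in resp.json()["data"]:
        print(model["id"])  # use one of these IDs in the plugin config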

Atlas AI passes HTTP request and response data to the model in JSON format. Each function is mapped to a dedicated system prompt, whether it’s for analysing a response or generating an attack vector. The LLM returns structured output that Burp can parse and display immediately.
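As a rough illustration of that exchange (the field names and prompt wording below are assumptions, not Atlas AI's published schema), an Analyze Request call against an OpenAI-compatible endpoint might look like this:

    # Illustrative sketch of the JSON exchange; the payload fields and prompt
    # text are assumptions for illustration, not Atlas AI's actual schema.
    import json
    import requests

    SYSTEM_PROMPT = (
        "You are a web security analyst. Given a raw HTTP request, return "
        'JSON of the form {"attack_vectors": [{"name": "...", "rationale": "..."}]}.'
    )
    raw_request = "GET /api/user?id=42 HTTP/1.1\r\nHost: target.example\r\n\r\n"

    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",  # your configured endpoint
        json={
            "model": "llama3",  # placeholder model name
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": json.dumps({"request": raw_request})},
            ],
        },
        timeout=60,
    )
    content = resp.json()["choices"][0]["message"]["content"]
    print(json.loads(content)["attack_vectors"])  # structured output Burp can display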

All processing happens where you define it: a local GPU-powered machine, a hardened virtual appliance, or an internal LLM cluster. Traffic never leaves that infrastructure, whether it is a single workstation or a private LLM endpoint.

Atlas AI was designed to operate like an extension of your team, not an extension of a vendor’s cloud.

 

What it can do

AI-powered analysis inside your HTTP workflow. 

Atlas AI adds right-click analysis tools and passive support for Burp Scanner findings without changing how you already work.

Core Functions: 

    • Analyze Request 
      Sends the raw HTTP request to your local model. The AI returns possible attack vectors based on method, headers, parameters, and payload structure.
    • Analyze Response 
      Evaluates the HTTP response for signs of misconfigurations, excessive information disclosure, or weak server behaviours. Useful for quickly surfacing exploitation paths post-request.
    • Explain Selection 
      Select any part of a request or response, such as a token, cookie, or Base64 string, and Atlas AI will identify and explain what it is, how it works, and whether it's exploitable (see the sketch after this list).
    • Generate Attack Vectors 
      Sends both the request and response together to the model, allowing for more complex reasoning and contextual chaining. This mode costs more tokens but yields deeper output.
    • Scanner Findings AI 
      A passive module that watches Burp Scanner output and applies AI analysis to each identified issue. It suggests next steps and potential exploitation techniques as you browse.
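To make the Explain Selection flow concrete, here is a hypothetical sketch: the highlighted artifact (in this case a Base64-encoded JWT header) is sent to the local model with a dedicated explanation prompt. The endpoint, model name, and prompt wording are placeholders, not the plugin's internals.

    # Hypothetical "Explain Selection" flow: explain a highlighted artifact.
    # Endpoint, model name, and prompt wording are placeholders.
    import requests

    selected = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"  # a highlighted JWT header

    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={
            "model": "llama3",
            "messages": [
                {
                    "role": "system",
                    "content": "Identify the selected artifact (token, cookie, "
                               "encoded data), explain how it works, and state "
                               "whether it is exploitable.",
                },
                {"role": "user", "content": selected},
            ],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])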

Prompt-Based Intelligence
Each feature is backed by a system prompt tailored to its task: request parsing, response analysis, or vector generation. These prompts can be modified to fit your methodology or adapted for specific client contexts.
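One way to picture that mapping, purely as an illustration rather than the plugin's actual configuration format, is a table of function names to editable prompt text:

    # Illustrative prompt map; the keys and wording are examples, not the
    # plugin's real configuration format. Edit per engagement as needed.
    PROMPTS = {
        "analyze_request":   "Given a raw HTTP request, list likely attack vectors as JSON.",
        "analyze_response":  "Given a raw HTTP response, flag misconfigurations and data leaks.",
        "explain_selection": "Identify the selected artifact and explain its exploitability.",
        "generate_vectors":  "Given a request/response pair, chain findings into attack vectors.",
    }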

 

Models & Performance

Atlas AI is model-agnostic. It works with any language model that exposes an API endpoint and handles structured JSON input and output, whether that model is locally hosted or isolated in your own cloud.

Tested With: 

    • GPT-4 / GPT-3.5 (via OpenAI API)
      Best suited for complex analysis and production-grade testing. Used during development and debugging. Delivers high accuracy and low hallucination rates but requires sending data to a third-party provider. Not recommended for sensitive engagements unless data sharing is explicitly permitted.
    • LLaMA, Mistral, and other open models 
      Good results with recent versions. Smaller models may miss nuance or hallucinate under ambiguous inputs, but they’re viable for many testing scenarios.

Performance Notes: 

    • Larger, modern models produce more accurate and relevant output, especially for chained or subtle vulnerabilities.
    • Token cost and response latency vary by model; Atlas AI supports tuning around this (one tactic is sketched after this list).
    • If running locally, GPU memory and inference speed are key. Cloud-hosted internal deployments (e.g., on AWS or Azure) are a practical balance between performance and security.
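One simple tuning tactic, sketched below under the assumption that content beyond a few kilobytes of body rarely changes the analysis, is to truncate oversized message bodies before they reach the model:

    # Sketch: cap token cost by truncating oversized HTTP bodies before they
    # are sent to the model. The 4 KB budget is an arbitrary example value.
    MAX_BODY_BYTES = 4096

    def trim_for_model(raw_message: bytes) -> bytes:
        # Keep headers intact; truncate only the body past the budget.
        head, sep, body = raw_message.partition(b"\r\n\r\n")
        if len(body) > MAX_BODY_BYTES:
            body = body[:MAX_BODY_BYTES] + b"\r\n[...body truncated...]"
        return head + sep + body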

 

What’s next

Focused improvements driven by real-world testing and feedback. 

Atlas AI is still evolving. While the current feature set covers core analysis and exploitation workflows, there are planned enhancements to increase flexibility, precision, and integration.

Upcoming Features: 

    • Manual Prompting Mode
      Enable direct interaction with the model. Users will be able to send custom queries alongside specific request or response data, allowing for more tailored analysis mid-engagement.
    • Expanded LLM Compatibility 
      Support for additional enterprise AI providers, including Microsoft- and Google-hosted models, and for non-standard authentication and response formats.
    • Deeper Context Handling 
      Future updates will focus on enabling longer context windows. This will allow the model to retain session state across multiple requests and build more complete exploitation paths.

Long-Term Vision: 

The goal is to enable semi-autonomous analysis where Atlas AI can independently identify, triage, and escalate vulnerabilities based on live traffic or scanner output. This requires continued advances in context management and reduced model latency, but the foundation is in place.

 

Getting started

Install in minutes. Keep control from the start. 

Atlas AI is open source and manually installed. There’s no dependency on Burp’s plugin store, and no external setup required beyond your chosen LLM backend.

Installation Steps: 

  1. Download the plugin
    Clone or download the latest release from GitHub: github.com/DIABL0-SEC/Atlas-AI
  2. Load into Burp Suite
    Go to Extender > Extensions, click Add, and select the plugin file. Works with both Community and Professional editions.
  3. Configure your LLM endpoint
    In the plugin config screen, enter:
      • The base URL of your LLM (Ollama, LM Studio, vLLM, etc.)
      • Your API key or auth token
    Then choose a model suited to your environment. A quick smoke test for the configured backend is sketched after these steps.
  4. Start testing
    • Right-click any request or response in Burp
    • Select from the AI-powered options
    • Review suggestions, explanations, and attack vectors, all processed within your infrastructure
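Before your first engagement, a one-off smoke test like the sketch below (endpoint, key, and model name are placeholders) confirms the configured backend actually answers chat completions:

    # One-off smoke test for the configured backend; all values are placeholders.
    import requests

    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_KEY"},
        json={
            "model": "llama3",
            "messages": [{"role": "user", "content": "Reply with the word OK."}],
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])  # expect "OK"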

Actionable intelligence delivered entirely within your environment.