This guide walks you through cloning the repository, configuring your LLM provider, initializing the knowledge base, and running your first agent task against a target.

## Documentation Index
Fetch the complete documentation index at: https://mintlify.com/SanMuzZzZz/LuaN1aoAgent/llms.txt
Use this file to discover all available pages before exploring further.
## Clone and install
Clone the repository and install Python dependencies into a virtual environment.
The steps below apply to both Linux / macOS and Windows (WSL2).
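A minimal install sketch, assuming the repository lives under the GitHub path implied by the documentation index URL above and ships a `requirements.txt` (adjust both if your checkout differs):

```shell
# Clone the repository — the URL is an assumption based on the docs index path.
git clone https://github.com/SanMuzZzZz/LuaN1aoAgent.git
cd LuaN1aoAgent

# Create and activate a virtual environment, then install dependencies
# (requirements.txt is assumed; use the dependency file the repo provides).
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```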
## Configure your environment
Copy the example configuration file and fill in your LLM credentials. Open `.env` in your editor and set at minimum the required values for your provider: OpenAI, Anthropic (Claude), or DeepSeek / another OpenAI-compatible API. You can also control console verbosity with `OUTPUT_MODE`. See Environment variables for all available settings.
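A sketch of a minimal `.env` for an OpenAI-compatible provider. The variable names `LLM_API_KEY`, `LLM_API_BASE_URL`, and `OUTPUT_MODE` are the ones this guide documents; the values shown are placeholders only:

```shell
# Minimal .env — placeholder values, replace with your real credentials.
LLM_API_KEY=sk-your-key-here
LLM_API_BASE_URL=https://api.openai.com/v1

# Optional: console verbosity (simple, default, or debug)
OUTPUT_MODE=default
```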
## Initialize the knowledge base
LuaN1ao uses a RAG (Retrieval-Augmented Generation) system backed by FAISS to retrieve relevant attack payloads and techniques during testing. You must build the vector index before running the agent for the first time.
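The index build might be kicked off like this; the exact entry point is an assumption, so check the repository for the real script or module name:

```shell
# Hypothetical invocation of the rag_kd prepare step — the actual
# module or script name may differ; consult the repository.
python -m rag_kd.prepare
```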
The `rag_kd` prepare step downloads embedding models and chunks all markdown files in `knowledge_base/`. It only needs to run once, or again when you add new knowledge documents. The RAG service starts automatically when you run the agent.

## Start the web server
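Launching the dashboard could look like the following; the script name `web_server.py` is an assumption not confirmed by this guide, so check the repository for the actual entry point:

```shell
# Hypothetical script name — the dashboard listens on port 8088 by default.
python web_server.py
```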
The web dashboard is a standalone process that must be running before or alongside the agent. It persists all task data in `luan1ao.db` and streams live updates via SSE. Open your browser and navigate to http://localhost:8088 (default port). You should see the LuaN1aoAgent dashboard.

## Run your first agent task
Open a new terminal window (keep the web server running in the first), activate your virtual environment, and launch the agent with a goal and task name.
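A first run could look like this; the goal text and task name are illustrative placeholders, and the flags are those documented in the CLI reference:

```shell
# Launch the agent — goal text and task name are placeholders for your target.
python agent.py \
  --goal "Enumerate the login form at http://target.example.com and test for SQL injection" \
  --task-name first_run \
  --web
```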
The `--goal` flag describes the penetration testing objective in natural language. The `--task-name` flag sets the identifier used for logging and the Web UI display. To print the Web UI task URL after launch, add `--web`. You can also override the LLM model configuration per-run without editing `.env`.

## View results
While the agent runs, the Web UI at http://localhost:8088 shows:
- The live task graph evolving in real-time
- Node-by-node execution logs with state transitions
- Confirmed vulnerabilities and key findings as they are discovered
Logs are written to `logs/`. All task history is persisted in `luan1ao.db`, so you can review past runs from the Web UI even after restarting the server.

## CLI reference
The most commonly used `agent.py` arguments:
| Flag | Required | Description |
|---|---|---|
| `--goal` | Yes | The penetration testing objective in natural language |
| `--task-name` | No | Task identifier for logging and Web UI (default: `default_task`) |
| `--output-mode` | No | Console verbosity: `simple`, `default`, or `debug` |
| `--web` | No | Print the Web UI task URL after launch |
| `--web-port` | No | Web service port for display purposes (default: 8088) |
| `--llm-api-key` | No | Override `LLM_API_KEY` from `.env` |
| `--llm-api-base-url` | No | Override `LLM_API_BASE_URL` from `.env` |
| `--llm-planner-model` | No | Override the model used by the Planner |
| `--llm-executor-model` | No | Override the model used by the Executor |
| `--llm-reflector-model` | No | Override the model used by the Reflector |
| `--mode` | No | Execution mode: `default` (P-E-R), `linear`, or `react` |
| `--log-dir` | No | Custom log directory path |
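For example, a per-run model override can be passed without editing `.env`; the model names below are placeholders for whatever your provider offers:

```shell
# Override the Planner and Executor models for this run only — names are placeholders.
python agent.py \
  --goal "Re-check the previous findings on http://target.example.com" \
  --task-name override_demo \
  --llm-planner-model your-planner-model \
  --llm-executor-model your-executor-model
```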
## Next steps
- **Installation**: Virtual environments, Docker setup, and troubleshooting common issues.
- **Environment variables**: Full reference for all `.env` configuration options.
- **Web UI**: Learn how to use the dashboard for task monitoring and human-in-the-loop control.
- **P-E-R architecture**: Understand how the three agents collaborate to reason about your target.