LuaN1aoAgent runs as two separate processes: a persistent web server (the dashboard) and a short-lived agent worker (the task runner). They communicate through a shared SQLite database (luan1ao.db), which means the web UI stays up across task restarts and retains full history.
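The database handoff can be sketched with plain sqlite3. The tasks table and its columns below are hypothetical stand-ins; the real schema inside luan1ao.db may differ.

```python
import sqlite3

# Hypothetical "tasks" table standing in for the real luan1ao.db schema.
def init_db(path="luan1ao.db"):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, status TEXT)"
    )
    conn.commit()
    return conn

# Agent worker: persist progress so state survives worker restarts.
def worker_update(conn, task_id, status):
    conn.execute(
        "INSERT OR REPLACE INTO tasks (id, status) VALUES (?, ?)",
        (task_id, status),
    )
    conn.commit()

# Web server: read state from the database, not from the worker process.
def dashboard_read(conn, task_id):
    row = conn.execute(
        "SELECT status FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()
    return row[0] if row else None

conn = init_db(":memory:")  # in-memory DB for the demo
worker_update(conn, 1, "running")
print(dashboard_read(conn, 1))  # → running
```

Because the dashboard reads only from the database, a crashed or restarted worker never takes the UI or its history down with it.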
Two-process architecture
Starting the web server
Start the dashboard first, before running any agent tasks. Keep this process running in its own terminal.
Running agent tasks
Basic usage
Open a new terminal window (keep the web server running), then launch an agent task.
Printing the task URL
Pass --web to have the agent print the direct URL to the task in the Web UI.
CLI reference
All arguments from agent.py's argparse configuration:
Core arguments
| Argument | Required | Default | Description |
|---|---|---|---|
| --goal | Yes | — | The penetration testing objective for the agent |
| --task-name | No | default_task | Task name used for logging and directory naming |
| --log-dir | No | logs/&lt;task-name&gt;/&lt;timestamp&gt;/ | Override the default log output directory |
| --op-id | No | auto-generated | Operation ID passed by the Web UI when creating tasks via the dashboard |
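The core arguments above can be mirrored with a minimal argparse sketch. The real agent.py parser likely defines more options and different help strings, so treat this as an approximation:

```python
import argparse

# Approximation of agent.py's core arguments as documented above.
parser = argparse.ArgumentParser(prog="agent.py")
parser.add_argument("--goal", required=True,
                    help="The penetration testing objective for the agent")
parser.add_argument("--task-name", default="default_task",
                    help="Task name used for logging and directory naming")
parser.add_argument("--log-dir", default=None,
                    help="Override the default log output directory")
parser.add_argument("--op-id", default=None,
                    help="Operation ID passed by the Web UI")

args = parser.parse_args(["--goal", "Scan example.com for open ports"])
print(args.task_name)  # → default_task (the documented fallback)
```

Omitting --goal raises a "required argument" error, matching the table's Required column.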
LLM configuration
These flags override the corresponding .env values for a single run.

| Argument | Env equivalent | Description |
|---|---|---|
| --llm-api-base-url | LLM_API_BASE_URL | Base URL for the LLM API |
| --llm-api-key | LLM_API_KEY | API key for the LLM service |
| --llm-planner-model | LLM_PLANNER_MODEL | Model for the Planner role |
| --llm-executor-model | LLM_EXECUTOR_MODEL | Model for the Executor role |
| --llm-reflector-model | LLM_REFLECTOR_MODEL | Model for the Reflector role |
| --llm-default-model | LLM_DEFAULT_MODEL | Fallback model for other roles |
| --llm-expert-model | LLM_EXPERT_MODEL | Model for the Expert Analysis role |
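The override behavior amounts to a simple precedence rule: a CLI flag wins for the current run; otherwise the environment (.env) value applies. A sketch of that rule, with resolve as an illustrative helper rather than LuaN1aoAgent's actual config code:

```python
import os

# Illustrative precedence helper: CLI flag > environment (.env) > default.
def resolve(cli_value, env_name, default=None):
    if cli_value is not None:
        return cli_value          # flag overrides for this run only
    return os.environ.get(env_name, default)

os.environ["LLM_PLANNER_MODEL"] = "model-from-dotenv"
print(resolve(None, "LLM_PLANNER_MODEL"))            # env value applies
print(resolve("model-from-flag", "LLM_PLANNER_MODEL"))  # flag wins
```

Because the override lives only in the process arguments, the .env file is untouched and the next run falls back to it.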
Output and display
| Argument | Default | Description |
|---|---|---|
| --output-mode | default | Console verbosity: simple, default, or debug |
| --web | false | Print the Web UI task URL after the agent starts |
| --web-port | 8088 | Web service port (display purposes only; does not start a server) |
Execution modes
| Argument | Value | Description |
|---|---|---|
| --mode | default | Standard P-E-R architecture (Planner + Executor + Reflector) |
| --mode | linear | Linear task chain without dynamic graph branching |
| --mode | react | Pure ReAct mode: bypasses the P-E-R architecture and runs a single Executor loop with up to 50 steps |
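The react mode's 50-step cap can be illustrated with a stub loop. The executor callable below stands in for the real LLM-driven step and is not the project's actual code:

```python
# Sketch of react mode's single Executor loop with a hard 50-step cap.
MAX_STEPS = 50

def react_loop(executor, goal):
    history = []
    for _ in range(MAX_STEPS):
        action, done = executor(goal, history)  # one think/act step
        history.append(action)
        if done:
            break  # goal reached before hitting the cap
    return history

# Stub executor that declares the goal reached on its third step.
steps = react_loop(lambda g, h: (f"step-{len(h)}", len(h) >= 2), "demo goal")
print(len(steps))  # → 3
```

An executor that never signals completion is cut off at exactly MAX_STEPS iterations, which is what bounds a runaway ReAct loop.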
Example commands
- Web security testing
- CTF challenge solving
- Network penetration testing
- Override models per-run
Understanding log output
During execution, the agent prints structured, Rich-formatted console output. Verbosity depends on --output-mode:
| Mode | What you see |
|---|---|
| simple | Core task progress only |
| default | Standard P-E-R cycle information and tool calls |
| debug | Full LLM prompt/response details and all internal state |
Log file structure
Every run saves its logs to logs/&lt;task-name&gt;/&lt;timestamp&gt;/.
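A sketch of how such a path might be constructed; the exact timestamp format agent.py uses is an assumption here:

```python
from datetime import datetime
from pathlib import Path

# Builds logs/<task-name>/<timestamp>/; the timestamp pattern is assumed.
def default_log_dir(task_name="default_task"):
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path("logs") / task_name / stamp

print(default_log_dir("web_scan"))  # e.g. logs/web_scan/20250101_120000
```

Because the timestamp component is unique per run, repeated runs of the same task name never overwrite each other's logs.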
Reading metrics.json
metrics.json contains aggregated statistics for the entire run:
- cost_cny: total LLM API cost in CNY
- tool_calls: per-tool invocation counts across the entire run
- total_tokens: sum of prompt and completion tokens consumed
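A short sketch of reading those fields; the sample values below are invented for illustration:

```python
import json

# Sample metrics.json payload; field names follow the description above,
# the numbers are made up for the demo.
sample = json.dumps({
    "cost_cny": 1.42,
    "tool_calls": {"nmap": 3, "curl": 7},
    "total_tokens": 85210,
})

metrics = json.loads(sample)
total_calls = sum(metrics["tool_calls"].values())  # aggregate across tools
print(f"{total_calls} tool calls, {metrics['total_tokens']} tokens, "
      f"cost {metrics['cost_cny']:.2f} CNY")
# → 10 tool calls, 85210 tokens, cost 1.42 CNY
```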
Reading run_log.json
run_log.json is an ordered list of all P-E-R events. Each entry records the role (planner, executor, reflector), the subtask ID, and the full input/output for that step. This is the primary file for post-hoc analysis and debugging.
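A minimal post-hoc analysis sketch; the role field and subtask ID come from the description above, while the remaining keys and sample values are assumed:

```python
import json

# Sample run_log.json content: an ordered list of P-E-R events.
sample_log = json.dumps([
    {"role": "planner",   "subtask_id": "t1", "input": "...",    "output": "plan"},
    {"role": "executor",  "subtask_id": "t1", "input": "plan",   "output": "result"},
    {"role": "reflector", "subtask_id": "t1", "input": "result", "output": "ok"},
])

events = json.loads(sample_log)
# Pull out just the Executor steps for a given subtask.
executor_events = [e for e in events if e["role"] == "executor"]
print(len(executor_events))  # → 1
```

Filtering by role or subtask ID like this is usually enough to reconstruct what each P-E-R cycle did without replaying the run.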
Stopping a running task
Via the Web UI: Click the task in the sidebar, then click the Abort button. This sends SIGKILL to the agent process group, including all MCP tool subprocesses.
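The process-group kill can be sketched with POSIX primitives (Linux/macOS only); this illustrates the mechanism, not the dashboard's actual code:

```python
import os
import signal
import subprocess
import sys

# Stand-in worker: starts in its own session, so it leads a fresh
# process group, just as the agent process would.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    start_new_session=True,
)

# Killing the group takes down the worker and any tool subprocesses
# it spawned into the same group, with no chance to handle the signal.
os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
proc.wait()
print(proc.returncode)  # negative return code: killed by a signal
```

SIGKILL cannot be caught, so nothing in the worker's shutdown path runs; that is why the Web UI abort is immediate but the terminal path below is the graceful one.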
Via the terminal: Press Ctrl+C in the terminal where the agent is running. The agent registers a SIGTERM handler that triggers a graceful shutdown and ensures logs are saved via the finally block.
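The graceful-shutdown pattern described here (a signal handler plus a finally block) can be sketched as follows; the names are illustrative, not agent.py's actual implementation, and the sketch is POSIX-style:

```python
import signal

shutting_down = False
saved_logs = []

def handle_term(signum, frame):
    # Flip a flag; the main loop notices it and exits cleanly.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_term)

def run():
    try:
        for _ in range(1_000_000):
            if shutting_down:
                return "aborted"
        return "finished"
    finally:
        # Runs on normal exit, abort, or exception: logs always get saved.
        saved_logs.append("run_log.json written")

signal.raise_signal(signal.SIGTERM)  # simulate an external SIGTERM
result = run()
print(result, saved_logs)  # → aborted ['run_log.json written']
```

The finally block is what guarantees metrics.json and run_log.json survive an interrupted run, whichever way the loop exits.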