LuaN1aoAgent supports two LLM backends: any OpenAI-compatible API (including DeepSeek, local proxies, etc.) and the Anthropic Claude native API. The backend is selected with LLM_PROVIDER.
## Provider selection
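A minimal .env fragment for backend selection might look like this (openai is the documented default when the variable is unset):

```env
# Select the LLM backend; openai is the default when omitted
LLM_PROVIDER=openai
```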
## OpenAI and compatible APIs
Set LLM_PROVIDER=openai (or omit it; this is the default). The client sends requests to LLM_API_BASE_URL using the OpenAI /chat/completions format.
- OpenAI
- DeepSeek
- Local proxy / vLLM
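An illustrative .env for an OpenAI-compatible endpoint (the base URL and key shown are placeholders, not values from this project):

```env
LLM_PROVIDER=openai
LLM_API_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-key-here
```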
## Anthropic Claude
Set LLM_PROVIDER=anthropic. The client switches to the Anthropic Messages API and uses the ANTHROPIC_* family of variables.
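An illustrative .env for the Anthropic backend (the key value is a placeholder):

```env
LLM_PROVIDER=anthropic
# Only needed if it differs from LLM_API_KEY
ANTHROPIC_API_KEY=sk-ant-your-key-here
```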
When LLM_PROVIDER=anthropic, the LLM_* model variables are ignored. You must configure models via the ANTHROPIC_* model variables.

ANTHROPIC_API_KEY defaults to the value of LLM_API_KEY when not set explicitly. You only need to add it if the two keys differ.

## Per-role model configuration
The agent runs four distinct LLM roles. Assigning separate models per role lets you balance capability, speed, and cost.

| Role | Variable (OpenAI) | Variable (Anthropic) | Recommended model |
|---|---|---|---|
| Planner | LLM_PLANNER_MODEL | ANTHROPIC_PLANNER_MODEL | Strongest available — builds the full attack task graph |
| Executor | LLM_EXECUTOR_MODEL | ANTHROPIC_EXECUTOR_MODEL | Fast, reliable — executes tools step-by-step |
| Reflector | LLM_REFLECTOR_MODEL | ANTHROPIC_REFLECTOR_MODEL | Deterministic — performs causal graph analysis |
| Expert Analysis | LLM_EXPERT_MODEL | ANTHROPIC_EXPERT_MODEL | Strong reasoning — handles escalated hard problems |
### Example: split model configuration
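A sketch of a split configuration using the role variables from the table above (the model names are illustrative, not project recommendations):

```env
# Strongest model for planning and expert escalation,
# a faster model for step-by-step execution and reflection
LLM_PLANNER_MODEL=gpt-4o
LLM_EXECUTOR_MODEL=gpt-4o-mini
LLM_REFLECTOR_MODEL=gpt-4o-mini
LLM_EXPERT_MODEL=gpt-4o
```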
## Temperature settings
Temperature values are hard-coded per role in conf/config.py and represent the recommended defaults. They can be tuned directly in the source if your use case requires it.
| Role | Default temperature | Rationale |
|---|---|---|
| default | 0.3 | Balanced fallback |
| planner | 0.5 | Some creativity needed for diverse attack strategies |
| executor | 0.3 | Stable, reliable tool calls |
| reflector | 0.2 | Precise causal analysis |
| expert_analysis | 0.7 | Creative problem-solving for hard escalations |
| summarizer | 0.2 | Stable, concise output |
| reflector_validator | 0.1 | High determinism for binary yes/no judgements |
| planner_crisis_expert | 0.4 | Balance between stability and exploration during crisis re-planning |
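The table above could correspond to a mapping in conf/config.py along these lines; the variable and helper names here are hypothetical sketches, not the project's actual identifiers:

```python
# Hypothetical sketch of a per-role temperature map; the real
# structure in conf/config.py may differ.
ROLE_TEMPERATURES = {
    "default": 0.3,
    "planner": 0.5,
    "executor": 0.3,
    "reflector": 0.2,
    "expert_analysis": 0.7,
    "summarizer": 0.2,
    "reflector_validator": 0.1,
    "planner_crisis_expert": 0.4,
}

def temperature_for(role: str) -> float:
    """Fall back to the balanced default for unknown roles."""
    return ROLE_TEMPERATURES.get(role, ROLE_TEMPERATURES["default"])
```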
## Thinking mode
Thinking mode passes extra_body: {thinking: "hidden|visible"} in the request payload, allowing providers that support extended reasoning (e.g., DeepSeek R1) to expose their chain-of-thought.
| Value | Effect |
|---|---|
| off | No extra_body injected (default) |
| hidden | Reasoning enabled; chain-of-thought not returned in the response |
| visible | Reasoning enabled; chain-of-thought returned (e.g., in reasoning_content) |
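A minimal sketch of the injection logic described above, assuming an OpenAI-style client that accepts an extra_body keyword; the helper name is hypothetical:

```python
# Build request kwargs, optionally injecting the thinking-mode payload.
def build_request_kwargs(thinking_mode: str = "off") -> dict:
    kwargs: dict = {}
    if thinking_mode in ("hidden", "visible"):
        # Providers with extended reasoning read this field;
        # it is simply omitted when thinking mode is off.
        kwargs["extra_body"] = {"thinking": thinking_mode}
    return kwargs
```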
Thinking mode is only supported by providers that accept extra_body. It is ignored when LLM_PROVIDER=anthropic, which uses the native Anthropic API format.

## Fallback API key
The agent automatically switches to a fallback key on 429 rate-limit errors, then reverts to the primary key for subsequent requests.
## Model recommendations
### DeepSeek V3 (benchmark model)
DeepSeek V3 (deepseek-chat) was the primary model used in the published benchmark evaluation. It offers a strong price-to-performance ratio for penetration testing workloads.
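An illustrative .env for DeepSeek (the base URL is DeepSeek's public OpenAI-compatible endpoint; confirm it against current DeepSeek documentation):

```env
LLM_PROVIDER=openai
LLM_API_BASE_URL=https://api.deepseek.com
LLM_PLANNER_MODEL=deepseek-chat
LLM_EXECUTOR_MODEL=deepseek-chat
```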
### GPT-4o
GPT-4o provides strong reasoning and reliable JSON-mode output. It is a safe default for all roles.
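An illustrative .env assigning GPT-4o to every role (model IDs follow OpenAI's public naming):

```env
LLM_PLANNER_MODEL=gpt-4o
LLM_EXECUTOR_MODEL=gpt-4o
LLM_REFLECTOR_MODEL=gpt-4o
LLM_EXPERT_MODEL=gpt-4o
```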
### Claude 3.5 Sonnet
Claude 3.5 Sonnet is a strong all-round model for both planning and execution. Use LLM_PROVIDER=anthropic with the Anthropic API, or point LLM_API_BASE_URL at an OpenAI-compatible Claude proxy.
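An illustrative .env for Claude 3.5 Sonnet over the native Anthropic API (the model alias follows Anthropic's public naming; verify the current version before use):

```env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here
ANTHROPIC_PLANNER_MODEL=claude-3-5-sonnet-latest
ANTHROPIC_EXECUTOR_MODEL=claude-3-5-sonnet-latest
```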