

LuaN1aoAgent supports two LLM backends: any OpenAI-compatible API (including DeepSeek, local proxies, etc.) and the Anthropic Claude native API. The backend is selected with LLM_PROVIDER.

Provider selection

.env
# "openai" (default) or "anthropic"
LLM_PROVIDER=openai

OpenAI and compatible APIs

Set LLM_PROVIDER=openai (or omit it — this is the default). The client sends requests to LLM_API_BASE_URL using the OpenAI /chat/completions format.
.env
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_API_BASE_URL=https://api.openai.com/v1

LLM_DEFAULT_MODEL=gpt-4o
LLM_PLANNER_MODEL=gpt-4o
LLM_EXECUTOR_MODEL=gpt-4o
LLM_REFLECTOR_MODEL=gpt-4o
LLM_EXPERT_MODEL=gpt-4o
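The request shape the client emits in this mode can be sketched as follows. This is a minimal illustration only: the environment variable names come from this page, but `build_openai_request` is a hypothetical helper, not the agent's actual code.

```python
import os

def build_openai_request(messages, model=None):
    """Assemble URL, headers, and body for an OpenAI-style /chat/completions call."""
    base_url = os.environ.get("LLM_API_BASE_URL", "https://api.openai.com/v1")
    headers = {
        "Authorization": f"Bearer {os.environ['LLM_API_KEY']}",
        "Content-Type": "application/json",
    }
    payload = {
        # Fall back to the default model when no role-specific model is passed in
        "model": model or os.environ.get("LLM_DEFAULT_MODEL", "gpt-4o"),
        "messages": messages,
    }
    return f"{base_url}/chat/completions", headers, payload

# Dummy key for illustration; in practice this comes from your .env file
os.environ.setdefault("LLM_API_KEY", "sk-dummy")
url, headers, payload = build_openai_request([{"role": "user", "content": "ping"}])
```

Any backend that accepts this request format (OpenAI, DeepSeek, a local proxy) works without code changes; only `LLM_API_BASE_URL` differs.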

Anthropic Claude

Set LLM_PROVIDER=anthropic. The client switches to the Anthropic Messages API and uses the ANTHROPIC_* family of variables.
.env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_API_BASE_URL=https://api.anthropic.com/v1/messages
ANTHROPIC_VERSION=2023-06-01

ANTHROPIC_DEFAULT_MODEL=claude-sonnet-4-5
ANTHROPIC_PLANNER_MODEL=claude-sonnet-4-5
ANTHROPIC_EXECUTOR_MODEL=claude-sonnet-4-5
ANTHROPIC_REFLECTOR_MODEL=claude-sonnet-4-5
ANTHROPIC_EXPERT_MODEL=claude-sonnet-4-5
When LLM_PROVIDER=anthropic, the LLM_* model variables are ignored. You must configure models via the ANTHROPIC_* model variables.
ANTHROPIC_API_KEY defaults to the value of LLM_API_KEY when not set explicitly. You only need to add it if the two keys differ.
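The Messages-API request and the key-fallback rule described above can be sketched like this. The variable names follow this page; `build_anthropic_request` itself is illustrative, not the agent's real implementation.

```python
import os

def build_anthropic_request(messages, model=None):
    """Assemble URL, headers, and body for an Anthropic Messages API call."""
    # ANTHROPIC_API_KEY falls back to LLM_API_KEY when not set explicitly
    api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("LLM_API_KEY")
    headers = {
        "x-api-key": api_key,
        "anthropic-version": os.environ.get("ANTHROPIC_VERSION", "2023-06-01"),
        "content-type": "application/json",
    }
    payload = {
        "model": model or os.environ.get("ANTHROPIC_DEFAULT_MODEL", "claude-sonnet-4-5"),
        "max_tokens": 1024,  # required by the Messages API
        "messages": messages,
    }
    url = os.environ.get("ANTHROPIC_API_BASE_URL",
                         "https://api.anthropic.com/v1/messages")
    return url, headers, payload

# Only the shared key is set, so the fallback applies
os.environ["LLM_API_KEY"] = "sk-shared"
os.environ.pop("ANTHROPIC_API_KEY", None)
url, headers, payload = build_anthropic_request([{"role": "user", "content": "ping"}])
```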

Per-role model configuration

The agent runs four distinct LLM roles. Assigning separate models per role lets you balance capability, speed, and cost.
| Role | Variable (OpenAI) | Variable (Anthropic) | Recommended model |
| --- | --- | --- | --- |
| Planner | LLM_PLANNER_MODEL | ANTHROPIC_PLANNER_MODEL | Strongest available: builds the full attack task graph |
| Executor | LLM_EXECUTOR_MODEL | ANTHROPIC_EXECUTOR_MODEL | Fast and reliable: executes tools step by step |
| Reflector | LLM_REFLECTOR_MODEL | ANTHROPIC_REFLECTOR_MODEL | Deterministic: performs causal graph analysis |
| Expert Analysis | LLM_EXPERT_MODEL | ANTHROPIC_EXPERT_MODEL | Strong reasoning: handles escalated hard problems |
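The resolution order implied by the table (role-specific variable first, then the provider's default model) can be sketched as follows. The env names are from this page; the helper function is an assumption about the lookup logic, not the agent's actual code.

```python
import os

def model_for_role(role: str, provider: str = "openai") -> str:
    """Resolve the model for a role: role-specific var first, then the default."""
    prefix = "ANTHROPIC" if provider == "anthropic" else "LLM"
    role_var = f"{prefix}_{role.upper()}_MODEL"   # e.g. LLM_PLANNER_MODEL
    default_var = f"{prefix}_DEFAULT_MODEL"
    return os.environ.get(role_var) or os.environ.get(default_var, "")

os.environ["LLM_DEFAULT_MODEL"] = "gpt-4o"
os.environ["LLM_EXECUTOR_MODEL"] = "gpt-4o-mini"
os.environ.pop("LLM_PLANNER_MODEL", None)  # no override for the planner

model_for_role("executor")  # role override wins
model_for_role("planner")   # falls back to LLM_DEFAULT_MODEL
```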

Example: split model configuration

.env
# Use the strongest model for planning and expert analysis
LLM_PLANNER_MODEL=gpt-4o
LLM_EXPERT_MODEL=gpt-4o

# Use a faster model for high-frequency execution steps
LLM_EXECUTOR_MODEL=gpt-4o-mini
LLM_REFLECTOR_MODEL=gpt-4o-mini

Temperature settings

Temperature values are hard-coded per role in conf/config.py and represent the recommended defaults. They can be tuned directly in the source if your use case requires it.
| Role | Default temperature | Rationale |
| --- | --- | --- |
| default | 0.3 | Balanced fallback |
| planner | 0.5 | Some creativity needed for diverse attack strategies |
| executor | 0.3 | Stable, reliable tool calls |
| reflector | 0.2 | Precise causal analysis |
| expert_analysis | 0.7 | Creative problem-solving for hard escalations |
| summarizer | 0.2 | Stable, concise output |
| reflector_validator | 0.1 | High determinism for binary yes/no judgements |
| planner_crisis_expert | 0.4 | Balance between stability and exploration during crisis re-planning |
Temperature is not configurable via environment variable. Edit LLM_TEMPERATURES in conf/config.py to change these values.
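As a sketch, the table above might map to a dict like the one below. The name `LLM_TEMPERATURES` and the values are from this page; the fall-back-to-`default` lookup is an assumption about how unknown roles are handled.

```python
# Per-role temperatures as they might appear in conf/config.py
LLM_TEMPERATURES = {
    "default": 0.3,
    "planner": 0.5,
    "executor": 0.3,
    "reflector": 0.2,
    "expert_analysis": 0.7,
    "summarizer": 0.2,
    "reflector_validator": 0.1,
    "planner_crisis_expert": 0.4,
}

def temperature_for(role: str) -> float:
    # Assumed behavior: unknown roles fall back to the "default" entry
    return LLM_TEMPERATURES.get(role, LLM_TEMPERATURES["default"])
```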

Thinking mode

Thinking mode passes extra_body: {thinking: "hidden|visible"} in the request payload, allowing providers that support extended reasoning (e.g., DeepSeek R1) to expose their chain-of-thought.
1. Enable extra_body

.env
LLM_EXTRA_BODY_ENABLED=true
2. Set the thinking mode

.env
# Apply to all roles at once
LLM_DEFAULT_THINKING=hidden

# Or override individual roles
LLM_PLANNER_THINKING=visible
LLM_EXECUTOR_THINKING=off
| Value | Effect |
| --- | --- |
| off | No extra_body injected (default) |
| hidden | Reasoning enabled; chain-of-thought not returned in the response |
| visible | Reasoning enabled; chain-of-thought returned (e.g., in reasoning_content) |
Thinking mode is only supported by providers that accept extra_body. It is ignored when LLM_PROVIDER=anthropic, which uses the native Anthropic API format.
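Putting the rules above together, the injection logic can be sketched like this: no `extra_body` for Anthropic, none unless the feature flag is on, and the role-specific mode overrides the default. The env names are from this page; `thinking_extra_body` is an illustrative helper, not the agent's real code.

```python
import os

def thinking_extra_body(role: str, provider: str = "openai"):
    """Return the extra_body dict for a role, or None when nothing is injected."""
    if provider == "anthropic":
        return None  # native Anthropic API: thinking mode is ignored
    if os.environ.get("LLM_EXTRA_BODY_ENABLED", "").lower() != "true":
        return None
    # Role-specific setting first, then the shared default, then "off"
    mode = os.environ.get(
        f"LLM_{role.upper()}_THINKING",
        os.environ.get("LLM_DEFAULT_THINKING", "off"),
    )
    if mode == "off":
        return None
    return {"thinking": mode}  # "hidden" or "visible"

os.environ["LLM_EXTRA_BODY_ENABLED"] = "true"
os.environ["LLM_DEFAULT_THINKING"] = "hidden"
os.environ["LLM_PLANNER_THINKING"] = "visible"

thinking_extra_body("planner")   # {"thinking": "visible"}
thinking_extra_body("executor")  # {"thinking": "hidden"}, via the default
```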

Fallback API key

The agent automatically switches to a fallback key on 429 rate-limit errors, then reverts to the primary key for subsequent requests.
.env
# OpenAI-compatible fallback
LLM_FALLBACK_API_KEY=sk-backup-...

# Anthropic fallback
ANTHROPIC_FALLBACK_API_KEY=sk-ant-backup-...
If no fallback key is configured, the client uses exponential back-off (10 s, 20 s, 40 s … up to 120 s) with up to 10 retries before raising an error.
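The retry policy described above might look roughly like the sketch below: on a 429, switch to the fallback key if one is configured; otherwise back off 10 s, 20 s, 40 s and so on, capped at 120 s, for up to 10 retries. `call_with_retries` and `send` are hypothetical stand-ins for the agent's HTTP layer, not its actual code.

```python
import time

def call_with_retries(send, primary_key, fallback_key=None,
                      max_retries=10, sleep=time.sleep):
    """Call send(key) -> (status, body), handling 429s per the policy above."""
    key = primary_key
    for attempt in range(max_retries + 1):
        status, body = send(key)
        if status != 429:
            return body
        if fallback_key and key == primary_key:
            key = fallback_key  # retry immediately on the backup key
            continue
        if attempt == max_retries:
            break
        # Exponential back-off: 10 s, 20 s, 40 s ... capped at 120 s
        sleep(min(10 * 2 ** attempt, 120))
    raise RuntimeError("rate limited after all retries")
```

The `sleep` parameter is injectable only to make the sketch easy to exercise without real waits.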

Model recommendations

DeepSeek V3 (deepseek-chat) was the primary model used in the published benchmark evaluation. It offers a strong price-to-performance ratio for penetration testing workloads.
.env
LLM_PROVIDER=openai
LLM_API_BASE_URL=https://api.deepseek.com/v1
LLM_API_KEY=sk-...
LLM_DEFAULT_MODEL=deepseek-chat
GPT-4o provides strong reasoning and reliable JSON-mode output. It is a safe default for all roles.
.env
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_DEFAULT_MODEL=gpt-4o
Claude 3.5 Sonnet is a strong all-round model for both planning and execution. Use LLM_PROVIDER=anthropic with the Anthropic API, or point LLM_API_BASE_URL at an OpenAI-compatible Claude proxy.
.env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_DEFAULT_MODEL=claude-3-5-sonnet-20240620