

LuaN1aoAgent provides three console output modes and a prompt language switch. These settings affect what you see in the terminal during a run — they do not change the agent’s behaviour or the content of log files.

Setting the output mode

.env
```ini
# "simple", "default", or "debug"
OUTPUT_MODE=default
```
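For illustration, a startup routine might read and validate this variable roughly like this (a minimal sketch; `get_output_mode` and `VALID_MODES` are hypothetical names, not part of the LuaN1aoAgent API):

```python
import os

# The three modes documented on this page.
VALID_MODES = ("simple", "default", "debug")

def get_output_mode(default="default"):
    """Read OUTPUT_MODE from the environment, falling back to the default.

    Hypothetical helper: normalises case/whitespace and rejects unknown values.
    """
    mode = os.environ.get("OUTPUT_MODE", default).strip().lower()
    if mode not in VALID_MODES:
        raise ValueError(f"OUTPUT_MODE must be one of {VALID_MODES}, got {mode!r}")
    return mode
```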

Mode comparison

|                                 | simple | default | debug |
| ------------------------------- | ------ | ------- | ----- |
| Final result / flag             | Yes    | Yes     | Yes   |
| Task plan summary               | No     | Yes     | Yes   |
| Per-step tool calls             | No     | Yes     | Yes   |
| Tool outputs (truncated)        | No     | Yes     | Yes   |
| Reflector analysis              | No     | No      | Yes   |
| Causal graph updates            | No     | No      | Yes   |
| LLM request / response payloads | No     | No      | Yes   |
| Token usage per call            | No     | No      | Yes   |
| Full tool stdout (untruncated)  | No     | No      | Yes   |

simple

Minimal output — only the final result or captured flag is printed. Suitable for automated pipelines, CI environments, or batch runs where you only care about the outcome.
```shell
OUTPUT_MODE=simple python agent.py --target http://192.168.1.10
```
Example output:
```text
[+] Mission complete
Flag: CTF{3xf1ltr4t3d_s3cr3t}
```
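In a CI or batch pipeline you typically want to capture the flag programmatically. A minimal sketch of parsing simple-mode output (the `extract_flag` helper is hypothetical, assuming the `Flag: ...` line format shown above):

```python
import re

def extract_flag(output):
    """Pull a CTF-style flag from simple-mode console output, if present.

    Hypothetical helper: matches the 'Flag: <value>' line shown above.
    """
    m = re.search(r"Flag:\s*(\S+)", output)
    return m.group(1) if m else None
```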

default

Standard output for interactive use. Shows the current task plan, each executor step, tool names and arguments, truncated tool outputs, and a brief summary at the end of each P-E-R cycle. This is the recommended mode for most users.
```shell
OUTPUT_MODE=default python agent.py --target http://192.168.1.10
```
Example output:
```text
[Planner] Building task graph...
[Task 1] Enumerate web directories
  [Step 1] dirsearch_scan(url="http://192.168.1.10", extensions="php,html")
  [Output]  200 /admin/login.php
           200 /backup/db.sql
[Reflector] Updating causal graph...
[+] Cycle 1 complete. 2 new artifacts.
```
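The truncated tool output in this mode could be produced by a helper along these lines (an illustrative sketch; `truncate_output` and its limits are assumptions, not the agent's actual implementation):

```python
def truncate_output(text, max_lines=5, max_chars=400):
    """Shorten tool output for console display, keeping the first few lines.

    Hypothetical helper: caps both line count and total character count.
    """
    lines = text.splitlines()
    if len(lines) > max_lines:
        text = "\n".join(lines[:max_lines]) + f"\n... ({len(lines) - max_lines} more lines)"
    if len(text) > max_chars:
        text = text[:max_chars].rstrip() + " ..."
    return text
```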

debug

Verbose output equivalent to --verbose. Prints everything default shows plus full LLM request/response payloads, token usage per API call, reflector causal graph diffs, and untruncated tool output. Use this when diagnosing unexpected agent behaviour.
```shell
OUTPUT_MODE=debug python agent.py --target http://192.168.1.10
```
Debug mode can produce very large amounts of output for long runs. Consider redirecting to a file:
```shell
OUTPUT_MODE=debug python agent.py --target http://192.168.1.10 2>&1 | tee run.log
```

Prompt language

PROMPT_LANGUAGE controls the language used in all internal agent prompts — system messages, planning instructions, and reflector directives sent to the LLM.
.env
```ini
# "zh" (Chinese, default) or "en" (English)
PROMPT_LANGUAGE=en
```
| Value | Description |
| ----- | ----------- |
| zh    | Chinese prompts (default). Used in the original research and benchmarks. |
| en    | English prompts. |
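A language switch like this is typically implemented as a lookup into per-language prompt templates. A minimal sketch under that assumption (the `PROMPTS` dictionary and its contents are invented placeholders, not LuaN1aoAgent's real prompts):

```python
import os

# Hypothetical per-language prompt templates; placeholder text only.
PROMPTS = {
    "zh": {"system": "你是一名自动化渗透测试智能体。"},
    "en": {"system": "You are an automated penetration-testing agent."},
}

def get_prompt(name):
    """Select a prompt template based on PROMPT_LANGUAGE, defaulting to zh."""
    lang = os.environ.get("PROMPT_LANGUAGE", "zh")
    if lang not in PROMPTS:
        lang = "zh"
    return PROMPTS[lang][name]
```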
PROMPT_LANGUAGE affects the language of the prompts sent to the LLM, not the language of console output or log files. Tool output and LLM responses will still appear in whatever language the model produces.
If you are using a model that performs better in English (e.g., GPT-4o, Claude), set PROMPT_LANGUAGE=en. For models fine-tuned on Chinese data (e.g., some DeepSeek variants), the default zh may yield better results.

Log files

Console output mode does not affect file logging. Logs are always written at full verbosity to the logs/ directory:
| File | Contents |
| ---- | -------- |
| logs/mcp_service.log | MCP server tool execution events |
| logs/agent_*.log | Per-run agent trace (created at run start) |
To adjust the log level for file output, set the LOG_LEVEL environment variable:
.env
```ini
# DEBUG, INFO, WARNING, ERROR
LOG_LEVEL=INFO
```
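With Python's standard `logging` module, a LOG_LEVEL variable like this can be mapped onto a file handler roughly as follows (a hypothetical sketch; the logger name, format, and `setup_file_logging` helper are assumptions, not the agent's actual code):

```python
import logging
import os

def setup_file_logging(path="logs/agent_run.log"):
    """Configure a file logger whose level is taken from LOG_LEVEL.

    Unknown or missing LOG_LEVEL values fall back to INFO.
    """
    level = getattr(logging, os.environ.get("LOG_LEVEL", "INFO").upper(), logging.INFO)
    logger = logging.getLogger("agent")
    logger.setLevel(level)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```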