

Every agent run produces a structured log directory and persists task data to a SQLite database for Web UI consumption.

Log directory structure

logs/{task-name}/{timestamp}/
├── run_log.json        # Complete execution log with all P-E-R interactions
├── metrics.json        # Performance metrics and task statistics
└── console_output.log  # Formatted console output (mirrors stdout)
The default path is logs/{task-name}/{timestamp}/ where timestamp is formatted YYYYMMDD_HHMMSS. Override with --log-dir.
All files are written via atomic rename (tempfile + os.replace) to prevent corruption if the process is interrupted mid-write.
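As a sketch, the tempfile + os.replace pattern described above looks like this (the helper name and details are illustrative, not the agent's actual code):

import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON to a temp file in the target directory, then atomically swap it in."""
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())      # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)    # atomic rename; readers never see a partial file
    except BaseException:
        os.unlink(tmp_path)           # discard the temp file if the write failed
        raise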

metrics.json

A JSON object updated after each P-E-R cycle and finalized on task completion. Intermediate snapshots are written throughout the run so in-progress metrics are always readable.

Monotonic key protection

Before each write, the file is read and compared against the in-memory values. For the keys cost_cny, total_tokens, prompt_tokens, and completion_tokens, the higher of the two values is always kept. This prevents a stale in-memory snapshot from overwriting fresher data written by a parallel executor subtask.
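A minimal sketch of that merge, assuming the function name below (the real implementation may differ):

import json
import os

# Counters that only ever grow; never let a stale snapshot lower them.
MONOTONIC_KEYS = ("cost_cny", "total_tokens", "prompt_tokens", "completion_tokens")

def merge_monotonic(path, metrics):
    if os.path.exists(path):
        with open(path) as f:
            on_disk = json.load(f)
        for key in MONOTONIC_KEYS:
            metrics[key] = max(metrics.get(key, 0), on_disk.get(key, 0))
    return metrics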

Fields

task_name (string)
The value passed to --task-name.

task_id (string)
Unique operation ID in the format task_{timestamp}_{uuid_prefix}, or the value passed to --op-id.

start_time (number)
Unix timestamp (float) at which the agent started.

end_time (number)
Unix timestamp (float) at task completion or termination. null while the task is running.

total_time_seconds (number)
Elapsed time in seconds. Updated on every metrics write.

total_tokens (number)
Cumulative token count across all LLM calls.

prompt_tokens (number)
Tokens consumed by all input prompts.

completion_tokens (number)
Tokens generated by all completions.

cost_cny (number)
Total LLM cost in CNY (Chinese yuan). Calculated per-call by the LLM client based on model pricing.

tool_calls (object)
Dictionary mapping tool name to total invocation count across the entire run.
{
  "http_request": 42,
  "shell_exec": 15,
  "think": 8,
  "sqlmap_tool": 2
}

execution_steps (number)
Total number of executor steps taken across all subtasks.

plan_steps (number)
Number of Planner invocations (initial plan + dynamic re-plans).

reflect_steps (number)
Number of Reflector invocations (one per completed subtask, plus the final global reflection).

success (boolean)
true if the agent determined the top-level goal was achieved.

success_info (object)
Details of the success determination; in the example below it carries a found flag and a human-readable reason.

artifacts_found (number)
Count of nodes in the causal graph at task end. Reflects the number of evidence, hypothesis, vulnerability, and exploit nodes accumulated.

causal_graph_nodes (array)
Full list of causal graph node tuples [node_id, node_data] at task end. Each node_data object contains node_type, description, confidence (for hypotheses), and other type-specific fields.

deployment_time (number)
Seconds elapsed between process start and GraphManager initialization. Measures agent bootstrap overhead.

termination_reason (string)
Present only if the task was terminated by a resource circuit breaker.

Value                         Meaning
global_max_cycles_exceeded    Reached the GLOBAL_MAX_CYCLES limit (default: 50)
global_token_limit_exceeded   Exceeded GLOBAL_MAX_TOKEN_USAGE (default: 5,000,000)

error (string)
Error message if the task terminated with an unhandled exception. Absent on successful runs.

ablation_mode (string)
Present only in ReAct mode runs. Value: "react".

Example

{
  "task_name": "web_test",
  "task_id": "task_1704067200_a1b2c3d4",
  "start_time": 1704067200.123,
  "end_time": 1704068400.456,
  "total_time_seconds": 1200.333,
  "total_tokens": 284500,
  "prompt_tokens": 230000,
  "completion_tokens": 54500,
  "cost_cny": 1.42,
  "tool_calls": {
    "http_request": 38,
    "think": 12,
    "sqlmap_tool": 1,
    "shell_exec": 9
  },
  "execution_steps": 47,
  "plan_steps": 4,
  "reflect_steps": 6,
  "success": true,
  "success_info": {
    "found": true,
    "reason": "Global mission accomplished signal received from Planner."
  },
  "artifacts_found": 14,
  "deployment_time": 2.1
}

run_log.json

A JSON array of log entries appended by the main P-E-R loop. Entries are never modified once written; on each save the full array is rewritten atomically.
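
A sketch of that save pattern, reusing the atomic_write_json helper sketched in the metrics.json section (names are illustrative):

import json
import os
import time

def append_log_entry(path, entry):
    entries = []
    if os.path.exists(path):
        with open(path) as f:
            entries = json.load(f)        # load the existing array
    entry.setdefault("timestamp", time.time())
    entries.append(entry)                 # append-only: existing entries are untouched
    atomic_write_json(path, entries)      # rewrite the whole array atomically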

Entry types

task_initialized
Written once at startup.
{
  "event": "task_initialized",
  "task_id": "task_1704067200_a1b2c3d4",
  "goal": "Test http://target.com for SQL injection",
  "timestamp": 1704067200.5
}

initial_plan
Written after the first Planner call. data contains the raw list of graph operations returned by the Planner.
{
  "event": "initial_plan",
  "data": [ { "op": "add_node", "node": { "id": "recon", "description": "..." } } ],
  "metrics": { "prompt_tokens": 1500, "completion_tokens": 300, "plan_steps": 1 },
  "timestamp": 1704067210.2
}

Re-plan entries
Written after each subsequent Planner call. data contains the full plan response, including graph_operations, global_mission_accomplished, and global_mission_briefing.

executor_cycle_completed
Written after each subtask finishes execution.
{
  "event": "executor_cycle_completed",
  "subtask_id": "recon",
  "status": "completed",
  "metrics": { "execution_steps": 8, "prompt_tokens": 12000 },
  "timestamp": 1704067450.8
}

Subtask reflection
Written after the Reflector finishes auditing a subtask. Contains the full reflection output, including audit_result, key_findings, causal_graph_updates, and key_facts.

Final reflection
Written once after the main loop exits. Contains the Reflector’s final global audit of the entire run.

SQLite database (luan1ao.db)

In parallel with the file logs, all task data is persisted to a SQLite database for Web UI consumption. The database survives process restarts — session history is preserved across runs.

What is stored

  • Sessions / tasks — one row per op_id, with name, goal, status, and timestamps.
  • Graph nodes and edges — both task DAG nodes and causal graph nodes, upserted after every modification.
  • Log events — llm.*, execution.*, and graph.changed events are persisted for real-time streaming.

Node sync behavior

Nodes are synced to the database asynchronously via schedule_coroutine. Writes use atomic upsert (atomic_upsert_graph_data) to batch multiple nodes and edges in a single transaction, reducing SQLite contention under parallel subtask execution.
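
As an illustration, a batched upsert in a single transaction might look like the following. The schema and the real atomic_upsert_graph_data signature are assumptions:

import json
import sqlite3

def atomic_upsert_graph_data(db_path, nodes, edges):
    # Hypothetical schema: nodes(id PRIMARY KEY, data), edges(src, dst, UNIQUE(src, dst)).
    # nodes: iterable of dicts with an "id" key; edges: iterable of (src, dst) pairs.
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction for the whole batch reduces lock contention
            conn.executemany(
                "INSERT INTO nodes (id, data) VALUES (?, ?) "
                "ON CONFLICT(id) DO UPDATE SET data = excluded.data",
                [(n["id"], json.dumps(n)) for n in nodes],
            )
            conn.executemany(
                "INSERT INTO edges (src, dst) VALUES (?, ?) "
                "ON CONFLICT(src, dst) DO NOTHING",
                edges,
            )
    finally:
        conn.close()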

Real-time monitoring

Web UI

When the Web service (python web/server.py) is running, the frontend receives live updates from the database over Server-Sent Events (SSE). Start the Web service before starting the agent, then launch the agent with --web to print the session URL.
# Terminal 1: start web service
python web/server.py

# Terminal 2: start agent
python agent.py \
  --goal "Test http://target.com" \
  --task-name "demo" \
  --web
The agent prints the session URL on startup:
http://127.0.0.1:8088/?op_id=task_1704067200_a1b2c3d4
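
If you want to consume the same event stream outside the frontend, a minimal SSE reader looks like this. The /events route and query parameter are assumptions for illustration; check the Web service for the actual endpoint:

import requests  # pip install requests

url = "http://127.0.0.1:8088/events"                 # hypothetical SSE endpoint
params = {"op_id": "task_1704067200_a1b2c3d4"}
with requests.get(url, params=params, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print(line[len("data:"):].strip())       # one JSON event per message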

Console output modes

Mode      What is printed
simple    Task graph structure and key facts only
default   Plan summaries, reflection results, causal graph changes
debug     Full LLM prompt and response text, all graph operations
Console output is simultaneously written to console_output.log in the log directory.
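
Mirroring stdout to a file is typically done with a small tee wrapper; a sketch (not the agent's actual class names):

import sys

class Tee:
    """Duplicate writes to several streams (console plus console_output.log)."""
    def __init__(self, *streams):
        self.streams = streams
    def write(self, data):
        for s in self.streams:
            s.write(data)
    def flush(self):
        for s in self.streams:
            s.flush()

log_file = open("logs/demo/20240101_120000/console_output.log", "a")  # example path
sys.stdout = Tee(sys.__stdout__, log_file)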

Graceful shutdown and log safety

The agent registers SIGINT and SIGTERM handlers that trigger a clean shutdown via Python’s sys.exit(0). The finally block in main() guarantees a final log save regardless of how the process exits — including keyboard interrupt, signal, or unhandled exception. Halt signal files (/tmp/{task_id}.halt) written by complete_mission are cleaned up in the same finally block.
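
A sketch of that shutdown pattern; run_agent, save_logs, and task_id are placeholders for the agent's real internals:

import os
import signal
import sys

def _handle_signal(signum, frame):
    sys.exit(0)  # raises SystemExit, unwinding through main()'s finally block

signal.signal(signal.SIGINT, _handle_signal)
signal.signal(signal.SIGTERM, _handle_signal)

def main(task_id, run_agent, save_logs):
    # run_agent / save_logs stand in for the real P-E-R loop and log writer.
    try:
        run_agent()
    finally:
        save_logs()                          # final log save runs on any exit path
        halt_file = f"/tmp/{task_id}.halt"   # halt signal written by complete_mission
        if os.path.exists(halt_file):
            os.remove(halt_file)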