Documentation Index
Fetch the complete documentation index at: https://mintlify.com/SanMuzZzZz/LuaN1aoAgent/llms.txt
Use this file to discover all available pages before exploring further.
Status overview
Completed
Human-in-the-Loop Mode is fully implemented and available in the current release.
Planned
Experience Self-Evolution, Tool Ecosystem Expansion, and Multimodal Capabilities are on the near-term roadmap.
Completed features
Human-in-the-Loop (HITL) mode
LuaN1aoAgent supports supervised operation, allowing security experts to review and intervene in the agent’s decision-making process in real time.
Pre-high-risk operation confirmation
Before executing any operation flagged as high-risk, the agent pauses and waits for explicit human approval. This prevents irreversible actions from being taken without oversight. The mode is enabled via a setting in .env.
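The exact .env keys are not listed on this page, so the snippet below is a hypothetical sketch of what enabling the mode might look like; the variable names are illustrative, not the project's actual configuration:

```ini
# Hypothetical .env entries -- the real key names may differ
HITL_MODE=true            # pause and wait for approval before high-risk operations
HITL_TIMEOUT_SECONDS=300  # optional: how long to wait before aborting the step
```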
Runtime task graph editing (Graph Injection)
Experts can inspect, modify, and inject new sub-tasks into the live task graph while the agent is running — without stopping the session. Modifications can be made through either the Web UI or the CLI.
- Web UI: An approval modal appears automatically after plan generation. Use “Modify” to edit the plan JSON directly, or “Add Task” to inject new sub-tasks.
- CLI: The agent pauses at the HITL > prompt. Type y to approve, n to reject, or m to open the system editor and modify the plan.
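The CLI flow above can be sketched as a simple approval gate. This is an illustrative reconstruction, not the project's actual code; the function name and the injectable `edit_fn` hook are assumptions added to make the sketch testable:

```python
import json
import os
import subprocess
import tempfile

def hitl_gate(plan, read_input=input, edit_fn=None):
    """Pause at a HITL prompt: 'y' approves the plan, 'n' rejects it,
    'm' lets the expert modify the plan JSON before deciding again."""
    while True:
        choice = read_input("HITL > ").strip().lower()
        if choice == "y":
            return plan                      # approved (possibly modified)
        if choice == "n":
            return None                      # rejected: caller aborts the step
        if choice == "m":
            if edit_fn is not None:          # injectable hook, useful for testing
                plan = edit_fn(plan)
            else:
                # round-trip the plan JSON through the system editor
                with tempfile.NamedTemporaryFile(
                        "w", suffix=".json", delete=False) as f:
                    json.dump(plan, f, indent=2)
                    path = f.name
                subprocess.call([os.environ.get("EDITOR", "vi"), path])
                with open(path) as f:
                    plan = json.load(f)
            # loop back so the modified plan still gets a final y/n
```

The loop deliberately re-prompts after a modification, so an edited plan is never executed without an explicit approval.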
Expert intervention and strategy injection
Security experts can inject domain knowledge, alternative hypotheses, or targeted instructions directly into the running plan. The Planner incorporates this guidance on its next planning cycle.
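One minimal way to model "incorporated on the next planning cycle" is a drain-once hint queue between the expert interface and the Planner. The class and method names below are hypothetical, sketched only to make the hand-off concrete:

```python
from collections import deque

class PlannerHints:
    """Hypothetical sketch: experts push guidance while the agent runs;
    the Planner drains the queue at the start of its next planning cycle."""

    def __init__(self):
        self._pending = deque()

    def inject(self, hint: str) -> None:
        self._pending.append(hint)           # callable mid-run, from UI or CLI

    def drain(self) -> list:
        hints, self._pending = list(self._pending), deque()
        return hints                         # each hint is consumed exactly once

# Example: an expert steers the next cycle toward a specific hypothesis
hints = PlannerHints()
hints.inject("Target is likely behind a WAF; prefer time-based payloads")
```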
Planned features
Experience self-evolution
Persistent cross-task learning so the agent improves from every engagement it runs.
Cross-task long-term memory
The agent will maintain a persistent memory store across separate tasks. Findings, failed approaches, and confirmed vulnerabilities from past engagements inform future ones.
Automatic extraction of successful attack patterns
When an exploit succeeds, the attack chain is automatically extracted, vectorized, and stored in the knowledge library. This creates a self-growing playbook.
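The extract-vectorize-store-retrieve loop described above can be illustrated with a toy playbook. This sketch uses bag-of-words cosine similarity purely for demonstration; a real implementation would presumably use learned embeddings, and every name here is hypothetical:

```python
import math
from collections import Counter

class AttackPlaybook:
    """Illustrative self-growing playbook: successful attack chains are
    stored alongside a token-count vector; later tasks recall the most
    similar past chain. Bag-of-words stands in for real embeddings."""

    def __init__(self):
        self._entries = []                   # list of (chain_text, Counter)

    @staticmethod
    def _vec(text: str) -> Counter:
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def store(self, chain: str) -> None:
        self._entries.append((chain, self._vec(chain)))

    def recall(self, query: str):
        if not self._entries:
            return None
        q = self._vec(query)
        return max(self._entries, key=lambda e: self._cosine(q, e[1]))[0]
```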
Tool ecosystem expansion
Broader tool integration to cover more of the standard penetration testing toolkit.
Metasploit RPC interface
Native integration with the Metasploit Framework via its RPC API. The Executor will be able to invoke Metasploit modules directly as part of the tool chain, enabling exploitation of the full Metasploit module library.
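A plausible Executor-side shape for this is a small adapter that turns a planner task into the (module type, module name, options) triple a Metasploit RPC client expects. The adapter below and its task schema are hypothetical; the pymetasploit3 calls shown in the docstring follow that library's documented interface, but nothing here is confirmed project code:

```python
def msf_invocation(task: dict):
    """Hypothetical adapter step: map a planner task onto a Metasploit
    module invocation. An Executor could then run it with pymetasploit3:

        from pymetasploit3.msfrpc import MsfRpcClient
        client = MsfRpcClient("password")        # talks to a running msfrpcd
        mod = client.modules.use(mtype, mname)
        for key, value in options.items():
            mod[key] = value
        mod.execute(payload="generic/shell_reverse_tcp")
    """
    # "exploit/unix/ftp/vsftpd_234_backdoor" -> ("exploit", "unix/ftp/...")
    mtype, _, mname = task["module"].partition("/")
    options = {"RHOSTS": task["target"]}
    options.update(task.get("options", {}))
    return mtype, mname, options
```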
Nuclei, Xray, and AWVS scanner support
Integration with industry-standard scanning tools:
- Nuclei — Template-based vulnerability scanning
- Xray — Passive vulnerability scanner
- AWVS — Web application vulnerability scanner
These scanners can currently be wired in by hand through mcp.json; first-class support will include pre-built configurations and result parsing.
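The page does not show the mcp.json schema LuaN1aoAgent expects; the fragment below follows the common MCP client convention, and the nuclei-mcp command name is invented for illustration:

```json
{
  "mcpServers": {
    "nuclei": {
      "command": "nuclei-mcp",
      "args": ["--templates", "/opt/nuclei-templates"]
    }
  }
}
```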
Docker sandboxed tool execution
A Docker-based sandbox environment for executing tools in isolation. This eliminates the host-system risk currently posed by shell_exec and python_exec, making the agent safe to run outside of a dedicated VM.
Multimodal capabilities
Extending the agent’s perceptual range beyond text-based HTTP traffic.
Image recognition
CAPTCHA solving and screenshot analysis. The agent will be able to interpret visual elements on web pages, enabling it to bypass common bot-detection mechanisms and analyze rendered page state.
Traffic analysis
PCAP file parsing. The agent will be able to ingest raw network captures, reconstruct protocol-level interactions, and identify anomalies that are invisible at the HTTP application layer.
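To make "ingest raw network captures" concrete, here is a stdlib sketch of the lowest layer such a feature would build on: walking the record headers of a classic libpcap file (the 24-byte global header and 16-byte per-packet headers are part of the published libpcap format). This is illustrative, not the project's planned parser, and it does not handle the newer pcapng format:

```python
import struct

def iter_pcap_packets(data: bytes):
    """Parse a classic libpcap capture from memory and yield
    (timestamp_seconds, packet_bytes) for each record."""
    magic = data[:4]
    if magic == b"\xd4\xc3\xb2\xa1":         # little-endian, microsecond stamps
        endian = "<"
    elif magic == b"\xa1\xb2\xc3\xd4":       # big-endian, microsecond stamps
        endian = ">"
    else:
        raise ValueError("not a classic pcap file")
    offset = 24                              # skip the 24-byte global header
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from(
            endian + "IIII", data, offset)
        offset += 16
        yield ts_sec + ts_usec / 1e6, data[offset:offset + incl_len]
        offset += incl_len
```

Reconstructing protocol-level interactions would then layer TCP stream reassembly and protocol decoders on top of this record iterator.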
Long-term vision
These items represent research-grade goals beyond the near-term roadmap.
Collaborative agent network
Multi-agent distributed collaboration: a network of specialized agents operating in parallel — one scanning, one exploiting, one analyzing results — with shared state and coordinated task assignment. This would allow LuaN1aoAgent to scale horizontally across large, complex targets.
Reinforcement learning integration
Autonomous optimization of attack strategies through environmental interaction. Rather than relying solely on LLM priors, agents would refine their decision policies through trial and feedback, achieving strategy convergence in complex scenarios over time.
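"Refinement through trial and feedback" can be grounded with the simplest possible example: an epsilon-greedy bandit over candidate strategies. This is a deliberately minimal illustration of the decision-policy idea, not a claim about the algorithm LuaN1aoAgent would use; all names are hypothetical:

```python
import random

class StrategyBandit:
    """Minimal epsilon-greedy sketch: track the average reward of each
    strategy, mostly exploit the current best, occasionally explore."""

    def __init__(self, strategies, epsilon=0.1, rng=None):
        self.epsilon = epsilon
        self.rng = rng or random.Random(0)
        self.totals = {s: 0.0 for s in strategies}
        self.counts = {s: 0 for s in strategies}

    def choose(self) -> str:
        # explore at random until we have any feedback, or with prob. epsilon
        if self.rng.random() < self.epsilon or not any(self.counts.values()):
            return self.rng.choice(list(self.totals))
        return max(self.totals,
                   key=lambda s: self.totals[s] / max(self.counts[s], 1))

    def feedback(self, strategy: str, reward: float) -> None:
        self.totals[strategy] += reward
        self.counts[strategy] += 1
```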
Compliance report generation
Automatic generation of compliance-ready penetration testing reports. After a task completes, the agent assembles a structured report from the causal graph, listing findings, evidence chains, severity assessments, and remediation guidance, in formats suitable for regulatory and client delivery.
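The assembly step described above might look like the sketch below: flattening findings (here modeled as plain dicts, since the causal-graph schema is not published) into a severity-ordered report with evidence chains and remediation. The finding fields are invented for illustration:

```python
def build_report(findings: list) -> str:
    """Hypothetical sketch: render causal-graph findings as a
    severity-ordered Markdown report."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    lines = ["# Penetration Test Report", ""]
    for f in sorted(findings, key=lambda f: order.get(f["severity"], 4)):
        lines += [
            f"## [{f['severity'].upper()}] {f['title']}",
            "Evidence chain: " + " -> ".join(f["evidence"]),
            f"Remediation: {f['remediation']}",
            "",
        ]
    return "\n".join(lines)
```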
Roadmap items are subject to change. To suggest a feature or follow development, visit GitHub Issues or GitHub Discussions.