OPTA CODE
Reference Guide
CLI Masterclass
A comprehensive deep dive into the Opta CLI. Master interactive chat, autonomous task execution, and local daemon orchestration right from your terminal.
Updated 2026-03-02
Ecosystem Role
The Opta CLI is your primary interface to the Opta Local stack. It provides interactive AI chat, autonomous task execution, model management, session control, and daemon lifecycle commands. It seamlessly connects to the local daemon, which orchestrates sessions and proxies requests to Opta LMX for inference.
CLI
Terminal
Daemon
LMX
Inference
Architecture & The Daemon
Under the hood, the CLI doesn't perform inference itself. Instead, it connects to the Opta Daemon (running in the background), which manages long-lived sessions, tool permissions, and communication with the LMX server. You can control the daemon directly using commands like
opta daemon start or view its health via opta status.
Two Modes of Operation: Chat vs. Do
The CLI operates in two fundamental modes designed for different workflows. Chat Mode is an interactive conversation session. You type messages, the model streams back responses token-by-token in real time, and you manually approve tool executions. Do Mode runs an autonomous agent loop. It takes a natural-language task, creates a plan, and automatically executes safe tools until the task is complete.
Chat Mode
Interactive, user-steered, prompt-driven.
Do Mode
Autonomous, agentic loop, goal-driven.
Use opta chat to explore and steer the conversation manually. Use opta do "task" when you have a well-defined goal and want the AI to execute it with minimal interruption.
Working with Tools
Both chat and do modes have access to the same powerful toolset. Read-only tools like
read_file and search_files are auto-approved by default. Destructive tools like write_file or run_shell_command trigger a Permission Prompt where you can approve, deny, or auto-approve for the remainder of the session.
Permission Gateway
read_file
Auto-approved
run_shell_command
Requires Approval
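The gating policy just described can be sketched in a few lines. This is a hypothetical illustration, not Opta's actual implementation: the tool names come from the examples above, and the `ask` callback stands in for the interactive prompt.

```python
# Hypothetical sketch of a permission gateway: read-only tools pass
# automatically, destructive tools prompt, and "approve all" is
# remembered for the rest of the session.

READ_ONLY_TOOLS = {"read_file", "search_files"}   # auto-approved by default

def gate(tool: str, session_allow: set, ask) -> bool:
    """Return True if the tool call may proceed."""
    if tool in READ_ONLY_TOOLS or tool in session_allow:
        return True
    answer = ask(tool)            # e.g. the [A]pprove / [D]eny / [A]ll prompt
    if answer == "all":           # auto-approve this tool for the session
        session_allow.add(tool)
        return True
    return answer == "approve"

allow = set()
print(gate("read_file", allow, ask=lambda t: "deny"))    # True (read-only)
print(gate("write_file", allow, ask=lambda t: "all"))    # True, remembered
print(gate("write_file", allow, ask=lambda t: "deny"))   # True (session allow)
```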
Tool: write_file
Path: src/auth/validate.ts
Content: (47 lines)
[A]pprove [D]eny [A]ll for this tool [Q]uit
Context Constellations
Understanding what the AI "sees" is critical. Use
opta context map to generate a live Context Constellation. This visualizes exactly which files, terminal buffers, and API documentation are currently loaded into the AI's active memory, along with their associated token weight. It helps you trim unnecessary context to improve latency.
LMX
app.tsx
4.2k tkns
Terminal #1
840 tkns
auth.js
12k tkns
Context Pressure
78%
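The pressure gauge above is simply the loaded token weights measured against the model's context window. A minimal sketch, using the constellation's figures and an assumed 22k-token window (the real figure depends on the loaded model):

```python
# Hypothetical sketch: context pressure = loaded tokens / context window.
# Source names and the 22k window are illustrative assumptions.

def context_pressure(sources, window: int) -> float:
    """Fraction of the context window consumed by loaded sources."""
    return sum(sources.values()) / window

loaded = {"app.tsx": 4200, "Terminal #1": 840, "auth.js": 12000}
pct = round(100 * context_pressure(loaded, window=22_000))
print(f"Context Pressure: {pct}%")   # Context Pressure: 77%
```

Trimming a heavy source (here, auth.js at 12k tokens) is the fastest way to drop the percentage and reduce latency.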
Time-Travel Introspection
Autonomous execution involves risk. Opta CLI mitigates this using Time-Travel Introspection (
opta rewind). Before triggering a chain of tools in Do Mode, the Daemon captures an ephemeral snapshot of your workspace. If the agent hallucinates or goes down a destructive path, you can instantly roll back not just the files, but the AI's internal short-term memory to the exact moment before divergence.
10:42:01
Created AuthGuard
10:42:05
Snapshot Saved
10:42:15
Deleted layout.tsx
opta rewind · Reverted to 10:42:05
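The mechanism can be sketched as a snapshot that captures both workspace and short-term memory, restored together on rewind. This is a hypothetical illustration of the shape of the feature, not Opta's real snapshot format:

```python
# Hypothetical sketch of time-travel introspection: snapshot workspace
# files AND agent memory before a risky tool chain, restore both on rewind.
import copy
import datetime

class Timeline:
    def __init__(self):
        self.snapshots = []   # (timestamp, workspace, memory)

    def snapshot(self, workspace: dict, memory: list) -> str:
        ts = datetime.datetime.now().isoformat(timespec="seconds")
        self.snapshots.append((ts, copy.deepcopy(workspace), copy.deepcopy(memory)))
        return ts

    def rewind(self):
        """Restore the most recent snapshot."""
        ts, workspace, memory = self.snapshots[-1]
        return ts, copy.deepcopy(workspace), copy.deepcopy(memory)

tl = Timeline()
ws = {"layout.tsx": "<Layout/>"}
mem = ["Created AuthGuard"]
tl.snapshot(ws, mem)

del ws["layout.tsx"]              # agent goes down a destructive path
mem.append("Deleted layout.tsx")

ts, ws, mem = tl.rewind()         # roll back files AND short-term memory
print(ws)    # {'layout.tsx': '<Layout/>'}
print(mem)   # ['Created AuthGuard']
```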
Local Swarm Sub-Agents
For massive refactoring tasks, a single thread is a bottleneck. By appending
--swarm to your command, the CLI orchestrates multiple local models via LMX. A Director Agent breaks down the prompt into sub-tasks, spins up 2-3 worker agents (e.g., DeepSeek for business logic, Llama 3 for unit tests), and merges their outputs asynchronously—all executing privately on your metal.
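The director/worker fan-out can be sketched as below. This is a hypothetical illustration: the fixed task split and the worker body stand in for real LMX-backed model calls.

```python
# Hypothetical sketch of --swarm orchestration: a director breaks the
# prompt into sub-tasks, workers run concurrently, outputs are merged.
from concurrent.futures import ThreadPoolExecutor

def director(prompt: str):
    """Break the prompt into (worker, sub_task) assignments (stand-in plan)."""
    return [("01_LOGIC", "Refactor auth flow to JWTs"),
            ("02_TESTS", "Write unit tests for new middleware")]

def worker(name: str, task: str) -> str:
    # In the real system this would stream tokens from a local model via LMX.
    return f"[{name}] done: {task}"

def swarm(prompt: str):
    plan = director(prompt)
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(worker, n, t) for n, t in plan]
        return [f.result() for f in futures]   # merge worker outputs

for line in swarm("Refactor the authentication flow to use JWTs"):
    print(line)
```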
[DIRECTOR]
> Analyzing prompt...
> Spawning workers...
> Assigning Task A
> Assigning Task B
[01_LOGIC]
> Reading auth.ts...
> Generating JWT...
[02_TESTS]
> Awaiting 01_LOGIC...
opta do --swarm "Refactor the authentication flow to use JWTs instead of session cookies, and write unit tests for the new middleware."
Holographic Daemon Dashboard
Keeping track of resource constraints is vital when running local LLMs. Run
opta top to open the Holographic TUI. It provides a real-time, high-density stream of active session memory banks, context shifting overhead, and VRAM pressure—rendering the normally invisible daemon operations into a beautiful, matrix-like control center.
OPTA-TOP v2.1.0
UP: 04:12:33
VRAM PRESSURE [LMX]
REQ / SEC
142.4
| PID | SESSION | MODEL | TOK/S |
|---|---|---|---|
| 9102 | ctx-map | deepseek-r1 | 45.2 |
| 8841 | idle-worker | llama3-8b | 0.0 |
The Opta Browser & Visual Automation
Agentic tasks frequently require interacting with the outside world. By running
opta do --browser, the CLI spins up an automated Chromium instance equipped with a custom Opta Chrome Overlay. The agent natively navigates websites, extracts DOM data, and interacts with complex UIs just like a human, while providing a clear visual indicator of its active operations.
github.com/optalocal
Agent Active: click()
Atpo: Critical Code Analyzer
To ensure maximum code quality, invoke Opta's partner persona via
/atpo. Atpo is an abrasive, hyper-critical analyzer that evaluates your repository across 5 rigorous dimensions (Performance, Quality, Consistency, Architecture, Security) and generates a structured GenUI HTML report before handing execution targets back to Opta.
OPTA GENUI REPORT
2 Critical
7 High
14 Medium
5 Minor
auth.ts:142
Race condition detected in session token refresh logic. High risk of 401s under load.
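A structured report like the one above can be modeled as findings tagged with a dimension and a severity, with the header counts derived from them. This is a hypothetical sketch of the report shape; field names are illustrative, not Atpo's actual schema.

```python
# Hypothetical sketch of a structured analyzer report: findings carry one
# of the five dimensions plus a severity, and the summary counts severities.
from collections import Counter
from dataclasses import dataclass

DIMENSIONS = {"Performance", "Quality", "Consistency", "Architecture", "Security"}

@dataclass
class Finding:
    location: str        # e.g. "auth.ts:142"
    dimension: str       # one of DIMENSIONS
    severity: str        # Critical | High | Medium | Minor
    message: str

def summarize(findings):
    """Count findings per severity for the report header."""
    assert all(f.dimension in DIMENSIONS for f in findings)
    return Counter(f.severity for f in findings)

report = [
    Finding("auth.ts:142", "Security", "Critical",
            "Race condition in session token refresh logic."),
    Finding("app.tsx:17", "Performance", "High",
            "Unmemoized render in hot path."),
]
print(summarize(report)["Critical"])   # 1
```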
Opta Accounts & Identity
Opta CLI integrates directly with
accounts.optalocal.com. Run opta login to securely pull your Cloud identity, LMX API keys, tool configurations, and autonomy presets into your local machine. This guarantees a unified session experience across all Opta Local apps via Supabase SSO.
Browser Auth Callback
Secure Token Sync
CEO Mode Orchestration
When solving massively complex tasks, the standard agent loop may get lost in the weeds. Run
opta do --ceo to shift the AI's role from "worker" to "executive director". In CEO Mode, Opta does not write code. It drafts architectural strategies, spawns specialized child agents (e.g., Designer, Database, DevOps), reviews their PRs, and dictates the overall merge timeline.
Director
CEO-Agent
Sub-Task Alpha
UX/UI
Sub-Task Beta
Database
Sub-Task Gamma
Infra
Long-term Autonomy (1hr+ Runs)
Opta is designed for endurance. While most agents crash or loop infinitely when left unattended, the Opta CLI utilizes advanced error-recovery state machines, self-correction protocol limits, and exponential backoff loops. You can assign a 60-minute background refactor, step away, and trust the daemon to navigate
npm ERR! traps safely without nuking your directory.
48 MIN
Autonomy Loop
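The core of such an endurance loop is a bounded retry with exponential backoff: a failing step gets a limited number of self-correction attempts with growing delays, then the run aborts rather than looping forever. A minimal sketch (attempt limits and delays are illustrative assumptions, not Opta's actual tuning):

```python
# Hypothetical sketch of bounded error recovery with exponential backoff.
import time

def run_with_recovery(step, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return step(attempt)
        except RuntimeError:
            if attempt == max_attempts:
                raise            # give up instead of looping infinitely
            sleep(delay)         # back off before the next self-correction
            delay *= 2           # exponential growth: 1s, 2s, 4s, ...

delays = []
def flaky_build(attempt):
    if attempt < 3:
        raise RuntimeError("npm ERR! peer dep conflict")
    return "build ok"

print(run_with_recovery(flaky_build, sleep=delays.append))  # build ok
print(delays)                                               # [1.0, 2.0]
```

Injecting `sleep` makes the recovery policy testable without real waiting, which is also how a daemon can simulate long runs quickly.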
Opta Benchmark Suite
Evaluate local hardware capabilities via the integrated benchmarking suite (
opta bench). Built on a local Fastify and React runtime, it performs adversarial stress tests comparing Llama 3 against DeepSeek across metrics like tokens per second (TOK/S), time-to-first-token (TTFT), and structural logic consistency under heavy payload tasks like AI News synthesis.
> opta bench --suite heavy
RUNNING... [02/05]
DEEPSEEK-R1 (LMX)    92.4 TOK/S
LLAMA-3-8B (LMX)    114.2 TOK/S
GPT-4O-MINI (CLOUD)   45.1 TOK/S
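The two headline metrics are straightforward to compute from stream timestamps. A minimal sketch with illustrative numbers:

```python
# Hypothetical sketch of the benchmark's headline metrics: TTFT measures
# responsiveness, TOK/S measures sustained generation throughput.

def ttft(request_t: float, first_token_t: float) -> float:
    """Time-to-first-token: latency from request to first streamed token."""
    return first_token_t - request_t

def tokens_per_second(n_tokens: int, first_token_t: float, last_token_t: float) -> float:
    """Sustained decode throughput over the streaming window."""
    return n_tokens / (last_token_t - first_token_t)

print(ttft(0.00, 0.18))                                  # 0.18
print(round(tokens_per_second(924, 0.18, 10.18), 1))     # 92.4
```

TTFT tends to be dominated by prompt processing, while TOK/S reflects decode speed, which is why the two are reported separately.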
Session & Context Management (RAG)
The CLI automatically serializes the semantic intent of your completed sessions into local vector storage. During future, unrelated tasks, if the Daemon detects similar stack traces or error archetypes, it uses Retrieval-Augmented Generation (RAG) to quietly inject your past successful debugging steps into the agent's prompt, saving countless tokens.
ACTIVE ID: e8f...
Past Fixed Bug #1
PR Draft #12
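The retrieval step can be sketched as similarity search over past-session summaries, injecting matches into the new prompt. This is a hypothetical illustration: the word-overlap embedding below is a toy stand-in for the real vector embeddings.

```python
# Hypothetical sketch of session RAG: embed past session summaries, and
# when a new error looks similar enough, surface the matching fix.
import math

def embed(text: str) -> dict:
    """Toy bag-of-words embedding (stand-in for a real embedding model)."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

past_sessions = [
    "Fixed 401 on token refresh by rotating the JWT secret",
    "Resolved npm peer dependency conflict with overrides",
]

def retrieve(error: str, threshold: float = 0.2):
    """Return past-session summaries similar enough to inject into the prompt."""
    return [s for s in past_sessions
            if cosine(embed(error), embed(s)) >= threshold]

print(retrieve("401 error during token refresh"))
```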
LMX Model Management
You do not need a secondary application to orchestrate your local weights. You can securely pull HuggingFace models (e.g., GGUF format) and configure system prompts directly through the CLI lifecycle commands like
opta models pull <repo/name>. The daemon streams the multi-gigabyte files efficiently to your centralized cache path.
$ opta models pull deepseek-ai/DeepSeek-R1-Distill-Llama-8B
Resolved manifest...
Checked local cache...
model-00001-of-00008.safetensors
2.1GB / 4.8GB (44%)
##################------------------------
24 MB/s
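Rendering a progress bar like the one above is a matter of scaling bytes received against the total shard size. A minimal sketch with a simulated download; chunk sizes, bar width, and totals are illustrative:

```python
# Hypothetical sketch of chunked streaming with a text progress bar,
# in the style of the pull output above. The byte chunks simulate a
# network stream being written straight into the model cache.

def progress_bar(done: int, total: int, width: int = 42) -> str:
    filled = int(width * done / total)
    pct = int(100 * done / total)
    return "#" * filled + "-" * (width - filled) + f" {pct}%"

def pull(chunks, total: int):
    done = 0
    for chunk in chunks:          # each chunk would be flushed to the cache file
        done += len(chunk)
        yield progress_bar(done, total)

for line in pull([b"x" * 2_100, b"x" * 2_700], total=4_800):
    print(line)
```

Streaming chunk by chunk keeps memory flat regardless of shard size, which is what makes multi-gigabyte pulls practical.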
Conclusion: The Local Development Orchestrator
The Opta CLI is not just a chat interface; it is a hyper-capable local orchestration engine. It unifies interactive steering (Chat Mode), autonomous agency (Do Mode), Time-Travel rewind, multi-agent swarms, the Atpo critical analyzer, visual DOM interactions, and executive CEO Mode into a single terminal experience. The ultimate purpose of the CLI is to transform your local machine into a private, zero-latency software factory where models execute massive refactors natively alongside you without compromising security or context awareness.
CLI
The Ultimate Local AI Engine
Interactive Chat
Autonomous Do
Time-Travel Rewind
Local Swarms
Atpo Analyzer
CEO Mode
LMX Backends