"Your coding companion, rendered in pure terminal"
The TUI is Opta's primary interface — a full-screen Ink/React app rendered inside your terminal. It has three zones: a scrolling message timeline, a live streaming region, and a multi-line composer at the bottom. The layout adapts Opta's obsidian glass design system for TTY rendering via Ink's yoga-layout engine.
- Press `Tab` in the composer to open the Model Picker overlay
- `Ctrl+C` cancels an in-progress stream immediately
- Arrow keys scroll the message timeline without leaving the composer
- The session ID in the header is your key for daemon API calls
"Rich text, code blocks, and tables — live in your terminal"
Opta CLI includes a zero-dependency Markdown renderer that converts assistant output into styled terminal content. Code blocks are syntax-highlighted, with cyan borders and a language label. Inline code uses neon-cyan styling. Headers are bolded with a violet accent. Tables auto-align their columns. All rendering is stateless and streams token by token.
- Code blocks render with language labels pulled from the fence marker
- Tables are automatically padded for terminal-width column alignment
- Blockquotes render with a violet left-border accent
- Streaming renders incrementally — output appears as tokens arrive
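The language-label step described above (reading the label from the fence marker) can be sketched as a small pure function. This is illustrative only; the function and field names are assumptions, not Opta's internal API:

```typescript
// Sketch: extract the language label from a Markdown fence opener.
// Names are illustrative, not Opta's internal renderer API.
interface FenceInfo {
  isFence: boolean;
  language: string | null; // label shown on the code block border
}

function parseFenceLine(line: string): FenceInfo {
  // A fence opener is three-or-more backticks, optionally followed by an
  // info string whose first word becomes the language label.
  const match = line.match(/^```+\s*(\S+)?/);
  if (!match) return { isFence: false, language: null };
  return { isFence: true, language: match[1] ?? null };
}
```

Because each line can be classified on its own, this kind of parser stays stateless and works naturally with token-by-token streaming.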
"Switch models mid-session with a single keypress"
Press Tab in the TUI composer to open the Model Picker overlay — a searchable list of all models configured across every provider. Recent models appear first. Selection updates the active session immediately with no restart. The current model is always visible in the status bar.
- Type to filter — fuzzy matching on model name and provider
- LMX models appear first if LMX is healthy
- The picked model persists for the current session only
- Use `opta config set default-model` to change the default model for new sessions
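The fuzzy matching on model name and provider might work along these lines. This is a minimal subsequence-match sketch; Opta's real picker may score and rank matches differently:

```typescript
// Sketch of the picker's filtering: a query matches an entry when its
// characters appear in order inside "provider/name".
interface ModelEntry { provider: string; name: string; }

function fuzzyMatch(query: string, target: string): boolean {
  let i = 0;
  const q = query.toLowerCase();
  for (const ch of target.toLowerCase()) {
    if (ch === q[i]) i++;
    if (i === q.length) return true; // all query chars consumed
  }
  return q.length === 0; // empty query matches everything
}

function filterModels(query: string, models: ModelEntry[]): ModelEntry[] {
  return models.filter((m) => fuzzyMatch(query, `${m.provider}/${m.name}`));
}
```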
"Two modes: think with it, or let it act"
Chat mode is conversational — you exchange messages and tool calls require your approval. Do mode (opta do) is agentic — the model orchestrates a full task loop: reads files, edits code, runs commands, browses the web, and iterates until done. The mode pill in the composer switches instantly, mid-session.
- Start in Chat to explore, switch to Do when the plan is clear
- Do mode respects permission policies — write once, reuse across sessions
- `Ctrl+C` pauses the loop (it doesn't kill it) — you can resume later
- `opta do "task"` starts directly in agentic mode from the shell
"The local HTTP backbone powering every session"
The Opta Daemon runs at 127.0.0.1:9999 and exposes an HTTP v3 REST API plus WebSocket streaming. CLI, Code Desktop, and the LMX Dashboard all connect to it. HTTP uses Bearer token auth; WebSocket uses ?token=T query param. The daemon auto-starts when you run any opta command.
- `opta daemon status` checks whether the daemon is running and shows its PID
- `opta daemon install` registers a launchd/systemd service for auto-start
- The daemon token lives at `~/.config/opta/daemon/token`
- Multiple CLI instances share one daemon — sessions are isolated by ID
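A client call against the daemon's HTTP API might look like this. The `/v3/sessions` route is a guess extrapolated from the `/v3` prefix used by the streaming endpoint; only the base address and Bearer auth are stated in this doc:

```typescript
// Sketch: addressing the local daemon with Bearer-token auth.
// The exact REST routes are assumptions; the base and auth scheme
// come from the description above.
const DAEMON_BASE = "http://127.0.0.1:9999";

function daemonRequest(
  path: string,
  token: string,
): { url: string; headers: Record<string, string> } {
  return {
    url: `${DAEMON_BASE}${path}`,
    headers: {
      Authorization: `Bearer ${token}`, // HTTP uses Bearer token auth
      Accept: "application/json",
    },
  };
}

// Usage with fetch (Node 18+):
// const { url, headers } = daemonRequest("/v3/sessions", token);
// const sessions = await fetch(url, { headers }).then((r) => r.json());
```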
"Every conversation, safely persisted to JSONL"
Sessions are stored as append-only JSONL files under ~/.config/opta/daemon/sessions/. Each line is a timestamped event envelope. Sessions survive daemon restarts and can be replayed from any point using the event cursor (afterSeq). Use opta sessions list to browse all sessions with metadata.
- Session files are human-readable — pipe through `jq` for inspection
- Resume any session with `opta tui --session <id>`
- Export to markdown with `opta sessions export <id>`
- Sessions are never auto-deleted — archive with `opta sessions prune`
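Replaying a session file from an event cursor, as described above, reduces to filtering JSONL lines by sequence number. The envelope fields (`seq`, `ts`, `type`) are assumptions inferred from the `afterSeq` cursor; the real schema may carry more:

```typescript
// Sketch: replay a JSONL session log strictly after a cursor position.
interface SessionEvent {
  seq: number;
  ts: string;
  type: string;
  [k: string]: unknown;
}

function replayFrom(jsonl: string, afterSeq: number): SessionEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0) // JSONL: one event per line
    .map((line) => JSON.parse(line) as SessionEvent)
    .filter((ev) => ev.seq > afterSeq); // resume strictly after the cursor
}
```

Append-only JSONL makes this cheap: replay is a linear scan, and a crash mid-write corrupts at most the final line.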
"Real-time tokens, tool calls, and turn events over WS"
Connect to ws://127.0.0.1:9999/v3/sessions/{id}/stream for live events. Events include turn.token (each streaming token), turn.done (final stats with token counts and speed), tool.call, tool.result, and turn.error. Reconnect with afterSeq to resume without re-delivery.
- Store `lastReceivedSeq` on disconnect; pass it as `afterSeq` on reconnect
- `turn.done` includes `tokens`, `elapsed`, and `tokensPerSec`
- Code Desktop subscribes to this stream for its live UI updates
- Third-party tools can build on this API — see the daemon interop contract
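The reconnect bookkeeping the bullets describe is small enough to sketch: remember the highest `seq` seen, then rebuild the stream URL with `afterSeq` (plus the `?token=` auth param from the daemon section). Combining both query params this way is an assumption:

```typescript
// Sketch: cursor tracking for resumable WebSocket streaming.
class StreamCursor {
  private lastReceivedSeq = 0;

  record(seq: number): void {
    // Keep the highest seq so out-of-order handling stays safe.
    if (seq > this.lastReceivedSeq) this.lastReceivedSeq = seq;
  }

  reconnectUrl(sessionId: string, token: string): string {
    return (
      `ws://127.0.0.1:9999/v3/sessions/${sessionId}/stream` +
      `?token=${token}&afterSeq=${this.lastReceivedSeq}`
    );
  }
}
```

Call `record(ev.seq)` for every event received; on disconnect, open a new socket at `reconnectUrl(...)` and the daemon resumes without re-delivery.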
"Execute files, run commands, browse — with approval gates"
When the model requests a tool call, the daemon routes it through the permission gate before executing. Read operations (read_file, list_dir) auto-approve. Write/execute operations (write_file, run_command, browser_navigate) require confirmation unless a policy grants them. Tool results stream back to the model in the same turn.
- "Always Allow" adds a persistent policy entry for that tool
- In Do mode, safe read tools auto-approve without prompting
- Tool calls run in parallel when the model requests multiple (up to 8 concurrent)
- View the full tool call log with `opta sessions tools <id>`
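The read/write split the gate applies can be sketched as a simple decision function. The tool names come from the paragraph above; the verdict names and policy shape are illustrative:

```typescript
// Sketch: the permission gate's routing decision.
// Reads auto-approve; write/execute ops need a policy or a prompt.
const AUTO_APPROVE = new Set(["read_file", "list_dir"]);

type Verdict = "auto-approve" | "policy-approve" | "ask-user";

function gate(tool: string, policies: Set<string>): Verdict {
  if (AUTO_APPROVE.has(tool)) return "auto-approve"; // safe read operations
  if (policies.has(tool)) return "policy-approve";   // e.g. "Always Allow"
  return "ask-user";                                 // write/execute ops
}
```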
"Your Mac Studio as an always-on inference engine"
When LMX is configured, requests route to 192.168.188.11:1234 — your Mac Studio running MLX models. The daemon caches LMX preflight health checks for 10 seconds. If LMX is unreachable, the daemon automatically falls back to the configured cloud provider with no user action required.
- LMX serves OpenAI-compatible `/v1/chat/completions` — any model name works
- Set `OPTA_LMX_HOST` to override the default `192.168.188.11`
- Check LMX health with `opta doctor` — includes an LMX latency ping
- LMX runs on Mono512 (Mac Studio M3 Ultra, 512GB RAM) for peak throughput
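The 10-second preflight cache and cloud fallback described above might be structured like this. The clock and checker are injected so the TTL logic is testable; the checker callback stands in for the real HTTP preflight:

```typescript
// Sketch: cached LMX health check with automatic cloud fallback.
class LmxRouter {
  private lastCheck = -Infinity;
  private healthy = false;

  constructor(
    private check: () => boolean,           // real impl: HTTP preflight to LMX
    private now: () => number = Date.now,
    private ttlMs = 10_000,                 // doc: checks cached for 10 seconds
  ) {}

  pickProvider(): "lmx" | "cloud" {
    if (this.now() - this.lastCheck > this.ttlMs) {
      this.healthy = this.check();          // refresh the cached verdict
      this.lastCheck = this.now();
    }
    return this.healthy ? "lmx" : "cloud";  // fall back with no user action
  }
}
```

The cache matters because a preflight on every request would add a network round-trip per turn; a 10s TTL keeps routing decisions nearly free.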
"LMX local, Anthropic cloud, OpenAI — one unified router"
The Provider Manager normalizes requests across LMX, Anthropic, and OpenAI-compatible providers. It handles streaming format differences, token counting normalization, and provider-specific error codes. Switching providers only requires changing the model name — no API surface changes in your session or daemon code.
- Provider priority: LMX first (if healthy), then cloud fallback order
- Add providers with `opta config add-provider`
- Each provider credential lives in the Vault — never in config files
- Provider normalization means the same streaming code works for all
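Handling streaming format differences boils down to mapping each provider's chunk shape onto one event type. The raw shapes below are simplified assumptions for each API family, not exact wire formats:

```typescript
// Sketch: normalize provider-specific stream chunks to one shape.
interface UnifiedChunk {
  text: string;
  provider: string;
}

function normalizeChunk(provider: string, raw: any): UnifiedChunk {
  switch (provider) {
    case "anthropic": // Anthropic streams delta events with a text field
      return { text: raw.delta?.text ?? "", provider };
    case "lmx":       // LMX is OpenAI-compatible, so both share one branch
    case "openai":
      return { text: raw.choices?.[0]?.delta?.content ?? "", provider };
    default:
      throw new Error(`unknown provider: ${provider}`);
  }
}
```

Everything downstream (the TUI, token counting, session persistence) consumes `UnifiedChunk`, which is why the same streaming code works for all providers.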
"Watch the model reason before it responds"
For supported models (Claude 3.7+ and compatible LMX models), Opta streams thinking blocks interleaved with response tokens. Thinking blocks appear with an indented, muted violet treatment — collapsible via a toggle key. The TUI tracks thinking token counts separately in the turn stats so you can see the reasoning overhead.
- Enable with `opta config set thinking true` for a session
- Thinking tokens count toward context — monitor them in long sessions
- Collapse thinking blocks with the toggle to reduce visual noise
- Significantly improves accuracy on complex multi-step reasoning tasks
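Tracking thinking tokens separately from response tokens, as the turn stats do, is a matter of tagging each streamed token with its block kind. The event and field names here are assumptions modeled on the `turn.token` event from the streaming section:

```typescript
// Sketch: separate accounting for thinking vs. response tokens.
interface TurnStats {
  thinkingTokens: number;
  responseTokens: number;
}

function tallyTokens(
  events: { type: string; block: "thinking" | "text" }[],
): TurnStats {
  const stats: TurnStats = { thinkingTokens: 0, responseTokens: 0 };
  for (const ev of events) {
    if (ev.type !== "turn.token") continue;
    if (ev.block === "thinking") stats.thinkingTokens++; // reasoning overhead
    else stats.responseTokens++;
  }
  return stats;
}
```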
"Peekaboo: full web control via MCP"
Opta integrates with Peekaboo — an MCP server bridging to Playwright. When enabled, the model can navigate URLs, click elements, fill forms, take screenshots, extract page content, and evaluate JavaScript. Browser sessions are sandboxed per agent loop and tracked in the session store alongside other tool calls.
- Enable with `opta config set mcp.peekaboo true`
- Browser sessions are sandboxed — no persistent cookies between loops
- Use `browser_snapshot` for DOM analysis before `browser_click`
- Screenshots are saved to `~/.config/opta/browser-snapshots/` for review
"Multi-step autonomous task execution"
In Do mode, the model enters an agent loop: generate → tool calls → results → generate again, until the task is complete or a step limit is hit. The loop surfaces a live progress timeline in the TUI showing each step, its tool calls, and current status. Loops can be paused or cancelled at any point.
- Set `opta config set max-steps 20` to adjust the step limit
- The model self-corrects when tool calls fail — review in the timeline
- Pause with `Ctrl+C` — progress is saved; resume with `opta tui --session <id>`
- Use `opta do --watch` for desktop notifications on completion
"API keys stored in OS keychain, never in plaintext"
The Vault stores all credentials — API keys, bearer tokens, provider secrets — in the OS-native keychain: the Keychain on macOS, DPAPI/Credential Manager on Windows. Keys are retrieved at runtime and never written to config files or logs. The `opta account` command manages vault entries with full add/list/remove support.
- Keys are scoped to your OS user account — inaccessible to other users
- `opta account rotate anthropic` replaces a key without interrupting active sessions
- Never set API keys as environment variables — the Vault is more secure
- Vault entries sync to other devices via the Opta Accounts sync layer
"Fine-grained control over every tool the model can invoke"
Every tool is categorized by risk level. Read operations always auto-approve. Write/execute operations require confirmation or a matching policy. Policy files in ~/.config/opta/policies/ can pre-approve specific tools, enabling fully autonomous runs in trusted project environments without per-call prompts.
- Create a project policy at `.opta/policy.yaml` to allow tools per-project
- `opta doctor --policy` validates your policy files for syntax errors
- Policies stack: project → user → defaults (project wins on conflict)
- In CI/CD, set `OPTA_POLICY=unrestricted` for fully autonomous operation
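The project → user → defaults stacking, with project winning on conflict, maps cleanly onto layered merging. The policy shape here (tool name → allow/deny) is an assumption for illustration:

```typescript
// Sketch: layered policy resolution. Later spreads override earlier
// ones, so the project layer wins on any conflicting tool entry.
type Policy = Record<string, "allow" | "deny">;

function resolvePolicy(project: Policy, user: Policy, defaults: Policy): Policy {
  return { ...defaults, ...user, ...project };
}
```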