Centralized SSO for the entire optalocal.com ecosystem — OAuth, vault encryption, device control plane, and CLI auth in one system.
Supabase handles the OAuth provider integration. When a user clicks "Sign in with Google", Supabase generates a PKCE code challenge and redirects to Google's authorization endpoint. After consent, Google redirects back to /auth/callback with a code, which is exchanged for a Supabase session.
```
# Auth callback route (simplified)
# GET /auth/callback?code=…&next=/dashboard

supabase.auth.exchangeCodeForSession(code)
→ Supabase issues access_token + refresh_token

# Cookies set on response:
Set-Cookie: sb-access-token=…; domain=.optalocal.com; secure; sameSite=lax; httpOnly

# Redirect user to their destination
sanitizeRedirect(next)   # validates vs allowlist
→ 302 to /dashboard (or safe fallback /)
```
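The PKCE step can be made concrete with a short sketch of the "S256" challenge from RFC 7636. `pkcePair` and `base64url` are illustrative helpers, not part of the Supabase API; Supabase generates and stores the verifier internally.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Illustrative PKCE S256 pair (RFC 7636). The verifier stays with the client;
// only the challenge is sent in the authorization redirect.
function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/g, "");
}

function pkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // kept locally until token exchange
  const challenge = base64url(createHash("sha256").update(verifier).digest()); // sent to Google
  return { verifier, challenge };
}
```

During the code exchange, the provider hashes the presented verifier and compares it to the stored challenge, which is what prevents a stolen authorization code from being redeemed by another party.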
When you run `opta account login`, the CLI binds a temporary HTTP server to localhost:9998, then opens the accounts portal with `?redirect_to=http://localhost:9998/callback`. After OAuth completes, the portal redirects to the CLI's local listener with the session token.
```
# CLI flow
opta account login
◆ Starting local callback server on :9998…
◆ Opening accounts.optalocal.com in browser…

# User completes Google/Apple/email auth
◆ Callback received: ?token=opta_tok_01J5…
✓ Authenticated as [email protected]
✓ Token stored in OS keychain

# Token is now used for all daemon requests
Authorization: Bearer opta_tok_01J5…
```
The `redirect_to=http://localhost:9998` value is validated against the accounts app allowlist before the OAuth flow starts. The token is only ever sent to the loopback interface, never over the network.
The @supabase/ssr package splits the Supabase client into three variants: a browser client, a server client, and a middleware client. The middleware manages cookies on every request, transparently refreshing an expired access token with the refresh token.
```
# Cookie architecture
sb-access-token        # JWT, 1h expiry
sb-refresh-token       # long-lived, rotated on use
domain=.optalocal.com  # prod only (omitted on localhost)
secure=true            # HTTPS only
sameSite=lax           # CSRF protection

# Middleware auto-refresh (Next.js)
# src/lib/supabase/middleware.ts
supabase.auth.getUser()
→ if token expired: refreshSession() → new cookies set
→ if no session: redirect to /sign-in
```
Middleware must call getUser() (which validates the JWT against the Supabase Auth server), not getSession() (which only reads the local cookie). This prevents session forgery via tampered cookies.
Setting domain=.optalocal.com on the Supabase session cookies means the browser automatically sends them to every *.optalocal.com subdomain. Each app's Next.js middleware validates the session independently with getUser().
```
# User signs in at accounts.optalocal.com
# Cookie set: domain=.optalocal.com

# User visits lmx.optalocal.com
GET https://lmx.optalocal.com/dashboard
Cookie: sb-access-token=eyJ…   ← auto-sent by browser

# LMX Dashboard middleware
supabase.auth.getUser()
→ { id: "user_…", email: "matt@…" }
→ Page renders with authenticated session
→ No second login required

# Works for: 1L, 1O, 1R, 1S, 1T, 1U, 1V, 1X
```
The domain attribute is omitted in development (localhost). Each app runs on its own localhost port without a shared cookie domain, so there is no cross-app SSO locally; this is expected behavior.
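A minimal sketch of that environment-dependent cookie configuration. The helper name and shape are illustrative, not the actual Opta source:

```typescript
// Hypothetical helper showing prod-vs-dev cookie scoping for the Supabase
// session cookies described above.
interface SessionCookieOptions {
  domain?: string;
  secure: boolean;
  sameSite: "lax";
  httpOnly: boolean;
}

function sessionCookieOptions(isProd: boolean): SessionCookieOptions {
  return {
    // domain is set only in production; on localhost the cookie stays host-scoped
    ...(isProd ? { domain: ".optalocal.com" } : {}),
    secure: isProd,   // HTTPS only in production
    sameSite: "lax",  // CSRF protection
    httpOnly: true,   // not readable from client-side JS
  };
}
```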
The vault encryption key is derived from a combination of the Supabase user ID and a device fingerprint using HKDF. This means the same user on a different device needs to re-authenticate to derive the correct key — providing per-device isolation even if the Supabase ciphertext is leaked.
```
# Key derivation (src/accounts/vault.ts)
material = userId + ":" + deviceFingerprint
key = HKDF-SHA256(material, salt="opta-vault-v1", length=32)

# Encryption
iv = randomBytes(12)
ciphertext = AES-256-GCM(plaintext, key, iv)
blob = JSON.stringify({ iv, ciphertext, tag })

# Stored in Supabase as base64 blob
# Supabase never sees plaintext

# Decryption (on vault.pull)
key = HKDF-SHA256(userId + deviceFingerprint)
plaintext = AES-256-GCM-decrypt(ciphertext, key, iv, tag)
```
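The scheme above can be sketched with Node's built-in crypto. Whether "opta-vault-v1" is passed as the HKDF salt or the info field is an assumption read off the pseudocode, as is the exact shape of the stored blob:

```typescript
import { createCipheriv, createDecipheriv, hkdfSync, randomBytes } from "node:crypto";

// Sketch of per-device key derivation: same user + same device fingerprint
// always yields the same 256-bit key; a different device yields a different key.
function deriveVaultKey(userId: string, deviceFingerprint: string): Buffer {
  const material = `${userId}:${deviceFingerprint}`;
  return Buffer.from(hkdfSync("sha256", material, "opta-vault-v1", "", 32));
}

interface VaultBlob { iv: Buffer; ciphertext: Buffer; tag: Buffer; }

function encryptVault(plaintext: string, key: Buffer): VaultBlob {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptVault(blob: VaultBlob, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag); // GCM verifies integrity before releasing plaintext
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString("utf8");
}
```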
The vault.pull daemon operation is registered in src/daemon/operations/registry.ts. It calls pullVaultKeys() from src/accounts/vault.ts, which authenticates with Supabase using the stored bearer token, downloads the encrypted blob, decrypts it, and writes each key to the OS keychain via the native keytar module.
```
# Trigger via daemon operation
POST /v3/operations
{ "op": "vault.pull" }

# Response schema (Zod-validated)
{
  "ok": true,
  "syncedKeys": ["ANTHROPIC_API_KEY", "OPENAI_API_KEY"],
  "failedKeys": [],
  "skippedKeys": []
}

# Also triggers automatically after:
opta account login   ← on successful auth callback
opta account sync    ← explicit manual sync
```
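That response shape could be validated like this. This is a hand-rolled runtime check standing in for the actual Zod schema; the field names follow the example response, everything else is illustrative:

```typescript
// Runtime validation of the vault.pull result, mirroring the documented shape.
interface VaultPullResult {
  ok: boolean;
  syncedKeys: string[];
  failedKeys: string[];
  skippedKeys: string[];
}

function isStringArray(v: unknown): v is string[] {
  return Array.isArray(v) && v.every((x) => typeof x === "string");
}

function parseVaultPullResult(raw: unknown): VaultPullResult {
  if (typeof raw !== "object" || raw === null) throw new Error("invalid vault.pull response");
  const r = raw as Record<string, unknown>;
  if (
    typeof r.ok !== "boolean" ||
    !isStringArray(r.syncedKeys) ||
    !isStringArray(r.failedKeys) ||
    !isStringArray(r.skippedKeys)
  ) {
    throw new Error("invalid vault.pull response");
  }
  return { ok: r.ok, syncedKeys: r.syncedKeys, failedKeys: r.failedKeys, skippedKeys: r.skippedKeys };
}
```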
The vault manages provider credentials centrally. After vault.pull, keys are available to both the CLI (reads from keychain) and the daemon (uses them for provider routing). New providers added via the accounts portal are available on all devices after the next sync.
```
# Keys managed in vault
ANTHROPIC_API_KEY   → claude-sonnet-4-6 access
OPENAI_API_KEY      → gpt-4o fallback
LMX_TOKEN           → LMX instance auth
GITHUB_TOKEN        → MCP github server
BRAVE_SEARCH_KEY    → web search backend
# …and user-defined custom keys

# OS keychain storage (keytar)
keytar.setPassword("opta", "ANTHROPIC_API_KEY", value)
keytar.getPassword("opta", "ANTHROPIC_API_KEY")

# Resolution order: env var, then keychain, then config
process.env.ANTHROPIC_API_KEY
  || keychain.get("ANTHROPIC_API_KEY")
  || config.providers.anthropic.apiKey
```
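The resolution order in the last snippet can be isolated into a tiny helper. The injected lookup functions are illustrative stand-ins for process.env, keytar, and the config file:

```typescript
// A lookup source: returns the key's value or undefined if absent.
type Lookup = (name: string) => string | undefined;

// First defined value wins: env var, then OS keychain, then config file.
function resolveKey(name: string, env: Lookup, keychain: Lookup, config: Lookup): string | undefined {
  return env(name) ?? keychain(name) ?? config(name);
}
```

Injecting the sources keeps the precedence logic pure and testable, independent of the real keychain or environment.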
When the CLI authenticates and the daemon bridge is enabled, the daemon registers itself as a paired device with the accounts control plane API (/api/pairing). The control plane assigns a deviceId and bridgeToken. These credentials are stored in the bridge state machine.
```
# Pairing request (from daemon bridge)
POST /api/pairing
{
  "userId": "user_01J5…",
  "deviceName": "Opta48 MacBook",
  "platform": "darwin",
  "cliVersion": "0.5.2"
}

# Response
{
  "deviceId": "dev_01J5…",
  "bridgeToken": "brdg_…",
  "sessionId": "sess_01J5…"   ← OpenClaw scope
}

# Bridge state: offline → pairing → connected
```
The bridge lifecycle is offline → pairing → connected, with two recovery states: degraded (transient errors) and unauthorized (auth failures). Each state maps to the broader ActivationState shown in the desktop UI.
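That lifecycle can be sketched as an explicit transition table. The allowed transitions below are inferred from the state descriptions, not taken from the real bridge state machine:

```typescript
type BridgeState = "offline" | "pairing" | "connected" | "degraded" | "unauthorized";

// Assumed transition table for illustration only.
const TRANSITIONS: Record<BridgeState, BridgeState[]> = {
  offline: ["pairing"],
  pairing: ["connected", "unauthorized", "offline"],
  connected: ["degraded", "unauthorized", "offline"],
  degraded: ["connected", "offline"],   // transient errors can recover
  unauthorized: ["pairing", "offline"], // auth failures require re-pairing
};

function transition(from: BridgeState, to: BridgeState): BridgeState {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`invalid bridge transition: ${from} -> ${to}`);
  }
  return to;
}
```

Encoding the table explicitly makes illegal jumps (such as offline straight to connected) fail loudly instead of leaving the bridge in an inconsistent state.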
The bridge worker in src/daemon/bridge-worker.ts continuously polls /api/bridge/commands on the control plane. Each command is dispatched to the daemon's operation registry and the result is POSTed back. Each polling session carries a unique connectionId, so a credential change is detected and the polling loop restarts.
```
# Bridge polling loop
GET /api/bridge/commands?deviceId=dev_01J5…
Authorization: Bearer brdg_…
→ [{ commandId, op, payload }, …]

# Dispatch to daemon operation
result = await registry.run(op, payload)

# Post result back
POST /api/bridge/results
{ commandId, result, deviceId }

# Reconnect with new connectionId if bridge state changes
```
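The dispatch step can be sketched without any network I/O. The registry API (register/run) is modeled on the description above, not the actual src/daemon/operations/registry.ts:

```typescript
// A registered operation: takes a payload, returns a result.
type Operation = (payload: unknown) => unknown;

class OperationRegistry {
  private ops = new Map<string, Operation>();

  register(name: string, op: Operation): void {
    this.ops.set(name, op);
  }

  run(name: string, payload: unknown): unknown {
    const op = this.ops.get(name);
    if (!op) throw new Error(`unknown operation: ${name}`);
    return op(payload);
  }
}

interface BridgeCommand { commandId: string; op: string; payload: unknown; }

// One poll iteration: run each fetched command, collect results to POST back.
function dispatchCommands(registry: OperationRegistry, commands: BridgeCommand[]) {
  return commands.map((cmd) => ({ commandId: cmd.commandId, result: registry.run(cmd.op, cmd.payload) }));
}
```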
The bridge worker derives a deterministic agent ID (prefixed opta-bridge-) and injects it as the X-OpenClaw-Agent-ID header and as the user field in the request body.
OpenClaw is the Opta identity and request attribution system. When the bridge worker receives a command, it resolves a scope string (from the command's explicit scope, actor field, or session ID), hashes it with SHA-256, and uses the first 24 hex characters as the agent ID. The deterministic design means the same session always produces the same agent ID.
```
# Agent ID derivation (src/utils/openclaw-scope.ts)
scope = command.scope || command.actor || sessionId
normalized = scope.trim()
hash = SHA-256(normalized)
agentId = "opta-bridge-" + hash.slice(0, 24)
→ "opta-bridge-a3f912bd04e8c721f3a0bc47"

# Injected into LMX requests
X-Client-ID:         opta-bridge-a3f912bd04e8c721f3a0bc47
X-OpenClaw-Agent-ID: opta-bridge-a3f912bd04e8c721f3a0bc47
body.user:           "opta-bridge-a3f912bd04e8c721f3a0bc47"

# Why deterministic?
→ Rate limiting by agent across requests
→ Usage attribution in LMX metrics
→ Audit trail correlation across system layers
```
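The derivation is small enough to reproduce with Node's crypto; this sketch follows the steps shown above (trim, SHA-256, first 24 hex characters, opta-bridge- prefix):

```typescript
import { createHash } from "node:crypto";

// Deterministic agent ID: the same scope string always maps to the same ID.
function deriveAgentId(scope: string): string {
  const normalized = scope.trim();
  const hash = createHash("sha256").update(normalized).digest("hex");
  return "opta-bridge-" + hash.slice(0, 24);
}
```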
The `sanitizeRedirect()` function in lib/allowed-redirects.ts validates every `next` and `redirect_to` query parameter. Absolute URLs (including deep links like opta-life://) are allowed only if they appear in the explicit allowlist; everything else falls back to /.
```
# Allowed redirects (allowlist)
*.optalocal.com paths      ← all subdomains
opta-life://…              ← iOS deep link
opta://…                   ← Desktop deep link
http://localhost:9998/…    ← CLI callback

# sanitizeRedirect logic
function sanitizeRedirect(next: string): string {
  if (!next) return "/"
  if (isInAllowlist(next)) return next
  if (isRelativePath(next)) return next   // safe
  return "/"                              // fallback
}
```
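A runnable sketch of that logic follows. The allowlist entries mirror the table above, while the exact matching rules (prefix checks, hostname suffix test) are assumptions, not the real lib/allowed-redirects.ts:

```typescript
// Deep-link and loopback prefixes from the allowlist above.
const ALLOWED_PREFIXES = ["opta-life://", "opta://", "http://localhost:9998/"];

function isAllowedAbsolute(url: string): boolean {
  if (ALLOWED_PREFIXES.some((p) => url.startsWith(p))) return true;
  try {
    const { protocol, hostname } = new URL(url);
    // any https *.optalocal.com subdomain (or the apex) is allowed
    return protocol === "https:" && (hostname === "optalocal.com" || hostname.endsWith(".optalocal.com"));
  } catch {
    return false; // not an absolute URL
  }
}

function sanitizeRedirect(next: string | null): string {
  if (!next) return "/";
  if (isAllowedAbsolute(next)) return next;
  // plain relative paths are safe; "//evil.com" is protocol-relative, so exclude it
  if (next.startsWith("/") && !next.startsWith("//")) return next;
  return "/"; // fallback
}
```

Note the suffix check on hostname rather than a substring match, which would wrongly admit hosts like evil.com/optalocal.com paths or optalocal.com.evil.com.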
Without this validation, an attacker could craft ?next=https://evil.com and steal auth codes after OAuth. The allowlist approach is stricter than a same-origin check because it also covers app deep links.
Every Opta Next.js app that needs auth uses the same three-client pattern from @supabase/ssr. This ensures no session is trusted from a client-only cookie read — the middleware always validates with Supabase's server before serving protected pages.
```
# Three client variants used across all apps

# 1. Browser client (Client Components)
lib/supabase/client.ts → createBrowserClient()

# 2. Server client (Server Components, Route Handlers)
lib/supabase/server.ts → createServerClient() + cookies()

# 3. Middleware (every request)
lib/supabase/middleware.ts → createServerClient()
supabase.auth.getUser()   ← validates with Supabase server
response.cookies.set(…)   ← refreshes cookies if needed

# Critical: never use getSession() in middleware
# getSession() reads the cookie without server validation
```
The createServerClient cookie adapter in each app sets `domain: '.optalocal.com'` only in production. This is the sole mechanism for cross-subdomain SSO, so it must be configured consistently across all apps.