2026

Mission Control

A voice-powered desktop agent that manages multiple development projects through natural language, with ambient health monitoring and per-project AI subagents.

Electron · Claude Agent SDK · Vue 3 · MCP · SQLite · node-pty

The Problem

I manage several active projects simultaneously — a Shopify Plus store, an affiliate dashboard, landing pages, side projects. Each has its own deployment target, test suite, integration stack, and failure modes. The daily ritual is the same: open terminal, check git status, check Vercel, check Shopify admin, check Klaviyo, check GoAffPro. Repeat for each project. It's 20-30 minutes of context-switching across browser tabs and terminals before actual work begins.

The question: what if a single voice command — "how's everything doing?" — could check all projects in parallel through AI agents that understand each project's stack, and proactively alert me when something breaks?


System Design

The application is an Electron desktop app with a Vue 3 renderer and a multi-layer agent system running in the main process.

The architecture has four distinct layers that communicate through Electron IPC:

Voice layer — a VoiceBridge class that manages the STT/TTS lifecycle. It supports three TTS providers (local Kokoro, OpenAI, ElevenLabs) and local Whisper for STT, all configurable per session. Voice transcriptions flow into the orchestrator; agent responses flow back to TTS with per-project voice IDs, so different projects can have distinct voices in ambient mode. Audio data is streamed to the renderer as base64 for Web Audio API playback.

Orchestration layer — an intent parser that takes voice or text input, classifies it into one of 11 intent types (status_all, status_project, run_tests, deploy_check, fix_issue, metrics, ambient_on/off, git_log, terminal_run, custom), matches it to a project via alias lookup against the SQLite project registry, and routes to the appropriate handler. The parser uses regex pattern matching rather than an LLM for intent classification — a deliberate choice because classification needs to be instant (sub-10ms) and deterministic. The LLM's reasoning power is reserved for the subagent execution where it actually matters.
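As a sketch, the regex-first classification might look like the following. The intent names come from the list above; the specific patterns, alias table, and function names are illustrative, not the actual implementation:

```typescript
// Illustrative sketch of regex-based intent parsing with alias lookup.
type Intent =
  | "status_all" | "status_project" | "run_tests" | "deploy_check"
  | "fix_issue" | "metrics" | "ambient_on" | "ambient_off"
  | "git_log" | "terminal_run" | "custom";

interface ParsedIntent {
  intent: Intent;
  project?: string; // canonical slug resolved via alias lookup
}

// Ordered pattern table: first match wins, so specific patterns go first.
const PATTERNS: Array<[RegExp, Intent]> = [
  [/how('s| is) everything/i, "status_all"],
  [/run (the )?tests/i, "run_tests"],
  [/check (the )?deploy/i, "deploy_check"],
  [/ambient (mode )?on/i, "ambient_on"],
  [/ambient (mode )?off/i, "ambient_off"],
  [/git log|recent commits/i, "git_log"],
];

// In the real system this table comes from the SQLite project registry;
// the entries here are placeholders.
const ALIASES: Record<string, string> = {
  nomo: "nomo-store",
  dashboard: "affiliate-dashboard",
};

function parse(input: string): ParsedIntent {
  const lower = input.toLowerCase();
  const alias = Object.keys(ALIASES).find((a) => lower.includes(a));
  const project = alias ? ALIASES[alias] : undefined;
  for (const [re, intent] of PATTERNS) {
    if (re.test(input)) return { intent, project };
  }
  // Anything unmatched falls through to the catch-all intent.
  return { intent: "custom", project };
}
```

Because the table is a flat array of precompiled regexes, classification is a handful of `test()` calls with no model round-trip, which is where the sub-10ms figure comes from.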

Agent runner layer — wraps the Claude Agent SDK's query() function to execute per-project subagents. Each subagent runs with cwd set to the project's directory, its own system prompt, a configurable set of MCP servers, and a budget cap ($1 per session by default). The runner streams tool usage activity back to the renderer via IPC for real-time feedback in the activity feed. It also manages interactive sessions through ClaudeSDKClient for multi-turn debugging conversations that maintain context across messages.
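A rough sketch of the runner's shape, with the SDK call kept behind an injected function (I'm not reproducing the real `query()` signature here); the event shape, config fields, and names are assumptions for illustration:

```typescript
// Sketch of a per-project subagent runner: streams activity to the
// renderer and enforces a per-session budget cap. All names illustrative.
interface SubagentConfig {
  cwd: string;          // project directory
  systemPrompt: string;
  mcpServers: string[];
  budgetUsd: number;    // default $1 per session
}

interface AgentEvent {
  type: "tool_use" | "text" | "cost";
  payload: string;
  costUsd?: number;
}

type QueryFn = (prompt: string, cfg: SubagentConfig) => AsyncIterable<AgentEvent>;

async function runSubagent(
  prompt: string,
  cfg: SubagentConfig,
  query: QueryFn,
  onActivity: (e: AgentEvent) => void, // forwarded to renderer via IPC
): Promise<string> {
  let spent = 0;
  let text = "";
  for await (const event of query(prompt, cfg)) {
    onActivity(event); // real-time feedback for the activity feed
    if (event.type === "text") text += event.payload;
    if (event.type === "cost") spent += event.costUsd ?? 0;
    if (spent > cfg.budgetUsd) break; // budget cap: stop past the limit
  }
  return text;
}
```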

Ambient monitoring layer — an AmbientMonitor class that runs periodic health checks on all projects at a configurable interval (default 5 minutes). Each check dispatches a subagent with a constrained prompt that asks for structured JSON alerts. The monitor classifies alert severity, filters below a configurable threshold, and pushes new alerts to the renderer. In ambient mode, critical alerts are automatically spoken via the voice bridge — so if a deploy fails while I'm working on something else, the app tells me.
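The severity filtering step can be sketched as follows (a minimal illustration; the alert shape and threshold semantics are assumptions based on the description above):

```typescript
// Hypothetical sketch of alert filtering: keep alerts at or above the
// configured threshold, and speak only critical ones in ambient mode.
type Severity = "info" | "warning" | "critical";

interface Alert {
  project: string;
  severity: Severity;
  message: string;
}

const SEVERITY_RANK: Record<Severity, number> = {
  info: 0,
  warning: 1,
  critical: 2,
};

function filterAlerts(alerts: Alert[], threshold: Severity): Alert[] {
  const min = SEVERITY_RANK[threshold];
  return alerts.filter((a) => SEVERITY_RANK[a.severity] >= min);
}

function shouldSpeak(alert: Alert, ambientMode: boolean): boolean {
  return ambientMode && alert.severity === "critical";
}
```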


The MCP Server Architecture

Each project can declare which MCP servers it needs. The runner maintains a registry of available servers and attaches the relevant ones to each subagent at execution time. I built three custom MCP servers for the integrations I use daily:

Shopify MCP — wraps the Admin GraphQL API. Exposes tools for orders, products, inventory, subscriptions, fulfillment, and metafields. The agent can ask "how many orders came in today" and the MCP server translates that into a GraphQL query against the store.

Klaviyo MCP — wraps the Klaviyo REST API v3. Exposes flows, campaigns, lists, metrics, and event tracking. The agent can check if an email flow is failing or pull campaign performance.

GoAffPro MCP — wraps the GoAffPro affiliate API. Exposes affiliate data, commission tracking, payouts, and referral stats. The agent can report on pending commissions or top-performing affiliates.

The key design choice: MCP servers are project-scoped, not global. The Nomo store project gets Shopify + Klaviyo + GoAffPro. The affiliate dashboard project gets only GoAffPro. Side projects get none. This prevents tool confusion — a subagent working on a landing page project can't accidentally query Shopify data. The project config in SQLite declares which servers each project uses, and the runner only attaches those.
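The attachment logic reduces to a registry intersection, roughly like this (server definitions and the `attachServers` name are illustrative; the `npx` command for Vercel is a placeholder, not the real package name):

```typescript
// Illustrative sketch of project-scoped MCP attachment: a global registry
// of available servers, filtered by what each project declares in SQLite.
interface McpServerDef {
  name: string;
  command: string; // a local script for custom servers, or an npx package
}

const REGISTRY: Record<string, McpServerDef> = {
  shopify: { name: "shopify", command: "node mcp/shopify.js" },
  klaviyo: { name: "klaviyo", command: "node mcp/klaviyo.js" },
  goaffpro: { name: "goaffpro", command: "node mcp/goaffpro.js" },
  vercel: { name: "vercel", command: "npx <vercel-mcp-package>" }, // placeholder
};

// `declared` comes from the project's JSON column in SQLite.
function attachServers(declared: string[]): McpServerDef[] {
  // Unknown names are dropped rather than failing the whole run.
  return declared
    .map((name) => REGISTRY[name])
    .filter((def): def is McpServerDef => def !== undefined);
}
```

A landing-page project declaring `[]` simply gets no tools attached, which is what makes cross-project tool confusion structurally impossible rather than merely discouraged.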

Third-party MCP servers (Vercel, GitHub) are registered the same way but use their official npx packages rather than custom implementations.


Data Persistence

Projects are stored in SQLite via better-sqlite3 in WAL mode with foreign keys. The schema is two tables: projects (name, slug, path, aliases as JSON, MCP servers as JSON, voice ID, health checks, description) and setup_tasks (per-project shell commands with status tracking: pending/running/done/failed).

On first launch, the database seeds with default projects. Users add new projects via a folder browser dialog that auto-detects the project type, package manager, and frameworks by scanning for config files (package.json, nuxt.config.ts, next.config.js, shopify.app.toml, etc.). This detection feeds into the project's system prompt so the subagent knows what stack it's working with.

The choice to use SQLite over a JSON file or in-memory store is about reliability across Electron's process lifecycle. SQLite survives crashes, supports concurrent reads from multiple IPC handlers, and gives me migration paths as the schema evolves.


The Mock/Live Toggle

The system has a feature flag (USE_MOCK) that switches between mock responses and real Agent SDK execution. In mock mode, every project has pre-written responses for each intent type that simulate realistic output — complete with specific numbers, timestamps, and project-appropriate details.

This exists because the Agent SDK requires authentication and API credits. During UI development, I needed the full interaction flow — voice input, intent parsing, project routing, response rendering, TTS output — without burning tokens. The mock responses preserve the exact data shape and timing characteristics of real agent responses, so the renderer doesn't know the difference.

This is also what makes the project demo-able without credentials. Anyone can clone the repo, run npm run dev, and interact with the full UI immediately.
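The dispatch itself is a thin switch, roughly like this (identifiers and the mock response text are illustrative; the point is that mock and live paths return the same shape):

```typescript
// Sketch of the flag-based mock/live dispatch.
const USE_MOCK = process.env.USE_MOCK !== "false"; // mock by default in dev

interface AgentResponse {
  project: string;
  intent: string;
  text: string;
}

// Pre-written responses keyed by project and intent, matching the real shape.
const MOCK_RESPONSES: Record<string, AgentResponse> = {
  "nomo-store:status_project": {
    project: "nomo-store",
    intent: "status_project",
    text: "All checks green. 14 orders today, last deploy 2h ago.",
  },
};

async function execute(
  project: string,
  intent: string,
  live: (p: string, i: string) => Promise<AgentResponse>,
): Promise<AgentResponse> {
  if (USE_MOCK) {
    return (
      MOCK_RESPONSES[`${project}:${intent}`] ?? {
        project,
        intent,
        text: "No mock response configured for this intent.",
      }
    );
  }
  return live(project, intent); // real Agent SDK execution
}
```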


Embedded Terminals

Each project has a dedicated PTY session managed through node-pty, rendered in the Vue frontend via xterm.js. The orchestrator can route terminal commands directly — "run npm test in nomo terminal" extracts the command, ensures a PTY exists for that project, and writes to it. Terminal output is available as context for agent prompts, so when you ask "what's failing in the tests," the agent can reference the actual terminal output rather than re-running the suite.
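The extraction step for that example phrasing can be sketched with a single pattern (the real parser handles more phrasings; the regex and names here are illustrative):

```typescript
// Sketch of "run <cmd> in <project> terminal" command extraction.
interface TerminalCommand {
  command: string;
  project: string; // fed into alias lookup to find or create the PTY
}

const TERMINAL_RE = /^run (.+?) in (?:the )?(\S+) terminal$/i;

function extractTerminalCommand(input: string): TerminalCommand | null {
  const m = TERMINAL_RE.exec(input.trim());
  if (!m) return null;
  return { command: m[1], project: m[2].toLowerCase() };
}
```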

The terminal manager is a singleton that handles PTY lifecycle — creation, data buffering (last 2KB of output for context), resize events, and cleanup on project removal.
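The 2KB tail buffer is the simplest part to sketch (class and method names are illustrative):

```typescript
// Sketch of the per-PTY tail buffer: keep only the most recent 2KB of
// output so agent prompts get fresh context without unbounded growth.
const MAX_BUFFER = 2048;

class OutputBuffer {
  private data = "";

  append(chunk: string): void {
    this.data += chunk;
    if (this.data.length > MAX_BUFFER) {
      // Drop the oldest output, keeping the newest MAX_BUFFER characters.
      this.data = this.data.slice(-MAX_BUFFER);
    }
  }

  tail(): string {
    return this.data;
  }
}
```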


What I'd Do Differently

The intent parser uses regex, which works for the ~15 command patterns I use daily but doesn't generalize. A production version would use a small, fast classifier (or even Haiku) for intent classification, falling back to the regex parser when the classifier is unavailable.

The ambient monitor's health check prompt asks the agent to return structured JSON, but JSON parsing from LLM output is inherently fragile. I added a fallback that scans for error keywords in unstructured output, but a more robust approach would use the SDK's structured output feature or a schema-enforced response format.
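The parse-then-scan fallback can be sketched as follows (alert shape, keyword list, and fence-stripping are illustrative assumptions about the approach described above):

```typescript
// Sketch of structured-JSON parsing with a keyword-scan fallback.
interface HealthAlert {
  severity: "info" | "warning" | "critical";
  message: string;
}

const ERROR_KEYWORDS = /\b(error|failed|failure|timeout|down)\b/i;
// Matches markdown code fences (three backticks, optional "json" tag),
// built with RegExp to avoid literal backticks in this snippet.
const FENCE = new RegExp("`{3}(?:json)?", "g");

function parseHealthOutput(raw: string): HealthAlert[] {
  // LLMs often wrap JSON in markdown fences; strip those before parsing.
  const stripped = raw.replace(FENCE, "").trim();
  try {
    const parsed = JSON.parse(stripped);
    if (Array.isArray(parsed)) return parsed as HealthAlert[];
  } catch {
    // fall through to the keyword scan below
  }
  if (ERROR_KEYWORDS.test(raw)) {
    return [{ severity: "warning", message: raw.slice(0, 200) }];
  }
  return [];
}
```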

The voice bridge currently has the TTS provider implementations inline. In production, I'd extract these into a provider interface with proper error handling, retry logic, and fallback chains (Kokoro fails → fall back to OpenAI → fall back to system say).
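That fallback chain is straightforward to express as a provider interface; this is a hypothetical sketch of the shape I'd extract to, with provider names mirroring the text above:

```typescript
// Illustrative provider interface for a TTS fallback chain
// (Kokoro, then OpenAI, then system `say`).
interface TtsProvider {
  name: string;
  speak(text: string): Promise<void>; // rejects on failure
}

async function speakWithFallback(
  text: string,
  providers: TtsProvider[],
): Promise<string> {
  const errors: string[] = [];
  for (const provider of providers) {
    try {
      await provider.speak(text);
      return provider.name; // report which provider actually ran
    } catch (err) {
      errors.push(`${provider.name}: ${String(err)}`);
    }
  }
  throw new Error(`All TTS providers failed: ${errors.join("; ")}`);
}
```

Retry logic per provider would slot in around the `speak()` call; the chain itself stays a plain ordered array.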


Numbers

Application code: ~9,700 lines of TypeScript + Vue
Custom MCP servers: 3 (Shopify, Klaviyo, GoAffPro)
Intent types: 11 routable intents
TTS providers: 3 (Kokoro, OpenAI, ElevenLabs)
Persistence: SQLite (2 tables, WAL mode)
IPC channels: 5 domains (agent, voice, project, tasks, terminal)

Stack

Electron 33 · Vue 3 (Composition API) · Pinia · vue-router · Claude Agent SDK · VoiceMode MCP · MCP SDK · better-sqlite3 · node-pty · xterm.js · Vite 6 · SCSS · TypeScript