Changelog

What's new in Nova.

Inference modes — bundled Ollama by default

Nova now asks how you’d like to use it on first run, and ships sensible defaults for each option:

  • hybrid (default) — bundles Ollama for local AI, falls back to cloud providers when needed. Best of both worlds. First-run downloads ~5.4 GB of starter models (qwen2.5:1.5b, qwen2.5:7b, nomic-embed-text).
  • local-only — bundles Ollama, never uses cloud. Privacy-first or offline-friendly. Same ~5.4 GB starter models.
  • cloud-only — does not bundle Ollama at all (no image pull, no container, no model downloads). Lightest footprint, requires cloud API keys.

After install, switching modes (and pointing Nova at an external Ollama / vLLM instance like http://192.168.x.y:11434) lives in the dashboard — Settings → AI & Models, no scripts. The first-install prompt is just the bootstrap step before the dashboard is running.

Under the hood: a single user-facing NOVA_INFERENCE_MODE knob now derives both COMPOSE_PROFILES (whether the bundled ollama Compose service ships and starts) and LLM_ROUTING_STRATEGY (how the gateway picks providers). The setup.sh wizard writes both to .env idempotently, preserving any unrelated profiles you already have set.
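
The derivation is a straight one-to-many mapping. A minimal Python sketch of the shape — note the right-hand values (profile and strategy names) are invented for illustration; only NOVA_INFERENCE_MODE, COMPOSE_PROFILES, LLM_ROUTING_STRATEGY, and the three mode names come from this changelog:

```python
# Hypothetical sketch of how one user-facing knob derives the two
# downstream settings; the mapped values are illustrative guesses.
MODE_MAP = {
    "hybrid":     {"COMPOSE_PROFILES": "ollama", "LLM_ROUTING_STRATEGY": "local-first"},
    "local-only": {"COMPOSE_PROFILES": "ollama", "LLM_ROUTING_STRATEGY": "local-only"},
    "cloud-only": {"COMPOSE_PROFILES": "",       "LLM_ROUTING_STRATEGY": "cloud-only"},
}

def derive_env(mode: str) -> dict[str, str]:
    """Map a single NOVA_INFERENCE_MODE to the two derived .env settings."""
    if mode not in MODE_MAP:
        raise ValueError(f"unknown NOVA_INFERENCE_MODE: {mode!r}")
    return {"NOVA_INFERENCE_MODE": mode, **MODE_MAP[mode]}

# cloud-only leaves COMPOSE_PROFILES empty, so the ollama service never ships
print(derive_env("cloud-only")["COMPOSE_PROFILES"])
```

In the real wizard the write is idempotent and merges with any unrelated profiles already in .env; the sketch above only covers the mode-to-values step.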

Heads-up for existing installs: if your .env previously had OLLAMA_BASE_URL=auto or OLLAMA_BASE_URL=host and you depended on the gateway probing your host’s Ollama instance (Windows/macOS native install, remote LAN box, etc.), those values now resolve to the bundled service (http://ollama:11434) instead. The brittle subprocess-based probe (and scripts/resolve-ollama-url.sh) is gone. To keep using your host’s Ollama, set OLLAMA_BASE_URL to a literal URL — e.g. OLLAMA_BASE_URL=http://host.docker.internal:11434 for a same-host install, or OLLAMA_BASE_URL=http://192.168.x.y:11434 for a LAN box.

Security hardening — admin secret, OAuth state, recovery guards

Five interlocking security gaps closed in one chain:

  • Admin secret no longer impersonates users. The X-Admin-Secret header used to silently authenticate user-context endpoints — every fresh install was effectively “log in as anyone” by header. Now UserDep is JWT-only; admin secret authenticates admin endpoints only. Worker services (intel-worker, knowledge-worker) were unaffected because they already hit admin endpoints exclusively.
  • Random admin secret on install. scripts/install.sh generates a 32-byte URL-safe NOVA_ADMIN_SECRET when .env contains the literal default, an empty value, or no value at all. The generated secret is displayed once at the end of install with a “save this to your password manager” prompt. Orchestrator, recovery, and cortex now refuse to start with the literal default unless NOVA_ALLOW_DEFAULT_ADMIN_SECRET=1 is set (escape hatch for tests/dev).
  • Recovery service hardened. CORS allowlist replaces wildcard. Critical services (postgres, redis, recovery itself) cannot be restarted via the API — operators must use docker compose restart from the host where the consequences are obvious. Container matching switched from substring (if name in c.name) to exact com.docker.compose.service label, so restart_service("post") no longer accidentally matches postgres.
  • Cortex CORS allowlist. Same wildcard → settings-driven allowlist fix as recovery; cortex now matches the orchestrator pattern.
  • OAuth CSRF protection. Google OAuth flow now requires a state parameter. /api/v1/auth/google mints 32 bytes of URL-safe random state, stores it in Redis with a 10-minute TTL, and returns it alongside the consent URL. /api/v1/auth/google/callback validates state via GETDEL (single-use) and uses the Redis-stored redirect_uri rather than any client-supplied value — closing both CSRF and redirect_uri-tampering vectors. PKCE is a follow-up.

If you’re upgrading: rerun ./scripts/install.sh to regenerate your admin secret (or set a strong NOVA_ADMIN_SECRET manually). The Redis runtime override at nova:config:auth.admin_secret still wins at runtime — clear it with docker compose exec redis redis-cli -n 1 DEL nova:config:auth.admin_secret if it has drifted from .env.
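
If you’d rather set the secret by hand, any 32-byte URL-safe random value matches what the installer generates; Python’s secrets module is one convenient source (a sketch, not install.sh’s actual code):

```python
import secrets

# 32 random bytes, URL-safe base64-encoded -> a 43-character secret.
secret = secrets.token_urlsafe(32)
print(f"NOVA_ADMIN_SECRET={secret}")
```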

A few intentionally deferred items are tracked as future work: a built-in secret manager (which will absorb a “rotate from dashboard” feature), service-account JWTs to replace X-Admin-Secret for internal worker calls, OAuth PKCE, and retiring the admin secret entirely in favor of an is_admin JWT claim. The current chain shrinks the blast radius enough that those become value-adds, not emergencies.

./uninstall — clean removal in one command

Nova now ships with a top-level ./uninstall script that removes everything Nova installed on the machine: containers, networks, named volumes, locally-built Docker images, the bind-mounted data/ directory, backups/, .env and its backups, build artifacts (dist/, node_modules/, __pycache__, .pytest_cache), and the agent workspace at ~/.nova/workspace/.

The flow is preview-first: ./uninstall always shows you exactly what will be deleted, broken down by category with disk-size totals, before asking for an explicit uninstall confirmation. Pass --dry-run to see the preview without any destruction, or --yes to skip the confirmation in scripted contexts.
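
The flag handling reduces to a small decision function. An illustrative Python reduction — the category list and the confirmation word are invented here; only --dry-run and --yes come from the script:

```python
def should_delete(args: list[str], answer: str = "") -> bool:
    """Preview always; destruction only on explicit consent."""
    print("Will delete: containers, volumes, data/, backups/, .env")  # preview first
    if "--dry-run" in args:
        return False                 # preview only, never destructive
    if "--yes" in args:
        return True                  # scripted contexts skip the prompt
    return answer == "uninstall"     # interactive: typed confirmation (word is assumed)
```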

The cloned repo source itself is left intact (delete it manually with cd .. && rm -rf nova if you want), and shared upstream Docker images (ollama/ollama, pgvector/pgvector, redis, etc.) are NOT touched — they may be in use by other Docker projects on the same machine. Use docker image prune -a separately if you want a deeper sweep.

For routine “reset Nova to a clean slate while keeping it installed,” use the in-app factory reset under Settings → System. ./uninstall is for “Nova is leaving this machine.”

Also in this release: the top-level setup script and scripts/setup.sh have been renamed to install and scripts/install.sh to match the conventional install/uninstall pairing. The make setup target is now make install.

One-line install + --help everywhere

Two small DX improvements that make Nova nicer to start and nicer to live with.

scripts/bootstrap.sh + curl-pipe-bash one-liner. New users can now install Nova with a single command:

curl -fsSL https://raw.githubusercontent.com/jeremyspofford/nova/main/scripts/bootstrap.sh | bash

The bootstrap script checks for git and docker, clones the repo (defaults to ./nova, override with NOVA_DIR=<path>), and re-attaches the install wizard to the controlling terminal so you land directly in the mode-selection prompt. Pass --no-install to clone without launching the wizard, or just keep using the git clone && cd nova && ./install flow if you’d rather audit the source first — both routes hit the same wizard.
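
The prerequisite check is the classic which-based probe; a minimal sketch of the logic’s shape (bootstrap.sh itself is shell, and its exact messages differ):

```python
import shutil

def missing_tools(required: list[str]) -> list[str]:
    """Return the required executables that are absent from PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# Abort early with a clear message instead of failing mid-clone.
missing = missing_tools(["git", "docker"])
if missing:
    print(f"missing prerequisites: {', '.join(missing)} - install them and re-run")
```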

--help on every script. Every user-facing script (./install, ./uninstall, scripts/install.sh, scripts/backup.sh, scripts/restore.sh, scripts/setup-remote-ollama.sh, scripts/detect_hardware.sh, scripts/bootstrap.sh) now responds to --help / -h with a clear usage block — what the script does, how to invoke it, what flags and environment variables it accepts. No more “what does this script even do” — just append --help.
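
In Python terms, the contract each script now honors is what argparse gives for free; the program name and flag below are invented examples of the uniform usage-block shape:

```python
import argparse

# Hypothetical flags; the point is the consistent -h/--help usage block.
parser = argparse.ArgumentParser(
    prog="backup.sh",
    description="Back up Nova's databases and volumes.",
)
parser.add_argument("--output", metavar="DIR", help="where to write the archive")
help_text = parser.format_help()
print(help_text)
```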

Voice Chat & Conversation Mode

  • Voice service (voice-service, port 8130) — STT via OpenAI Whisper, TTS via OpenAI TTS with provider abstraction for Deepgram and ElevenLabs
  • Conversation mode on Brain page — Gemini-style hands-free voice loop with auto-listen, silence detection, and barge-in interruption
  • Barge-in support — start talking while Nova is speaking to interrupt her mid-sentence; warm mic pattern eliminates turn-transition latency
  • Live transcription — real-time display of words as you speak (Web Speech API for live display, Whisper for final accuracy)
  • Audio level visualization — animated bars respond to mic input volume in real-time
  • Mute now immediately stops TTS playback mid-sentence (previously it only prevented new sentences from starting)
  • Model selector synced between Chat and Brain pages — change model from either and it stays in sync
  • Configurable voice settings: silence timeout (500ms-10s) and barge-in threshold (0.05-0.50) in Dashboard Settings
  • Voice service is optional — enable with --profile voice, dashboard auto-hides voice UI when unavailable
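
The two configurable knobs interact in a simple per-tick decision. A hedged sketch of the turn-taking logic — the defaults and decision order are inferred from the settings ranges above, not from Nova’s source:

```python
def turn_action(nova_speaking: bool, mic_level: float, silent_ms: int,
                barge_in_threshold: float = 0.15,
                silence_timeout_ms: int = 2000) -> str:
    """Decide the next conversation-mode action from mic state.

    Defaults sit inside the configurable ranges (0.05-0.50, 500ms-10s);
    the actual defaults and ordering are assumptions.
    """
    if nova_speaking and mic_level >= barge_in_threshold:
        return "interrupt"   # barge-in: stop TTS mid-sentence
    if not nova_speaking and silent_ms >= silence_timeout_ms:
        return "end-turn"    # user went quiet: send the utterance
    return "listen"          # warm mic keeps capturing between turns

print(turn_action(nova_speaking=True, mic_level=0.3, silent_ms=0))  # interrupt
```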

Managed Inference Backends

  • Added managed inference backend lifecycle — select vLLM or Ollama from the dashboard, Nova handles container start/stop/health
  • Hardware detection: auto-detects GPU vendor, VRAM, Docker GPU runtime at setup and runtime
  • New LocalInferenceProvider in the LLM gateway wraps the active backend with a 5-second config cache
  • Backend switching with drain protocol: zero dropped requests when switching between backends
  • New Settings section: Local Inference (AI & Models tab) with backend selector, live status, hardware info
  • Recovery service now manages inference containers via Docker Compose profiles with health monitoring (30s interval, exponential backoff restart)
  • vLLM model discovery via /v1/models endpoint
  • In-flight request tracking (GET /health/inflight) enables graceful drain during backend switches
  • Redis db7 allocated for recovery service (nova:system:* namespace for system facts)

Remote Access & Navigation

  • Added Remote Access page with Cloudflare Tunnel and Tailscale setup wizards
  • Updated NavBar with improved navigation structure
  • Expanded MCP server catalog with new entries

Shared UI & Mobile

  • Introduced shared UI primitives for consistent component design
  • Fixed mobile layout overflow across all dashboard pages
  • CSS deduplication pass

Recovery Service

  • Added recovery sidecar service for backup/restore and factory reset
  • Database backup/restore via CLI and web UI
  • Service management with Docker SDK integration
  • Startup screen showing services coming online

Platform Hardening

  • Fixed MCP tools being invisible to agents
  • Added test foundation with pytest fixtures
  • Fixed streaming token counts for subscription providers
  • Fixed reaper race condition with Redis dedup gate
  • Structured JSON logging across all services
  • Embedding cache activation (3-tier: Redis, PostgreSQL, LLM Gateway)
  • Working memory cleanup background job

Dashboard MVP

  • Overview page with live agent cards
  • Usage analytics with monthly/weekly/daily charts
  • API key management with one-time reveal
  • Models page showing 39 models grouped by provider
  • Dark mode and theme system
  • Settings page with provider status, context budgets, and admin controls