
# Configuration

Nova is configured primarily through a `.env` file and a `models.yaml` file. Run `./install` for interactive configuration, or edit these files manually.

Copy `.env.example` to `.env` and edit:

```sh
cp .env.example .env
```
| Variable | Description | Default |
| --- | --- | --- |
| `POSTGRES_PASSWORD` | Database password (required) | (empty; set during setup) |
| `NOVA_ADMIN_SECRET` | Secret for admin API access via the `X-Admin-Secret` header | `nova-admin-secret-change-me` |
| `LOG_LEVEL` | Logging verbosity | `INFO` |
| `REQUIRE_AUTH` | Require an API key for all requests | `false` |
| `SHELL_TIMEOUT_SECONDS` | Timeout (seconds) for shell command execution | `30` |
| Variable | Description | Default |
| --- | --- | --- |
| `COMPOSE_PROFILES` | Comma-separated Docker Compose profiles (e.g. `local-ollama`, `local-vllm`) | (empty) |
| `OLLAMA_BASE_URL` | URL of a remote Ollama instance | (empty) |
| `LLM_ROUTING_STRATEGY` | Model routing strategy (see below) | `local-first` |
| `DEFAULT_CHAT_MODEL` | Default model for chat interactions | `llama3.2` |
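Putting the core settings together, a minimal `.env` for a local-Ollama setup might look like this (all values below are illustrative placeholders; pick your own secrets):

```sh
# Core settings (example values only)
POSTGRES_PASSWORD=change-me-strong-password
NOVA_ADMIN_SECRET=change-me-admin-secret
LOG_LEVEL=INFO
REQUIRE_AUTH=false
SHELL_TIMEOUT_SECONDS=30

# Local inference via Ollama
COMPOSE_PROFILES=local-ollama
LLM_ROUTING_STRATEGY=local-first
DEFAULT_CHAT_MODEL=llama3.2
```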

The active inference backend is configured from the dashboard Settings page (AI & Models → Local Inference). These settings are stored in Redis (`nova:config:inference.*`) and take effect immediately; no restart is required.

| Key | Description | Default |
| --- | --- | --- |
| `inference.backend` | Active backend: `ollama`, `vllm`, or `none` | `ollama` |
| `inference.state` | Backend state: `ready`, `draining`, `starting`, or `error` | `ready` |
| `inference.url` | Override URL for the backend (auto-detected from the Docker service name if empty) | (empty) |
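Because these settings live in Redis, they can also be inspected or changed directly with `redis-cli`. The key names below follow the `nova:config:inference.*` pattern described above, but treat the commands as an illustrative sketch rather than a supported interface; the dashboard Settings page is the intended way to change them:

```sh
redis-cli get nova:config:inference.backend
redis-cli set nova:config:inference.backend vllm
```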

The setup script runs hardware detection and writes the results to `data/hardware.json`. The recovery service syncs this file to Redis on startup. The dashboard shows the detected hardware and recommends a backend based on GPU availability.

| Variable | Description |
| --- | --- |
| `WOL_MAC_ADDRESS` | MAC address of the remote GPU machine |
| `WOL_BROADCAST_IP` | Broadcast IP for Wake-on-LAN packets |
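Wake-on-LAN itself is a simple UDP protocol: a "magic packet" of six `0xFF` bytes followed by the target MAC address repeated 16 times, sent to the broadcast address. A minimal Python sketch of what these two variables drive (illustrative, not Nova's implementation):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError(f"invalid MAC address: {mac!r}")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast_ip: str, port: int = 9) -> None:
    """Send the magic packet to the broadcast IP (UDP port 9 is the convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast_ip, port))
```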
| Variable | Description |
| --- | --- |
| `CORS_ALLOWED_ORIGINS` | Comma-separated list of allowed origins (default covers local dev ports) |

Nova supports many LLM providers. Configure the ones you want to use:

## Subscription providers (use your existing subscription)
| Variable | Provider | Setup |
| --- | --- | --- |
| `CLAUDE_CODE_OAUTH_TOKEN` | Claude Max/Pro | Run `claude auth login && claude setup-token` |
| `CHATGPT_ACCESS_TOKEN` | ChatGPT Plus/Pro | Run `codex login` |

## Free tier providers (no credit card required)
| Variable | Provider | Sign up |
| --- | --- | --- |
| `GROQ_API_KEY` | Groq | console.groq.com |
| `CEREBRAS_API_KEY` | Cerebras | cloud.cerebras.ai |
| `GEMINI_API_KEY` | Gemini | aistudio.google.com |
| `OPENROUTER_API_KEY` | OpenRouter | openrouter.ai |
| `GITHUB_TOKEN` | GitHub Models | github.com/settings/tokens |
| Variable | Provider | Sign up |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic | console.anthropic.com |
| `OPENAI_API_KEY` | OpenAI | platform.openai.com |

Override the default model for each provider:

| Variable | Example |
| --- | --- |
| `DEFAULT_OLLAMA_MODEL` | `llama3.2` |
| `DEFAULT_GROQ_MODEL` | `groq/llama-3.3-70b-versatile` |
| `DEFAULT_GEMINI_MODEL` | `gemini/gemini-2.5-flash` |
| `DEFAULT_CEREBRAS_MODEL` | `cerebras/llama-3.3-70b` |
| `DEFAULT_CLAUDE_MAX_MODEL` | `claude-max/claude-sonnet-4-6` |
| `DEFAULT_CHATGPT_MODEL` | `chatgpt/gpt-4o` |
| `DEFAULT_OPENROUTER_MODEL` | `openrouter/meta-llama/llama-3.1-8b-instruct:free` |
| `DEFAULT_GITHUB_MODEL` | `github/gpt-4o-mini` |

The voice service is optional. Enable it with `docker compose --profile voice up`.

| Variable | Description | Default |
| --- | --- | --- |
| `STT_PROVIDER` | Speech-to-text provider (`openai`, `deepgram`) | `openai` |
| `TTS_PROVIDER` | Text-to-speech provider (`openai`, `elevenlabs`) | `openai` |
| `TTS_VOICE` | Default TTS voice | `nova` |
| `TTS_MODEL` | TTS quality (`tts-1` fast, `tts-1-hd` higher quality) | `tts-1` |
| `DEEPGRAM_API_KEY` | API key for Deepgram STT | (optional) |
| `ELEVENLABS_API_KEY` | API key for ElevenLabs TTS | (optional) |

Voice uses the same `OPENAI_API_KEY` as the LLM provider section. All voice settings are also runtime-configurable from Dashboard → Settings → Voice.

Nova supports two options for accessing the dashboard remotely:

| Option | Variable | Description |
| --- | --- | --- |
| Cloudflare Tunnel | `CLOUDFLARE_TUNNEL_TOKEN` | Browser access from anywhere with automatic HTTPS. Add `cloudflare-tunnel` to `COMPOSE_PROFILES`. |
| Tailscale | `TAILSCALE_AUTHKEY` | Fully private VPN mesh over an encrypted WireGuard tunnel. Add `tailscale` to `COMPOSE_PROFILES`. |
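For example, enabling the Cloudflare Tunnel option means adding the profile and token to `.env` (the token value below is a placeholder):

```sh
COMPOSE_PROFILES=cloudflare-tunnel
CLOUDFLARE_TUNNEL_TOKEN=your-tunnel-token-here
```

If you already run a local inference profile, append rather than replace, e.g. `COMPOSE_PROFILES=local-ollama,cloudflare-tunnel`.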

The `models.yaml` file defines which Ollama models to auto-pull on startup when running with a local Ollama instance. Edit this file to control which models are available locally.
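The exact schema of `models.yaml` is defined by your Nova installation; as a hypothetical illustration, a simple list of Ollama model tags to pull might look like:

```yaml
# Hypothetical structure; check the models.yaml shipped with your install
models:
  - llama3.2
  - nomic-embed-text
```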

The `LLM_ROUTING_STRATEGY` variable controls how Nova selects between local and cloud providers:

| Strategy | Behavior |
| --- | --- |
| `local-only` | Use only the active local inference backend; fail if none is available. |
| `local-first` | Try the local backend first; fall back to cloud providers. |
| `cloud-only` | Use only cloud API providers; skip local inference. |
| `cloud-first` | Try cloud providers first; fall back to the local backend. |

This setting is runtime-configurable from the dashboard Settings page.
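The four strategies reduce to an ordered preference list with optional fallback. A minimal sketch of the selection logic (illustrative, not Nova's actual router; the function and parameter names are hypothetical):

```python
from typing import Optional

# Each strategy is an ordered list of backends to try.
STRATEGY_ORDER = {
    "local-only": ["local"],
    "local-first": ["local", "cloud"],
    "cloud-only": ["cloud"],
    "cloud-first": ["cloud", "local"],
}

def pick_backend(strategy: str, local_up: bool, cloud_up: bool) -> Optional[str]:
    """Return the first available backend for the strategy, or None if none is up."""
    availability = {"local": local_up, "cloud": cloud_up}
    for backend in STRATEGY_ORDER[strategy]:
        if availability[backend]:
            return backend
    return None  # e.g. local-only with no local backend available
```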

These settings are managed from the dashboard Settings page (Nova Identity section) and stored in the `platform_config` table. They control how the AI presents itself.

| Key | Description | Default |
| --- | --- | --- |
| `nova.name` | Display name used in the system prompt, toolbar, and chat UI | `Nova` |
| `nova.persona` | Personality guidelines injected into the system prompt's `## Identity` block; defines communication style, tone, and character | (empty) |
| `nova.greeting` | Opening message shown on the Chat page before the user types; supports a `{name}` placeholder that resolves to the current name | `Hello! I'm {name}. I have access to your workspace...` |

Changes take effect immediately; no restart is required. The AI's system prompt is assembled dynamically:

1. **Identity**: name and persona from `nova.name` + `nova.persona`
2. **Platform Context**: model, tools, active agents
3. **Response Style**: formatting rules
4. **Memories**: relevant context from previous conversations
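Conceptually, assembly is just ordered concatenation of those four sections. A hypothetical sketch (the function name, parameters, and section bodies are illustrative, not Nova's actual code):

```python
def build_system_prompt(name, persona, platform_ctx, style_rules, memories):
    """Assemble the system prompt in the documented section order."""
    sections = [
        f"## Identity\nYou are {name}.\n{persona}".rstrip(),
        f"## Platform Context\n{platform_ctx}",
        f"## Response Style\n{style_rules}",
        "## Memories\n" + "\n".join(f"- {m}" for m in memories),
    ]
    return "\n\n".join(sections)
```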

The orchestrator allocates context window space across different purposes to prevent any single source from consuming the entire context:

| Category | Budget | Purpose |
| --- | --- | --- |
| System | 10% | System prompts and agent instructions |
| Tools | 15% | MCP tool definitions and schemas |
| Memory | 40% | Retrieved memories and semantic context |
| History | 20% | Conversation history |
| Working | 15% | Current task working space |

These budgets ensure that long conversation histories or large memory retrievals don’t crowd out tool definitions or system prompts.
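As a worked example, applying those percentages to a 128,000-token context window (a sketch of the arithmetic only; Nova's actual token accounting may differ):

```python
# Budget shares from the table above; they sum to 100%.
BUDGETS = {
    "system": 0.10,
    "tools": 0.15,
    "memory": 0.40,
    "history": 0.20,
    "working": 0.15,
}

def allocate(context_window: int) -> dict[str, int]:
    """Split a context window into per-category token budgets."""
    return {category: int(context_window * share) for category, share in BUDGETS.items()}
```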