# Configuration

Nova is configured primarily through a `.env` file and a `models.yaml` file. Run `./setup` for interactive configuration, or edit these files manually.

Copy `.env.example` to `.env` and edit it:

```sh
cp .env.example .env
```
| Variable | Description | Default |
| --- | --- | --- |
| `POSTGRES_PASSWORD` | Database password (required) | (empty — set during setup) |
| `NOVA_ADMIN_SECRET` | Secret for admin API access via the `X-Admin-Secret` header | `nova-admin-secret-change-me` |
| `LOG_LEVEL` | Logging verbosity | `INFO` |
| `REQUIRE_AUTH` | Require an API key for all requests | `false` |
| `SHELL_TIMEOUT_SECONDS` | Timeout for shell command execution | `30` |
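
Put together, a minimal `.env` covering these core variables might look like this (all values are illustrative placeholders, not recommended settings):

```
POSTGRES_PASSWORD=a-strong-password
NOVA_ADMIN_SECRET=a-long-random-admin-secret
LOG_LEVEL=INFO
REQUIRE_AUTH=false
SHELL_TIMEOUT_SECONDS=30
```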
| Variable | Description | Default |
| --- | --- | --- |
| `COMPOSE_PROFILES` | Comma-separated Docker Compose profiles (e.g., `local-ollama`, `local-vllm`) | (empty) |
| `OLLAMA_BASE_URL` | URL of a remote Ollama instance | (empty) |
| `LLM_ROUTING_STRATEGY` | Model routing strategy (see below) | `local-first` |
| `DEFAULT_CHAT_MODEL` | Default model for chat interactions | `llama3.2` |
| Variable | Description |
| --- | --- |
| `WOL_MAC_ADDRESS` | MAC address of the remote GPU machine |
| `WOL_BROADCAST_IP` | Broadcast IP for Wake-on-LAN packets |
| Variable | Description |
| --- | --- |
| `CORS_ALLOWED_ORIGINS` | Comma-separated origins (default covers local dev ports) |

Nova supports many LLM providers. Configure the ones you want to use:

## Subscription providers (use your existing subscription)

| Variable | Provider | Setup |
| --- | --- | --- |
| `CLAUDE_CODE_OAUTH_TOKEN` | Claude Max/Pro | Run `claude auth login && claude setup-token` |
| `CHATGPT_ACCESS_TOKEN` | ChatGPT Plus/Pro | Run `codex login` |

## Free tier providers (no credit card required)

| Variable | Provider | Sign up |
| --- | --- | --- |
| `GROQ_API_KEY` | Groq | console.groq.com |
| `CEREBRAS_API_KEY` | Cerebras | cloud.cerebras.ai |
| `GEMINI_API_KEY` | Gemini | aistudio.google.com |
| `OPENROUTER_API_KEY` | OpenRouter | openrouter.ai |
| `GITHUB_TOKEN` | GitHub Models | github.com/settings/tokens |
| Variable | Provider | Sign up |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic | console.anthropic.com |
| `OPENAI_API_KEY` | OpenAI | platform.openai.com |

Override the default model for each provider:

| Variable | Example |
| --- | --- |
| `DEFAULT_OLLAMA_MODEL` | `llama3.2` |
| `DEFAULT_GROQ_MODEL` | `groq/llama-3.3-70b-versatile` |
| `DEFAULT_GEMINI_MODEL` | `gemini/gemini-2.5-flash` |
| `DEFAULT_CEREBRAS_MODEL` | `cerebras/llama-3.3-70b` |
| `DEFAULT_CLAUDE_MAX_MODEL` | `claude-max/claude-sonnet-4-6` |
| `DEFAULT_CHATGPT_MODEL` | `chatgpt/gpt-4o` |
| `DEFAULT_OPENROUTER_MODEL` | `openrouter/meta-llama/llama-3.1-8b-instruct:free` |
| `DEFAULT_GITHUB_MODEL` | `github/gpt-4o-mini` |

Nova supports two options for accessing the dashboard remotely:

| Option | Variable | Description |
| --- | --- | --- |
| Cloudflare Tunnel | `CLOUDFLARE_TUNNEL_TOKEN` | Browser access from anywhere with automatic HTTPS. Add `cloudflare-tunnel` to `COMPOSE_PROFILES`. |
| Tailscale | `TAILSCALE_AUTHKEY` | Fully private VPN mesh over an encrypted WireGuard tunnel. Add `tailscale` to `COMPOSE_PROFILES`. |
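
For example, to run a local Ollama backend and expose the dashboard through a Cloudflare Tunnel, the relevant `.env` lines would look like this (the token value is a placeholder):

```
COMPOSE_PROFILES=local-ollama,cloudflare-tunnel
CLOUDFLARE_TUNNEL_TOKEN=your-tunnel-token-here
```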

The `models.yaml` file defines which Ollama models to auto-pull on startup when running with a local Ollama instance. Edit this file to control which models are available locally.
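
As a sketch only — the exact schema is not shown here, so the field names below are assumptions; check the `models.yaml` shipped with Nova — a file listing models to auto-pull might look like:

```yaml
# Hypothetical structure; verify against the repository's models.yaml.
models:
  - llama3.2
  - qwen2.5-coder:7b
```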

The `LLM_ROUTING_STRATEGY` variable controls how Nova selects between local and cloud providers:

| Strategy | Behavior |
| --- | --- |
| `local-only` | Only use local inference backends (Ollama, vLLM, SGLang, llama.cpp). Fails if no local backend is available. |
| `local-first` | Try local backends first; fall back to cloud providers if unavailable. |
| `cloud-only` | Only use cloud API providers. No local inference. |
| `cloud-first` | Try cloud providers first; fall back to local backends if unavailable. |

This setting is runtime-configurable from the dashboard Settings page.
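
The four strategies can be sketched as an ordering over candidate providers. This is an illustrative model of the behavior described above, not Nova's actual code; the provider lists and function name are invented for the example:

```python
# Illustrative sketch of the routing strategies; not Nova's implementation.
LOCAL = ["ollama", "vllm", "sglang", "llamacpp"]   # example local backends
CLOUD = ["groq", "gemini", "anthropic", "openai"]  # example cloud providers

def candidate_order(strategy, local_available, cloud_available):
    """Return the providers to try, in order, for a given strategy."""
    local = LOCAL if local_available else []
    cloud = CLOUD if cloud_available else []
    if strategy == "local-only":
        if not local:
            raise RuntimeError("no local backend available")
        return local
    if strategy == "cloud-only":
        return cloud
    if strategy == "local-first":
        return local + cloud   # local backends first, cloud as fallback
    if strategy == "cloud-first":
        return cloud + local   # cloud providers first, local as fallback
    raise ValueError(f"unknown strategy: {strategy}")
```

Note that `local-first` degrades gracefully to cloud-only behavior when no local backend is up, while `local-only` fails outright, matching the table above.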

The orchestrator allocates context window space across different purposes to prevent any single source from consuming the entire context:

| Category | Budget | Purpose |
| --- | --- | --- |
| System | 10% | System prompts and agent instructions |
| Tools | 15% | MCP tool definitions and schemas |
| Memory | 40% | Retrieved memories and semantic context |
| History | 20% | Conversation history |
| Working | 15% | Current task working space |

These budgets ensure that long conversation histories or large memory retrievals don’t crowd out tool definitions or system prompts.
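
The split can be expressed as a simple proportional allocation. The percentages come from the table above; the function itself is a sketch for illustration, not Nova's implementation:

```python
# Budget shares from the table above; they sum to 100% of the context window.
BUDGETS = {
    "system": 0.10,   # system prompts and agent instructions
    "tools": 0.15,    # MCP tool definitions and schemas
    "memory": 0.40,   # retrieved memories and semantic context
    "history": 0.20,  # conversation history
    "working": 0.15,  # current task working space
}

def allocate(context_window_tokens: int) -> dict[str, int]:
    """Split a model's context window into per-category token budgets."""
    assert abs(sum(BUDGETS.values()) - 1.0) < 1e-9  # shares cover the window
    return {name: int(context_window_tokens * share)
            for name, share in BUDGETS.items()}
```

For a 128k-token context window, this gives memory the largest slice (51,200 tokens) while still reserving fixed room for tools and system prompts.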