
Quick Start

  • Docker Desktop (includes Docker Compose)
  • GNU Make (pre-installed on most Linux/macOS systems; on Windows use WSL or install via choco install make)
  • Git

No Python, Node.js, or database installs required — everything runs in containers.
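Before cloning, you can confirm the three prerequisites are on your PATH; a quick check (standard tool names only, nothing Nova-specific):

```shell
# Report whether each prerequisite is installed; prints MISSING for any gap.
missing=0
for tool in docker make git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
    missing=1
  fi
done
```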

git clone https://github.com/arialabs/nova.git
cd nova
./setup

The setup wizard handles the rest:

  1. Copies .env.example to .env if it doesn’t exist
  2. Detects GPU availability (NVIDIA / AMD ROCm)
  3. Asks about your deployment mode (cloud-only, local model serving, remote GPU)
  4. Configures LLM provider API keys
  5. Pulls selected Ollama models (if using local inference)
  6. Starts all services via Docker Compose

When it finishes, open http://localhost:3001 to access the dashboard.

If you have a separate machine with a GPU for AI inference:

# Run this ON the GPU machine:
bash <(curl -s https://raw.githubusercontent.com/arialabs/nova/main/scripts/setup-remote-ollama.sh)

Then re-run ./setup on the Nova machine and choose “Remote GPU”. The wizard will ask for the GPU machine’s IP address and configure Wake-on-LAN if desired.
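Under the hood, "Remote GPU" mode points Nova's inference traffic at the other machine. Assuming Ollama's default port 11434, the address the wizard configures looks like the following (the IP is a placeholder; `/api/tags` is Ollama's list-models endpoint, handy as a reachability check):

```shell
GPU_IP=192.168.1.50                   # placeholder; use your GPU machine's address
OLLAMA_URL="http://${GPU_IP}:11434"   # 11434 is Ollama's default listen port
echo "$OLLAMA_URL"
# Optional reachability check once the remote script has run:
curl -s "$OLLAMA_URL/api/tags" || echo "GPU machine not reachable yet"
```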

If you prefer to skip the wizard:

cp .env.example .env
# Edit .env with your preferred settings
make dev

See Configuration for all available settings.
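If you script the manual path, the edit step can be automated with sed. A sketch in a throwaway directory (the key names here are hypothetical stand-ins for whatever your .env.example actually contains):

```shell
tmp=$(mktemp -d) && cd "$tmp"
printf 'OPENAI_API_KEY=\nDASHBOARD_PORT=3001\n' > .env.example   # stand-in example file
cp .env.example .env
# -i.bak works on both GNU and BSD sed
sed -i.bak 's/^OPENAI_API_KEY=.*/OPENAI_API_KEY=sk-example/' .env
grep '^OPENAI_API_KEY=' .env
```

With the real .env edited in the repo root, `make dev` then brings the stack up.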

Check container status:

make ps

All 7 core services should show as healthy. Hit the health endpoints to confirm:

Service          Health endpoint
orchestrator     http://localhost:8000/health/live
llm-gateway      http://localhost:8001/health/live
memory-service   http://localhost:8002/health/live
chat-api         http://localhost:8080/health/live
dashboard        http://localhost:3001
recovery         http://localhost:8888/health/live
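The per-service checks can be looped; a sketch using the /health/live ports from the table above (a running stack should return HTTP 200; curl reports 000 when nothing answers on a port):

```shell
# Probe each /health/live endpoint; curl's %{http_code} is 000 when nothing answers.
for port in 8000 8001 8002 8080 8888; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:$port/health/live") || true
  echo "port $port -> HTTP $code"
done
```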

You can also test the chat interface at http://localhost:8080/ for an interactive demo.

Next steps:

  • Architecture — understand how the services fit together
  • Configuration — configure providers, models, and routing
  • Deployment — production commands, GPU overlays, backups