Quick Start
Prerequisites
- Docker Desktop (includes Docker Compose)
- GNU Make (pre-installed on most Linux/macOS systems; on Windows, use WSL or install via `choco install make`)
- Git
No Python, Node.js, or database installs required — everything runs in containers.
Install
```shell
git clone https://github.com/arialabs/nova.git
cd nova
./setup
```

The setup wizard handles the rest.
What the setup wizard does
- Copies `.env.example` to `.env` if it doesn’t exist
- Detects GPU availability (NVIDIA / AMD ROCm)
- Asks about your deployment mode (cloud-only, local model serving, remote GPU)
- Configures LLM provider API keys
- Pulls selected Ollama models (if using local inference)
- Starts all services via Docker Compose
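The wizard’s answers ultimately land in the `.env` file it creates. For orientation only, a hypothetical fragment — the variable names below are illustrative, not Nova’s actual keys (see `.env.example` for the real ones):

```shell
# Hypothetical .env fragment -- variable names are illustrative,
# not Nova's real configuration keys.
DEPLOYMENT_MODE=local            # e.g. cloud-only | local | remote-gpu
LLM_PROVIDER_API_KEY="changeme"  # placeholder, never commit real keys
OLLAMA_MODELS="llama3"           # models to pull for local inference
```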
When it finishes, open http://localhost:3001 to access the dashboard.
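If you are scripting the setup, you can poll the dashboard URL instead of checking by hand. A small sketch — the retry count and timeout are arbitrary choices, not Nova defaults:

```shell
# Poll the dashboard until it answers, giving up after a few tries.
DASH_URL="http://localhost:3001"
ready=no
for attempt in 1 2 3; do
  if curl -sf --max-time 2 "$DASH_URL" >/dev/null 2>&1; then
    ready=yes
    break
  fi
  sleep 1
done
echo "dashboard ready: $ready"
```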
Remote GPU (optional)
If you have a separate machine with a GPU for AI inference:

```shell
# Run this ON the GPU machine:
bash <(curl -s https://raw.githubusercontent.com/arialabs/nova/main/scripts/setup-remote-ollama.sh)
```

Then re-run `./setup` on the Nova machine and choose “Remote GPU”. The wizard will ask for the GPU machine’s IP address and configure Wake-on-LAN if desired.
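Before re-running `./setup`, it’s worth confirming the Nova machine can reach the GPU box at all. Ollama’s HTTP API listens on port 11434 by default and serves `GET /api/tags`; the IP below is a placeholder for your GPU machine’s address:

```shell
# Replace GPU_HOST with your GPU machine's actual address (placeholder shown).
GPU_HOST="${GPU_HOST:-192.168.1.50}"
if curl -sf --max-time 3 "http://${GPU_HOST}:11434/api/tags" >/dev/null 2>&1; then
  reachable=yes
else
  reachable=no
fi
echo "Ollama at ${GPU_HOST}: reachable=${reachable}"
```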
Manual configuration
If you prefer to skip the wizard:

```shell
cp .env.example .env
# Edit .env with your preferred settings
make dev
```

See Configuration for all available settings.
Verify everything is running
Check container status:

```shell
make ps
```

All 7 core services should show as healthy. Hit the health endpoints to confirm:
| Service | Health endpoint |
|---|---|
| orchestrator | http://localhost:8000/health/live |
| llm-gateway | http://localhost:8001/health/live |
| memory-service | http://localhost:8002/health/live |
| chat-api | http://localhost:8080/health/live |
| dashboard | http://localhost:3001 |
| recovery | http://localhost:8888/health/live |
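The per-service checks can be scripted in one loop. A sketch using the ports from the table above (the dashboard is left out since it serves a page rather than a `/health/live` endpoint):

```shell
# Probe each core service's liveness endpoint; prints one line per service.
for entry in orchestrator:8000 llm-gateway:8001 memory-service:8002 \
             chat-api:8080 recovery:8888; do
  name="${entry%%:*}"
  port="${entry##*:}"
  if curl -sf --max-time 2 "http://localhost:${port}/health/live" >/dev/null 2>&1; then
    echo "${name}: healthy"
  else
    echo "${name}: unreachable"
  fi
done
```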
You can also test the chat interface at http://localhost:8080/ for an interactive demo.
Next steps
- Architecture — understand how the services fit together
- Configuration — configure providers, models, and routing
- Deployment — production commands, GPU overlays, backups