Open Source & Self-Hosted

Autonomous AI
That Works Where You Do

Define a goal. Nova breaks it into subtasks, executes them through a coordinated agent pipeline, evaluates progress, re-plans, and completes — on your hardware, under your control.
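The goal loop described above can be sketched in a few lines of Python. Everything here — the `Goal` type and the `plan`, `execute`, and `evaluate` callables — is a hypothetical stand-in for illustration, not Nova's real API:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    subtasks: list = field(default_factory=list)

def run_goal(goal, plan, execute, evaluate, max_replans=3):
    """Control loop sketch: plan -> execute -> evaluate -> re-plan."""
    goal.subtasks = plan(goal.description)
    for _ in range(max_replans + 1):
        results = [execute(t) for t in goal.subtasks]
        remaining = evaluate(goal, results)   # subtasks still unmet
        if not remaining:
            return results                    # goal complete
        goal.subtasks = remaining             # re-plan and retry
    raise RuntimeError("goal not completed within re-plan budget")
```

The point of the sketch is the shape of the loop: execution and evaluation repeat until the evaluator reports nothing left to do, bounded by a re-plan budget.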

8 Services
5 Pipeline Agents
4 Inference Backends
14+ MCP Tools

Self-Directed

Define a goal. Nova breaks it into subtasks, executes autonomously, re-plans as needed.

Self-Improving

Learns your preferences, customizes itself, updates its own configuration over time.

Private & Secure

Runs entirely on your hardware. Your data never leaves. Sandbox tiers control what agents can access.

Parallel By Design

Continuous batching, concurrent pipelines, 4 inference backends. No bottleneck.

Agent Pipeline

Five Agents. One Pipeline.

Every task runs through specialized agents with built-in safety rails. No shortcuts.

Context Agent

Curates relevant code, docs, and task history

Task Agent

Produces the actual output — code, config, or answer

Guardrail Agent

Checks for prompt injection, PII, credential leaks, spec drift

Code Review Agent

Pass, needs refactor, or reject — loops back up to 3 times

Decision Agent

Creates decision record and escalates to human on rejection
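The five stages above compose into one function. This is a minimal sketch with hypothetical stand-in callables for the real agents; the control flow (guardrail check, review verdicts, a bounded refactor loop, escalation on rejection) mirrors the descriptions above:

```python
def run_pipeline(task, context, produce, guardrail, review, decide, max_loops=3):
    """Sketch of the five-stage pipeline; all callables are stand-ins."""
    ctx = context(task)                      # Context Agent: curate inputs
    output = produce(task, ctx)              # Task Agent: draft the output
    for _ in range(max_loops):
        if not guardrail(output):            # Guardrail Agent: safety check
            return decide(task, output, rejected=True)   # escalate to human
        verdict = review(output)             # Code Review Agent
        if verdict == "pass":
            return decide(task, output, rejected=False)
        if verdict == "reject":
            return decide(task, output, rejected=True)
        output = produce(task, ctx)          # "needs refactor": loop back
    return decide(task, output, rejected=True)  # loop budget exhausted
```

Note that "needs refactor" loops back to the Task Agent, while both a guardrail failure and a "reject" verdict short-circuit straight to the Decision Agent.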

Post-Pipeline

Parallel, non-blocking:
Documentation, Diagramming, Security Review, Memory Extraction
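Because these steps are parallel and non-blocking, they map naturally onto `asyncio.gather`. A sketch with hypothetical placeholder steps, not Nova's real post-pipeline code:

```python
import asyncio

async def post_pipeline(output, steps):
    """Run post-pipeline steps concurrently; none blocks the others.
    `steps` is a list of async callables (placeholders for documentation,
    diagramming, security review, and memory extraction)."""
    results = await asyncio.gather(
        *(step(output) for step in steps), return_exceptions=True
    )
    # With return_exceptions=True, a failure in one step is captured
    # as an exception object instead of cancelling its siblings.
    return results
```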

Capabilities

Everything You Need

From local inference to cloud routing, from IDE integration to self-configuration.

4 Inference Backends

Ollama, vLLM, llama.cpp, SGLang. Pick the right engine for your workload. Run multiple simultaneously.

RadixAttention Optimization

SGLang caches shared agent system prompts across parallel tasks for significant inference speedup.
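The speedup comes from computing the KV cache for the shared prompt prefix once and reusing it across requests. A rough, illustrative estimate of how much of a batch of parallel agent prompts is shareable — this is not SGLang's implementation, just the intuition:

```python
import os

def shared_prefix_ratio(prompts):
    """Fraction of total prompt text covered by a reusable common prefix.
    Parallel agent calls that share a system prompt share a long prefix,
    whose KV cache only needs to be computed once."""
    prefix = os.path.commonprefix(prompts)
    total = sum(len(p) for p in prompts)
    reused = len(prefix) * (len(prompts) - 1)   # cached for all but the first
    return reused / total if total else 0.0
```

When many tasks share one long agent system prompt, most of each prompt is prefix, which is why prefix caching pays off for parallel pipelines.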

Skills & Rules

Extensible prompt templates and declarative behavior constraints without code changes.

Sandbox Tiers

Four execution environments — isolated, nova, workspace, host — with security-first defaults.
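One way to picture the tiers is as a deny-by-default capability map. The tier names come from above, but the specific permissions in this table are assumptions for illustration, not Nova's actual policy:

```python
# Hypothetical capability map; permissions here are illustrative only.
SANDBOX_TIERS = {
    "isolated":  {"filesystem": None,        "network": False, "self_config": False},
    "nova":      {"filesystem": "nova-home", "network": False, "self_config": True},
    "workspace": {"filesystem": "workspace", "network": True,  "self_config": False},
    "host":      {"filesystem": "host",      "network": True,  "self_config": False},
}

def allowed(tier, capability):
    """Deny by default: unknown tiers and capabilities grant nothing."""
    return bool(SANDBOX_TIERS.get(tier, {}).get(capability))
```

The design point is the default: an unrecognized tier or capability resolves to "no", so a misconfigured agent loses access rather than gaining it.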

MCP Tool Ecosystem

Plug in any MCP server: GitHub, Slack, Sentry, Playwright, Docker, and more.

Multi-Provider LLM Routing

Anthropic, OpenAI, Ollama, Groq, Gemini, Cerebras, OpenRouter, plus subscription-based Claude/ChatGPT.
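Routing can be as simple as a longest-prefix match on the model name. The table and fallback below are an illustrative assumption, not Nova's actual router:

```python
# Hypothetical routing table: model-name prefix -> provider.
ROUTES = {
    "claude": "anthropic",
    "gpt":    "openai",
    "gemini": "gemini",
    "llama":  "ollama",
}

def route(model, default="openrouter"):
    """Pick a provider by longest matching model-name prefix."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if model.startswith(prefix):
            return ROUTES[prefix]
    return default
```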

GPU-Aware Setup

Auto-detects hardware, recommends backends, supports remote GPU over LAN with Wake-on-LAN.
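Wake-on-LAN itself is a simple UDP protocol: a "magic packet" of six 0xFF bytes followed by the target MAC address repeated 16 times, sent to the broadcast address. A minimal standard-library sketch:

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC x 16."""
    payload = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(payload) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + payload * 16

def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the LAN (UDP port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac), (broadcast, port))
```

This only works if the target machine's NIC and BIOS/UEFI have Wake-on-LAN enabled and both machines share a LAN segment.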

Recovery & Resilience

Backup/restore, factory reset, and service health monitoring via a dedicated sidecar service.

IDE Integration

OpenAI-compatible endpoint works with Cursor, Continue.dev, Aider, and any OpenAI-API client.
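Any OpenAI-API client works by pointing its base URL at Nova. A dependency-free sketch using only the standard library; the `/v1` path and the model name `"nova"` are assumptions, and the port matches the setup output below:

```python
import json
from urllib import request

def build_chat_request(prompt, model="nova"):
    """Payload for an OpenAI-compatible /chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, base_url="http://localhost:3001/v1", model="nova"):
    """POST the request and return the assistant message text."""
    body = json.dumps(build_chat_request(prompt, model)).encode()
    req = request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same shape is what Cursor, Continue.dev, and Aider send under the hood, which is why a single OpenAI-compatible endpoint covers all of them.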

Self-Configuration

Nova can modify its own settings, prompts, and pod definitions via the nova sandbox tier.

Get Started

Three commands. That's it.

The setup wizard detects your hardware, configures providers, and starts everything.

terminal
~ $ git clone https://github.com/arialabs/nova.git
~ $ cd nova
~/nova $ ./setup
Nova is running at http://localhost:3001

Remote GPU

Got a separate machine with a GPU? Run the remote setup script on it, then re-run ./setup and choose "Remote GPU".

Deployment guide