IDE Integration

Nova exposes an OpenAI-compatible API on the LLM Gateway (http://localhost:8001/v1). Any tool that speaks the OpenAI chat completions protocol works out of the box.
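Concretely, "OpenAI-compatible" means the gateway accepts the standard chat completions request shape. A minimal Python sketch, standard library only, that builds such a request without sending it (so no running gateway is required):

```python
import json
import urllib.request

# Any OpenAI-style chat completions payload works; the model ID
# selects which Nova-registered provider handles the request.
payload = {
    "model": "claude-max/claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello from Nova"}],
}

req = urllib.request.Request(
    "http://localhost:8001/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With the gateway running, urllib.request.urlopen(req) would return
# an OpenAI-format response; the call is omitted here so the sketch
# stands alone.
print(req.full_url)
```

Any OpenAI SDK or tool that lets you override the base URL sends exactly this shape, which is why the integrations below need no Nova-specific client.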

For Continue (VS Code):

  1. Install the Continue extension
  2. Open the config: Cmd+Shift+P > Continue: Open config.yaml
  3. Add an entry to the models list:
```yaml
models:
  - name: Nova (Claude Sonnet)
    provider: openai
    model: claude-max/claude-sonnet-4-6
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
```

The apiBase field is the key setting: it redirects the extension's traffic from api.openai.com to Nova.

Add multiple entries to switch models from the Continue sidebar. Use roles to assign each model to specific tasks:

```yaml
models:
  - name: "Nova: Sonnet (fast)"
    provider: openai
    model: claude-max/claude-sonnet-4-5
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
      - apply
  - name: "Nova: Sonnet (latest)"
    provider: openai
    model: claude-max/claude-sonnet-4-6
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
      - apply
  - name: "Nova: Opus (most capable)"
    provider: openai
    model: claude-max/claude-opus-4
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
  - name: "Nova: GPT-4o"
    provider: openai
    model: openai/gpt-4o
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
```
To list all registered model IDs:

```sh
curl http://localhost:8001/v1/models | jq '.data[].id'
```
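If jq isn't installed, the same extraction is straightforward in Python. A sketch that parses a response of the shape the /v1/models endpoint returns (the sample body below is illustrative, not captured from a live gateway):

```python
import json

# Illustrative OpenAI-format /v1/models response body; the field
# names follow the OpenAI protocol, the model IDs come from this page.
sample = '''
{"object": "list",
 "data": [{"id": "claude-max/claude-sonnet-4-6", "object": "model"},
          {"id": "openai/gpt-4o", "object": "model"}]}
'''

model_ids = [m["id"] for m in json.loads(sample)["data"]]
print(model_ids)
```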

If REQUIRE_AUTH=true, create a key first:

```sh
curl -X POST http://localhost:8000/api/v1/keys \
  -H "X-Admin-Secret: your-admin-secret" \
  -H "Content-Type: application/json" \
  -d '{"name": "continue-dev", "rate_limit_rpm": 120}'
```

Then add apiKey to your model entries:

```yaml
models:
  - name: Nova (Claude Sonnet)
    provider: openai
    model: claude-max/claude-sonnet-4-6
    apiBase: http://localhost:8001/v1
    apiKey: sk-nova-your-key-here
    roles:
      - chat
      - edit
```
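OpenAI-protocol clients normally send the key as an Authorization: Bearer header, so you can sanity-check a key by attaching it to the same request shape. A sketch (the key value is the placeholder from the config above, and whether the gateway enforces the header depends on REQUIRE_AUTH):

```python
import json
import urllib.request

API_KEY = "sk-nova-your-key-here"  # placeholder, not a real key

req = urllib.request.Request(
    "http://localhost:8001/v1/chat/completions",
    data=json.dumps({
        "model": "claude-max/claude-sonnet-4-6",
        "messages": [{"role": "user", "content": "auth check"}],
    }).encode(),
    headers={
        "Content-Type": "application/json",
        # OpenAI-protocol convention: the API key travels as a Bearer token.
        "Authorization": f"Bearer {API_KEY}",
    },
)
# urllib.request.urlopen(req) would now send the key with the call.
```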

Cursor takes the same approach, since it supports custom OpenAI-compatible endpoints:

  1. Settings > Models > Add model
  2. Set Base URL to http://localhost:8001/v1
  3. Set API Key to any placeholder
  4. Use any Nova model ID as the model name
Aider points at the gateway via command-line flags:

```sh
aider \
  --openai-api-base http://localhost:8001/v1 \
  --openai-api-key unused \
  --model claude-max/claude-sonnet-4-6
```
To test the gateway directly, send a chat completion with curl:

```sh
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-max/claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello from Nova"}]
  }'
```
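Because the response comes back in OpenAI format, extracting the reply works the same as with any OpenAI endpoint. A sketch over an illustrative response body (field names follow the OpenAI protocol; the content is made up):

```python
import json

# Illustrative OpenAI-format chat completions response.
sample = '''
{"id": "chatcmpl-1", "object": "chat.completion",
 "choices": [{"index": 0,
              "message": {"role": "assistant", "content": "Hello back"},
              "finish_reason": "stop"}]}
'''

reply = json.loads(sample)["choices"][0]["message"]["content"]
print(reply)
```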
End to end, a request flows like this:

```
IDE / tool
  | POST /v1/chat/completions (OpenAI format)
  v
LLM Gateway :8001
  | translates OpenAI -> Nova internal format
  | forwards to registered provider (Anthropic, OpenAI, Ollama, ...)
  v
Provider API
  | response
  v
LLM Gateway
  | translates provider response -> OpenAI format
  v
IDE / tool   <-- looks identical to talking directly to OpenAI
```

The translation lives in llm-gateway/app/openai_compat.py.
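As a toy illustration of what such a shim does (the internal field names here are invented for the sketch, not Nova's actual schema):

```python
# Toy round-trip: OpenAI request -> invented internal format -> OpenAI
# response. Only the OpenAI-side shapes follow a real protocol.

def to_internal(openai_req: dict) -> dict:
    # Model IDs on this page are "provider/model"; split on the first "/".
    provider, _, model = openai_req["model"].partition("/")
    return {
        "provider": provider,           # e.g. "claude-max"
        "model": model,                 # e.g. "claude-sonnet-4-6"
        "turns": openai_req["messages"],
    }

def to_openai(internal_resp: dict, model: str) -> dict:
    # Wrap the provider's text back into OpenAI chat completion shape.
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": internal_resp["text"]},
            "finish_reason": "stop",
        }],
    }

internal = to_internal({
    "model": "claude-max/claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello from Nova"}],
})
print(internal["provider"])  # claude-max
```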