# IDE Integration
Nova exposes an OpenAI-compatible API on the LLM Gateway (`http://localhost:8001/v1`). Any tool that speaks the OpenAI chat-completions protocol works out of the box.
## Continue.dev (VS Code / JetBrains)
### Quick start
- Install the Continue extension
- Open the config: `Cmd+Shift+P` > `Continue: Open config.yaml`
- Add an entry to the `models` list:
```yaml
models:
  - name: Nova (Claude Sonnet)
    provider: openai
    model: claude-max/claude-sonnet-4-6
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
```

`apiBase` is the only setting that matters: it redirects traffic from api.openai.com to Nova.
### Recommended model set
Add multiple entries to switch models from the Continue sidebar. Use `roles` to assign each model to specific tasks:
```yaml
models:
  - name: "Nova: Sonnet (fast)"
    provider: openai
    model: claude-max/claude-sonnet-4-5
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
      - apply
  - name: "Nova: Sonnet (latest)"
    provider: openai
    model: claude-max/claude-sonnet-4-6
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
      - apply
  - name: "Nova: Opus (most capable)"
    provider: openai
    model: claude-max/claude-opus-4
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
  - name: "Nova: GPT-4o"
    provider: openai
    model: openai/gpt-4o
    apiBase: http://localhost:8001/v1
    roles:
      - chat
      - edit
```
### Verify available models

```sh
curl http://localhost:8001/v1/models | jq '.data[].id'
```

Returns all registered model IDs.
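Model IDs are namespaced as `provider/model` (e.g. `claude-max/claude-sonnet-4-6`). As a sketch of scripting against the endpoint above, assuming that naming convention and a locally running gateway:

```python
import json
from urllib.request import urlopen

def group_by_provider(model_ids):
    """Group gateway model IDs by the provider prefix before the first '/'."""
    groups = {}
    for model_id in model_ids:
        provider, _, name = model_id.partition("/")
        groups.setdefault(provider, []).append(name)
    return groups

def list_model_ids(base="http://localhost:8001/v1"):
    """Fetch registered model IDs; requires the gateway to be running."""
    with urlopen(f"{base}/models") as resp:
        return [m["id"] for m in json.load(resp)["data"]]

# With the gateway up:
# print(group_by_provider(list_model_ids()))
```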
### With API key auth enabled
If `REQUIRE_AUTH=true`, create a key first:
```sh
curl -X POST http://localhost:8000/api/v1/keys \
  -H "X-Admin-Secret: your-admin-secret" \
  -H "Content-Type: application/json" \
  -d '{"name": "continue-dev", "rate_limit_rpm": 120}'
```

Then add `apiKey` to your model entries:
```yaml
models:
  - name: Nova (Claude Sonnet)
    provider: openai
    model: claude-max/claude-sonnet-4-6
    apiBase: http://localhost:8001/v1
    apiKey: sk-nova-your-key-here
    roles:
      - chat
      - edit
```
## Cursor

Same approach: Cursor supports custom OpenAI-compatible endpoints:
- Settings > Models > Add model
- Set Base URL to `http://localhost:8001/v1`
- Set API Key to any placeholder
- Use any Nova model ID as the model name
## Aider (terminal)
```sh
aider \
  --openai-api-base http://localhost:8001/v1 \
  --openai-api-key unused \
  --model claude-max/claude-sonnet-4-6
```
## Raw API (curl / scripts)

```sh
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-max/claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello from Nova"}]
  }'
```
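The same request from Python, using only the standard library (a sketch; it assumes the gateway is running on its default port with auth disabled):

```python
import json
from urllib.request import Request, urlopen

GATEWAY = "http://localhost:8001/v1"  # LLM Gateway base URL

def build_chat_request(model: str, prompt: str) -> Request:
    """Build the same chat-completions request as the curl example."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(
        f"{GATEWAY}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def chat(model: str, prompt: str) -> str:
    """Send the request; requires the gateway to be running."""
    with urlopen(build_chat_request(model, prompt)) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# chat("claude-max/claude-sonnet-4-6", "Hello from Nova")
```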
## How it works

```
IDE / tool
  |  POST /v1/chat/completions (OpenAI format)
  v
LLM Gateway :8001
  |  translates OpenAI -> Nova internal format
  |  forwards to registered provider (Anthropic, OpenAI, Ollama, ...)
  v
Provider API
  |  response
  v
LLM Gateway
  |  translates provider response -> OpenAI format
  v
IDE / tool   <-- looks identical to talking directly to OpenAI
```

The translation lives in `llm-gateway/app/openai_compat.py`.
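As an illustrative sketch of what the translation step does for an Anthropic-bound request (this is not Nova's real code; the function name, prefix handling, and default `max_tokens` here are assumptions):

```python
def openai_to_anthropic(payload: dict) -> dict:
    """Sketch of the OpenAI -> provider translation for Anthropic.

    Anthropic's Messages API takes the system prompt as a top-level
    "system" field rather than as a "system" role message, and
    requires max_tokens.
    """
    system_parts = [m["content"] for m in payload["messages"]
                    if m["role"] == "system"]
    chat_messages = [m for m in payload["messages"] if m["role"] != "system"]
    out = {
        # Strip the provider prefix, e.g. "claude-max/claude-sonnet-4-6"
        # becomes "claude-sonnet-4-6"; IDs without a prefix pass through.
        "model": payload["model"].partition("/")[2] or payload["model"],
        "messages": chat_messages,
        "max_tokens": payload.get("max_tokens", 1024),
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)
    return out
```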