
OpenRouter

OpenRouter is an AI model aggregator that provides unified access to 200+ models from OpenAI, Anthropic, Google, Meta, Mistral, and many others.

Configuration

  • provider (string, required): Set to "openrouter".
  • api_key (string, required): Your OpenRouter API key. Get yours at openrouter.ai/keys.
  • model (string, required): Fully qualified model name in the format provider/organization/model-name.
  • temperature (number, optional): Sampling temperature (0.0-2.0). Defaults to 0.7.
  • max_tokens (number, optional): Maximum output tokens.

Example Configuration

{
  "provider": "openrouter",
  "model": "openrouter/anthropic/claude-sonnet-4",
  "api_key": "sk-or-v1-...",
  "temperature": 0.7
}

Model Selection

OpenRouter uses fully qualified model names:
  • openrouter/anthropic/claude-sonnet-4
  • openrouter/openai/gpt-4o
  • openrouter/google/gemini-2.0-flash
  • openrouter/meta-llama/llama-3.2-90b-vision
  • openrouter/mistralai/mistral-large-2411
Browse available models at openrouter.ai/models
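
Since every model name follows the same three-part format, splitting it is straightforward. This is an illustrative Python sketch (the helper name is ours, not part of nullclaw):

```python
def parse_model_name(qualified: str) -> tuple[str, str, str]:
    """Split a fully qualified model name into (provider, organization, model).

    Raises ValueError if the name does not have three parts.
    """
    parts = qualified.split("/", 2)
    if len(parts) != 3:
        raise ValueError(f"expected provider/organization/model-name, got {qualified!r}")
    provider, organization, model = parts
    return provider, organization, model

# The prefix selects the aggregator; the remainder names the upstream model.
print(parse_model_name("openrouter/anthropic/claude-sonnet-4"))
# → ('openrouter', 'anthropic', 'claude-sonnet-4')
```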

Authentication

OpenRouter uses Bearer token authentication with additional headers:
Authorization: Bearer sk-or-v1-...
HTTP-Referer: https://github.com/nullclaw/nullclaw
X-Title: nullclaw
The HTTP-Referer and X-Title headers are automatically added by the provider.
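
As a sketch of what the provider attaches to each request, the headers above can be assembled like this in Python (illustrative only; the Content-Type header is our assumption for a JSON API, and the function name is ours):

```python
def build_headers(api_key: str) -> dict[str, str]:
    """Assemble the headers the OpenRouter provider sends with every request."""
    return {
        "Authorization": f"Bearer {api_key}",
        # Attribution headers added automatically by the provider:
        "HTTP-Referer": "https://github.com/nullclaw/nullclaw",
        "X-Title": "nullclaw",
        # Assumed: requests carry a JSON body.
        "Content-Type": "application/json",
    }

headers = build_headers("sk-or-v1-...")
```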

Capabilities

  • Streaming: Yes
  • Function Calling: Yes (model-dependent)
  • Vision (images): Yes (model-dependent)
  • System Messages: Yes
  • Tool Calls: Yes

Reasoning Models

OpenRouter supports OpenAI reasoning models (o1, o3, gpt-5):
{
  "provider": "openrouter",
  "model": "openrouter/openai/o3-mini",
  "api_key": "sk-or-v1-...",
  "max_tokens": 100,
  "reasoning_effort": "medium"
}
  • Temperature is automatically omitted for reasoning models
  • max_completion_tokens is sent instead of max_tokens
  • The reasoning_effort parameter is supported

Warmup

The OpenRouter provider includes a warmup feature that pre-establishes the TLS connection:
pub fn warmup(self: *OpenRouterProvider) void {
    const api_key = self.api_key orelse return;
    // auth_hdr (construction elided) carries "Authorization: Bearer <api_key>".
    // Hit the auth endpoint to warm up the connection; the body is discarded.
    const resp = curlGet(self.allocator, WARMUP_URL, auth_hdr) catch return;
    self.allocator.free(resp);
}
The warmup sends a GET request to https://openrouter.ai/api/v1/auth/key, establishing the connection before the first chat request.

Code Example

From src/providers/openrouter.zig:
pub const OpenRouterProvider = struct {
    api_key: ?[]const u8,
    allocator: std.mem.Allocator,

    const BASE_URL = "https://openrouter.ai/api/v1/chat/completions";
    const WARMUP_URL = "https://openrouter.ai/api/v1/auth/key";
    const REFERER = "https://github.com/nullclaw/nullclaw";
    const TITLE = "nullclaw";

    pub fn init(allocator: std.mem.Allocator, api_key: ?[]const u8) OpenRouterProvider {
        return .{
            .api_key = api_key,
            .allocator = allocator,
        };
    }
};

Multi-Turn Conversations

OpenRouter supports full conversation history:
{
  "model": "openrouter/anthropic/claude-sonnet-4",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "What's 2+2?"}
  ]
}

Error Handling

The provider classifies common OpenRouter API errors:
  • error.RateLimited — 429 rate limit exceeded
  • error.ContextLengthExceeded — Context window too large
  • error.InvalidApiKey — Authentication failed
  • error.ApiError — Generic API error
  • error.NoResponseContent — Empty response
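
One plausible mapping from HTTP responses to these error tags, sketched in Python (the real classification lives in the Zig provider; the status-code and substring choices here are our assumptions):

```python
def classify_error(status: int, body: str) -> str:
    """Map an OpenRouter HTTP error response to an error tag."""
    if status == 429:
        return "RateLimited"
    if status in (401, 403):
        return "InvalidApiKey"
    # Context-window errors typically arrive as 400s with a descriptive message.
    if "context length" in body.lower() or "maximum context" in body.lower():
        return "ContextLengthExceeded"
    return "ApiError"
```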

Pricing

OpenRouter charges based on the model you use, and rates vary widely between models. Current pricing is listed alongside each model at openrouter.ai/models.