NullClaw supports 50+ AI providers through a unified vtable-based interface, so you can switch models or providers without changing your agent code.

Provider Interface

All providers implement the Provider vtable interface defined in src/providers/root.zig:
pub const Provider = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    pub const VTable = struct {
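        // Required entry points; parameter lists are elided in this doc.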
        chatWithSystem: *const fn (...) anyerror![]const u8,
        chat: *const fn (...) anyerror!ChatResponse,
        supportsNativeTools: *const fn (ptr: *anyopaque) bool,
        getName: *const fn (ptr: *anyopaque) []const u8,
        deinit: *const fn (ptr: *anyopaque) void,
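
        // Optional capabilities; null when a backend does not implement them.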
        warmup: ?*const fn (ptr: *anyopaque) void = null,
        supports_streaming: ?*const fn (ptr: *anyopaque) bool = null,
        supports_vision: ?*const fn (ptr: *anyopaque) bool = null,
        stream_chat: ?*const fn (...) anyerror!StreamChatResult = null,
    };
};
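
Dispatch always goes through the type-erased ptr plus the vtable, so calling code never names a concrete provider type. Below is a minimal sketch of calling the required entries; the import path and the describe helper are assumptions, and chat / chatWithSystem are omitted because their parameter lists are elided above:
const std = @import("std");
const providers = @import("providers/root.zig"); // import path is an assumption

fn describe(p: providers.Provider) void {
    // Required entries are always present; call them through ptr + vtable.
    std.debug.print("provider: {s}\n", .{p.vtable.getName(p.ptr)});
    std.debug.print("native tools: {}\n", .{p.vtable.supportsNativeTools(p.ptr)});
}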

Core Providers

NullClaw includes dedicated implementations for these major providers:
  • OpenAI — GPT-4o, GPT-5, o1, o3, o4
  • Anthropic — Claude 4 (Sonnet, Opus)
  • OpenRouter — 200+ model aggregator
  • Ollama — Local LLMs (Llama, Mistral, Qwen)
  • Gemini — Google Gemini 2.0, 1.5 Pro
  • Claude CLI — Reuses ~/.claude/ credentials
  • Codex CLI — GitHub Copilot integration
  • OpenAI Codex — Legacy Codex models

Compatible Providers (41)

These providers use the OpenAI-compatible API format:

Major Cloud Providers

  • Groq, Mistral, DeepSeek, xAI, Cerebras, Perplexity, Cohere

Gateways & Aggregators

  • Venice, Vercel AI Gateway, Together AI, Fireworks AI, Hugging Face
  • AIHubMix, SiliconFlow, Chutes, Synthetic, Poe

China Providers

  • Moonshot (Kimi), GLM (Zhipu), Z.AI, MiniMax, Qwen, Qianfan, Doubao

Infrastructure

  • Amazon Bedrock, Cloudflare AI Gateway, GitHub Copilot, NVIDIA NIM, OVHcloud

Local Servers

  • LM Studio, vLLM, llama.cpp, SGLang, Osaurus, LiteLLM

Provider Selection

Select a provider in ~/.nullclaw/config.json:
{
  "provider": "anthropic",
  "model": "claude-sonnet-4",
  "api_key": "sk-ant-..."
}
For OpenRouter:
{
  "provider": "openrouter",
  "model": "openrouter/anthropic/claude-sonnet-4",
  "api_key": "sk-or-..."
}
For local Ollama:
{
  "provider": "ollama",
  "model": "llama3.2",
  "base_url": "http://localhost:11434"
}

Custom Endpoints

Use the custom: prefix for arbitrary OpenAI-compatible endpoints:
{
  "provider": "custom:https://api.example.com/v1",
  "model": "my-model",
  "api_key": "..."
}
Use the anthropic-custom: prefix for Anthropic-format endpoints:
{
  "provider": "anthropic-custom:https://proxy.example.com",
  "model": "claude-sonnet-4"
}

Capabilities

  • supportsNativeTools() — Function calling / tool use
  • supportsStreaming() — SSE streaming responses
  • supportsVision() — Image/multimodal input
  • warmup() — Pre-warm TLS connections
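
Continuing the sketch above, the optional vtable entries can be probed with null checks before use (the probe helper is hypothetical):
fn probe(p: providers.Provider) void {
    if (p.vtable.warmup) |warmup| warmup(p.ptr); // pre-warm TLS early

    // Optional fields default to null when a backend lacks the capability.
    const can_stream = if (p.vtable.supports_streaming) |f| f(p.ptr) else false;
    const can_see = if (p.vtable.supports_vision) |f| f(p.ptr) else false;
    std.debug.print("streaming={} vision={}\n", .{ can_stream, can_see });
}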

Provider-Specific Notes

  • OpenAI: Supports reasoning models (o1, o3, gpt-5) with the reasoning_effort parameter; see the config sketch after this list
  • Anthropic: OAuth tokens (sk-ant-oat01-) use Bearer auth instead of x-api-key
  • Gemini: Supports API keys, OAuth, and Gemini CLI (~/.gemini/oauth_creds.json)
  • Ollama: No authentication required for local servers
  • OpenRouter: Requires HTTP-Referer and X-Title headers
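
For the reasoning note above, a minimal config sketch; placing reasoning_effort as a top-level key in config.json is an assumption (only the parameter name comes from the note):
{
  "provider": "openai",
  "model": "o3",
  "api_key": "sk-...",
  "reasoning_effort": "high"
}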

Next Steps