

NullClaw supports 50+ AI providers through a unified configuration interface. Configure provider credentials, default models, and temperature settings.

Basic Model Configuration

Provider Setup

Configure providers in the models.providers section:
{
  "models": {
    "providers": {
      "openrouter": {
        "api_key": "YOUR_OPENROUTER_API_KEY"
      },
      "openai": {
        "api_key": "YOUR_OPENAI_API_KEY"
      }
    }
  }
}
models.providers (object)
Map of provider name to provider configuration. Each key is a provider identifier (e.g., openrouter, openai, anthropic).

models.providers.<name>.api_key (string, required)
API key for the provider. Can also be set via environment variables such as OPENROUTER_API_KEY.

models.providers.<name>.base_url (string)
Optional custom base URL for API requests. Useful for proxies or OpenAI-compatible providers.

models.providers.<name>.native_tools (boolean, default: true)
Whether the provider supports native OpenAI-style tool calls. Set to false to use the XML tool format via the system prompt.
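For example, a provider entry that routes requests through an OpenAI-compatible proxy and falls back to the XML tool format might look like this sketch (the proxy URL is a placeholder):
{
  "models": {
    "providers": {
      "openai": {
        "api_key": "YOUR_OPENAI_API_KEY",
        "base_url": "https://llm-proxy.example.com/v1",
        "native_tools": false
      }
    }
  }
}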

Default Model Selection

Configure the default model used by agents:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/anthropic/claude-sonnet-4"
      }
    }
  },
  "default_provider": "openrouter"
}
agents.defaults.model.primary (string, default: "openrouter/anthropic/claude-sonnet-4")
Primary model identifier. Format: provider/model-name, or just model-name if using the default provider.

default_provider (string, default: "openrouter")
Default provider used when the model identifier doesn’t specify one.

default_model (string)
Legacy fallback model name (prefer agents.defaults.model.primary).
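Because default_provider fills in a missing provider prefix, the following sketch should resolve to the same model as the example above:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4"
      }
    }
  },
  "default_provider": "openrouter"
}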

Temperature Configuration

Control model creativity and randomness:
{
  "default_temperature": 0.7
}
default_temperature (number, default: 0.7)
Sampling temperature (0.0 to 1.0). Lower values are more deterministic, higher values more creative.

reasoning_effort (string)
Optional reasoning effort level for compatible models (e.g., low, medium, high).
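As a sketch, a setup that favors deterministic output from a reasoning-capable model might combine both settings (this assumes reasoning_effort sits at the top level of the config alongside default_temperature, as its placement in this reference suggests):
{
  "default_temperature": 0.2,
  "reasoning_effort": "high"
}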

Multiple Provider Configuration

You can configure multiple providers for fallback and redundancy:
{
  "models": {
    "providers": {
      "openrouter": {
        "api_key": "sk-or-v1-..."
      },
      "openai": {
        "api_key": "sk-..."
      },
      "anthropic": {
        "api_key": "sk-ant-..."
      }
    }
  },
  "reliability": {
    "fallback_providers": ["openai", "anthropic"],
    "provider_retries": 2,
    "provider_backoff_ms": 500
  }
}
reliability.fallback_providers (array)
Ordered list of fallback provider names to try if the primary fails.

reliability.provider_retries (number, default: 2)
Number of retry attempts per provider before moving to the next fallback.

reliability.provider_backoff_ms (number, default: 500)
Base delay in milliseconds between retry attempts (exponential backoff).
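With the defaults above, a failing request to the primary provider would be retried after roughly 500 ms and again after about 1,000 ms (assuming the backoff doubles per attempt) before NullClaw moves on to openai and then anthropic.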

Supported Providers

NullClaw supports these providers out of the box:
  • openrouter — Multi-provider router (recommended)
  • openai — GPT-4, GPT-3.5, and embedding models
  • anthropic — Claude 3/4 family
  • google — Gemini models
  • mistral — Mistral AI models
  • groq — Fast inference for open models
  • ollama — Local model hosting
  • cohere — Cohere models
  • together — Together AI models
  • And 40+ more compatible providers

Environment Variable Overrides

You can set API keys via environment variables instead of the config file:
export OPENROUTER_API_KEY="sk-or-v1-..."
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
Environment variables take precedence over config file values.

Example: Local Model with Ollama

Run models locally using Ollama:
{
  "models": {
    "providers": {
      "ollama": {
        "base_url": "http://localhost:11434"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3"
      }
    }
  },
  "default_provider": "ollama"
}
No API key is required for Ollama. Just ensure the Ollama server is running locally.
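Before pointing NullClaw at Ollama, you can pull the model and confirm the server is reachable using the standard Ollama CLI (the model tag must match the primary setting above):
ollama pull llama3
ollama serve
curl http://localhost:11434/api/tags
The last command queries Ollama's model-listing endpoint; if it returns a JSON list that includes llama3, the ollama/llama3 configuration above should work.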