The OpenAI provider supports all OpenAI chat models, including GPT-4o, GPT-5, and the o-series reasoning models (o1, o3, o4).

Configuration

provider (string, required)
Set to "openai"

api_key (string, required)
OpenAI API key. Get yours at platform.openai.com/api-keys

base_url (string)
Custom base URL for proxies (defaults to https://api.openai.com/v1/chat/completions)

model (string, required)
Model name: gpt-4o, gpt-5, o1, o3-mini, etc.

temperature (number)
Sampling temperature (0.0-2.0). Defaults to 0.7. Omitted for reasoning models unless reasoning_effort is set to "none".

max_tokens (number)
Maximum output tokens. For reasoning models (o1, o3, gpt-5), this is sent as the max_completion_tokens field.

reasoning_effort (string)
Reasoning effort for o1/o3/gpt-5 models: "low", "medium", "high", or "none" to re-enable temperature.

Example Configuration

{
  "provider": "openai",
  "model": "gpt-4o",
  "api_key": "sk-proj-...",
  "temperature": 0.7
}
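
Optional fields can be set in the same block. For example, a configuration that also caps output length (the values here are illustrative, not recommendations):
{
  "provider": "openai",
  "model": "gpt-4o",
  "api_key": "sk-proj-...",
  "temperature": 0.7,
  "max_tokens": 1024
}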

Reasoning Models

For o1, o3, o4, and gpt-5 models:
{
  "provider": "openai",
  "model": "o3-mini",
  "api_key": "sk-proj-...",
  "max_tokens": 100,
  "reasoning_effort": "medium"
}
Key Differences:
  • Temperature is automatically omitted (unless reasoning_effort is "none")
  • Uses max_completion_tokens instead of max_tokens in the API request (see the sketch below)
  • Supports the reasoning_effort parameter (low, medium, high)
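
As a sketch of what this means on the wire, the configuration above would translate into a request body along the following lines, assuming the standard OpenAI Chat Completions format (the exact payload nullclaw builds may differ, and the messages array is just an example):
{
  "model": "o3-mini",
  "messages": [
    { "role": "user", "content": "Hello" }
  ],
  "max_completion_tokens": 100,
  "reasoning_effort": "medium"
}
Note that temperature is absent and the configured max_tokens value is sent as max_completion_tokens. Setting reasoning_effort to "none" restores the regular temperature behavior.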

Supported Models

  • GPT-4o: gpt-4o, gpt-4o-mini, gpt-4o-2024-05-13
  • GPT-5: gpt-5, gpt-5.1, gpt-5.2-turbo
  • o-series (reasoning): o1, o1-mini, o1-preview, o3, o3-mini, o4-mini
  • Codex: codex-mini, codex-mini-latest
  • Legacy: gpt-4, gpt-4-turbo, gpt-3.5-turbo

Capabilities

Feature            Support
Streaming          Yes
Function Calling   Yes
Vision (images)    Yes (gpt-4o)
System Messages    Yes
Tool Calls         Yes

Authentication

The OpenAI provider uses Bearer token authentication:
Authorization: Bearer sk-proj-...
API keys are read from:
  1. api_key field in config
  2. OPENAI_API_KEY environment variable

Custom Base URL

For proxies or custom endpoints:
{
  "provider": "openai",
  "model": "gpt-4o",
  "api_key": "sk-proj-...",
  "base_url": "https://proxy.example.com/v1/chat/completions"
}

Code Example

From src/providers/openai.zig:
pub const OpenAiProvider = struct {
    api_key: ?[]const u8,
    allocator: std.mem.Allocator,

    // Default endpoint, used unless a custom base_url is configured.
    const BASE_URL = "https://api.openai.com/v1/chat/completions";

    pub fn init(allocator: std.mem.Allocator, api_key: ?[]const u8) OpenAiProvider {
        return .{
            .api_key = api_key,
            .allocator = allocator,
        };
    }

    // Wraps the concrete struct in the generic Provider interface via its vtable.
    pub fn provider(self: *OpenAiProvider) Provider {
        return .{
            .ptr = @ptrCast(self),
            .vtable = &vtable,
        };
    }
};
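
A minimal usage sketch based on the excerpt above (the Provider interface and the chat pipeline it feeds into live elsewhere in the codebase, so the main function here is illustrative only):
const std = @import("std");

pub fn main() !void {
    // Construct the concrete provider; the key string is a placeholder.
    var openai = OpenAiProvider.init(std.heap.page_allocator, "sk-proj-...");

    // Obtain the generic Provider interface backed by the struct's vtable.
    const p = openai.provider();
    _ = p; // hand this to the rest of the chat pipeline
}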

Error Handling

The provider classifies common OpenAI API errors (a handling sketch follows the list):
  • error.RateLimited — 429 rate limit exceeded
  • error.ContextLengthExceeded — Context window too large
  • error.InvalidApiKey — Authentication failed
  • error.ApiError — Generic API error
  • error.NoResponseContent — Empty response
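
These tags can be switched on at the call site. The sketch below is illustrative only: the provider call that actually produces them is not shown on this page, so result stands in for its return value.
const std = @import("std");

// Illustrative dispatch over the error tags above; `result` is a stand-in for
// the return value of the provider's chat call.
fn report(result: anyerror![]const u8) anyerror![]const u8 {
    return result catch |err| {
        switch (err) {
            error.RateLimited => std.log.warn("429: rate limited, retry with backoff", .{}),
            error.ContextLengthExceeded => std.log.warn("prompt exceeds the context window", .{}),
            error.InvalidApiKey => std.log.err("authentication failed; check api_key / OPENAI_API_KEY", .{}),
            error.ApiError => std.log.err("generic OpenAI API error", .{}),
            error.NoResponseContent => std.log.warn("empty response from the API", .{}),
            else => {},
        }
        return err;
    };
}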