The OpenAI provider supports all OpenAI chat models, including GPT-4o, GPT-5, and the o-series reasoning models (o1, o3, o4).
## Configuration

- `provider`: set to `"openai"`.
- `api_key`: OpenAI API key. Get yours at platform.openai.com/api-keys.
- `base_url`: custom base URL for proxies (defaults to `https://api.openai.com/v1/chat/completions`).
- `model`: model name: `gpt-4o`, `gpt-5`, `o1`, `o3-mini`, etc.
- `temperature`: sampling temperature (0.0-2.0). Defaults to `0.7`. Omitted for reasoning models unless `reasoning_effort: "none"`.
- `max_tokens`: maximum output tokens. For reasoning models (o1, o3, gpt-5), sent as the `max_completion_tokens` field.
- `reasoning_effort`: reasoning effort for o1/o3/gpt-5 models: `"low"`, `"medium"`, `"high"`, or `"none"` to enable temperature.

## Example Configuration
## Reasoning Models

For o1, o3, o4, and gpt-5 models:

- Temperature is automatically omitted (unless `reasoning_effort: "none"`)
- Uses `max_completion_tokens` instead of `max_tokens` in the API request
- Supports the `reasoning_effort` parameter (`low`, `medium`, `high`)
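The parameter handling above can be sketched as follows. This is an illustrative Python sketch, not nullclaw's actual implementation (which is in Zig); the helper names and the model-prefix check are assumptions:

```python
def is_reasoning_model(model: str) -> bool:
    """Heuristic: treat o1/o3/o4 and gpt-5 families as reasoning models."""
    return model.startswith(("o1", "o3", "o4", "gpt-5"))

def build_params(model, temperature=0.7, max_tokens=None, reasoning_effort=None):
    """Assemble the model-dependent fields of a chat-completions request body."""
    params = {"model": model}
    if is_reasoning_model(model):
        # Reasoning models take max_completion_tokens, not max_tokens.
        if max_tokens is not None:
            params["max_completion_tokens"] = max_tokens
        if reasoning_effort == "none":
            # "none" re-enables temperature for reasoning models.
            params["temperature"] = temperature
        elif reasoning_effort is not None:
            params["reasoning_effort"] = reasoning_effort
    else:
        params["temperature"] = temperature
        if max_tokens is not None:
            params["max_tokens"] = max_tokens
    return params
```

For example, `build_params("o3-mini", max_tokens=1000, reasoning_effort="high")` omits `temperature` and emits `max_completion_tokens`, while the same call with `gpt-4o` emits `temperature` and `max_tokens`.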
## Supported Models

- GPT-4o: `gpt-4o`, `gpt-4o-mini`, `gpt-4o-2024-05-13`
- GPT-5: `gpt-5`, `gpt-5.1`, `gpt-5.2-turbo`
- o-series (reasoning): `o1`, `o1-mini`, `o1-preview`, `o3`, `o3-mini`, `o4-mini`
- Codex: `codex-mini`, `codex-mini-latest`
- Legacy: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
## Capabilities
| Feature | Support |
|---|---|
| Streaming | Yes |
| Function Calling | Yes |
| Vision (images) | Yes (gpt-4o) |
| System Messages | Yes |
| Tool Calls | Yes |
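For vision input with gpt-4o, a chat message can mix text and image parts. This request-body fragment follows the public OpenAI chat-completions format (the URL is a placeholder):

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "What is in this image?" },
        { "type": "image_url", "image_url": { "url": "https://example.com/photo.png" } }
      ]
    }
  ]
}
```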
## Authentication

The OpenAI provider uses Bearer token authentication. The key is read from the `api_key` field in the config, or from the `OPENAI_API_KEY` environment variable.
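A sketch of the key lookup and header construction. The helper names (`resolve_api_key`, `auth_headers`) are illustrative, not nullclaw's API; only the lookup order and header shape come from the text above:

```python
import os

def resolve_api_key(config: dict) -> str:
    """Prefer the api_key config field, then fall back to OPENAI_API_KEY."""
    key = config.get("api_key") or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise ValueError("no OpenAI API key configured")
    return key

def auth_headers(config: dict) -> dict:
    """Build the Bearer-token Authorization header sent with each request."""
    return {"Authorization": f"Bearer {resolve_api_key(config)}"}
```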
## Custom Base URL

For proxies or custom endpoints, set a custom base URL in the provider config.

## Code Example

From `src/providers/openai.zig`:
## Error Handling

The provider classifies common OpenAI API errors:

- `error.RateLimited`: 429 rate limit exceeded
- `error.ContextLengthExceeded`: context window too large
- `error.InvalidApiKey`: authentication failed
- `error.ApiError`: generic API error
- `error.NoResponseContent`: empty response
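The classification above can be sketched roughly as follows. The status-code and body-matching rules here are assumptions for illustration; the actual logic lives in `src/providers/openai.zig`:

```python
def classify_error(status: int, body: str) -> str:
    """Map an OpenAI API error response to a provider error name (illustrative)."""
    if status == 429:
        return "RateLimited"          # rate limit exceeded
    if status == 401:
        return "InvalidApiKey"        # authentication failed
    if "context_length_exceeded" in body:
        return "ContextLengthExceeded"  # prompt too large for the context window
    return "ApiError"                 # generic fallback
```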