Documentation Index

Fetch the complete documentation index at: https://mintlify.com/nullclaw/nullclaw/llms.txt

Use this file to discover all available pages before exploring further.

Overview

NullClaw is built on a vtable-driven pluggable architecture where every subsystem implements a common interface pattern. This design enables near-zero-overhead abstraction with full runtime swappability: change any component via configuration, with no code changes required.
678 KB binary · <2 ms startup · 3,230+ tests · 22+ providers · 18 channels · Pluggable everything

Core Design Principles

1. Vtable Interface System

Every subsystem uses Zig’s type-erased interface pattern (vtable-based polymorphism):
pub const Provider = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    pub const VTable = struct {
        chatWithSystem: *const fn(...) anyerror![]const u8,
        chat: *const fn(...) anyerror!ChatResponse,
        supportsNativeTools: *const fn(...) bool,
        getName: *const fn(...) []const u8,
        deinit: *const fn(...) void,
        // Optional methods...
    };
};
This pattern provides:
  • Minimal runtime overhead: a single function-pointer indirection per call, no deep dispatch chains
  • Compile-time safety: each implementation is type-checked against the VTable field types at build time
  • Runtime flexibility: swap implementations via config
  • No allocator overhead: vtables are static constants, so wiring an interface allocates nothing
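As a sketch of how callers go through such an interface (the exact parameter lists are elided in the struct above, so this assumes getName takes only the erased pointer):

```zig
// Hypothetical convenience wrapper on the Provider struct shown above:
// one pointer load, then an indirect call through the vtable.
pub fn getName(self: Provider) []const u8 {
    return self.vtable.getName(self.ptr);
}
```

At the call site, `p.getName()` then works for whichever implementation was selected from config, since every implementation's functions were checked against the `VTable` field types when its vtable constant was built.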

2. No Dependencies

NullClaw depends only on:
  • libc (standard C library)
  • sqlite (optional, for memory backends)
No VM, no runtime, no framework. The entire binary is 678 KB in release mode.

3. Static Binary

The entire runtime compiles to a single static binary:
  • Boots in <2 ms on Apple Silicon
  • Runs on $5 hardware (any ARM/x86/RISC-V board)
  • Deploys with scp — no installation, no package manager

Subsystem Table

Every subsystem implements a vtable interface:
  • AI Models (Provider): ships with 22+ providers (OpenRouter, Anthropic, OpenAI, Ollama, Venice, Groq, Mistral, xAI, DeepSeek, Together, Fireworks, Perplexity, Cohere, Bedrock, etc.); extend via custom:https://your-api.com (any OpenAI-compatible API)
  • Channels (Channel): ships with CLI, Telegram, Signal, Discord, Slack, iMessage, Matrix, WhatsApp, Webhook, IRC, Lark/Feishu, OneBot, Line, DingTalk, Email, Nostr, QQ, MaixCam, Mattermost; extend with any messaging API
  • Memory (Memory): ships with SQLite (hybrid FTS5 + vector cosine similarity) and Markdown; extend with any persistence backend
  • Tools (Tool): ships with shell, file_read, file_write, file_edit, memory_store, memory_recall, memory_forget, browser_open, screenshot, composio, http_request, hardware_info, hardware_memory, and more; extend with any capability
  • Observability (Observer): ships with Noop, Log, File, Multi; extend with Prometheus, OTel
  • Runtime (RuntimeAdapter): ships with Native, Docker (sandboxed), WASM (wasmtime); extend with any runtime
  • Security (Sandbox): ships with Landlock, Firejail, Bubblewrap, Docker, auto-detect; extend with any sandbox backend
  • Identity (IdentityConfig): ships with OpenClaw (markdown), AIEOS v1.1 (JSON); extend with any identity format
  • Tunnel (Tunnel): ships with None, Cloudflare, Tailscale, ngrok, Custom; extend with any tunnel binary
  • Heartbeat (Engine): ships with HEARTBEAT.md periodic tasks
  • Skills (Loader): ships with TOML manifests + SKILL.md instructions; extend with community skill packs
  • Peripherals (Peripheral): ships with Serial, Arduino, Raspberry Pi GPIO, STM32/Nucleo; extend with any hardware interface
  • Cron (Scheduler): ships with cron expressions + one-shot timers with JSON persistence

Data Flow

Message Flow (Channel → Agent)

  1. Channel receives message (Telegram webhook, Discord WebSocket, CLI stdin, etc.)
  2. Channel vtable normalizes to ChannelMessage struct
  3. Gateway/Daemon routes to agent session
  4. Agent loop appends to conversation context
  5. Provider vtable sends to LLM API
  6. Response parsing extracts text/tool calls
  7. Tool dispatch executes via Tool vtable
  8. Memory storage persists via Memory vtable
  9. Outbound delivery via Channel.send()
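The nine steps above can be condensed into a single loop. The sketch below is illustrative only; the type and method names are assumptions, and the real loop also handles sessions, streaming, and error recovery:

```zig
// Illustrative shape of the agent loop (names are hypothetical).
fn handleMessage(agent: *Agent, msg: ChannelMessage) !void {
    try agent.context.append(msg);                        // step 4
    var resp = try agent.provider.chat(agent.context);    // step 5
    while (resp.tool_calls.len > 0) {                     // steps 6-7
        for (resp.tool_calls) |call| {
            const result = try agent.tools.dispatch(call);
            try agent.context.appendToolResult(call, result);
        }
        resp = try agent.provider.chat(agent.context);
    }
    try agent.memory.store(msg, resp.text);               // step 8
    try agent.channel.send(msg.sender, resp.text, &.{});  // step 9
}
```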

Security Layers

Every request passes through:
  1. Pairing check (gateway authentication)
  2. Channel allowlist (sender validation)
  3. Workspace scoping (filesystem boundaries)
  4. Sandbox isolation (OS-level containment)
  5. Audit logging (signed event trail)
All subsystems are fail-safe by default: deny-by-default access, explicit opt-in for elevated permissions, structured error propagation.

Extension Points

Adding a New Provider

const MyProvider = struct {
    api_key: []const u8,

    pub fn chatWithSystem(
        ptr: *anyopaque,
        allocator: std.mem.Allocator,
        system_prompt: ?[]const u8,
        message: []const u8,
        model: []const u8,
        temperature: f64,
    ) anyerror![]const u8 {
        // Recover the typed implementation from the type-erased pointer.
        const self: *MyProvider = @ptrCast(@alignCast(ptr));
        _ = self;
        // Call your API here and return the response text...
        return error.NotImplemented;

    pub fn provider(self: *MyProvider) Provider {
        return .{ .ptr = @ptrCast(self), .vtable = &vtable };
    }

    // chat, supportsNativeTools, getName, and deinit are implemented
    // with the same erased-pointer pattern as chatWithSystem above.
    pub const vtable = Provider.VTable{
        .chatWithSystem = chatWithSystem,
        .chat = chat,
        .supportsNativeTools = supportsNativeTools,
        .getName = getName,
        .deinit = deinit,
    };
};
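Wiring the provider up is then one call to the `provider()` adapter; this usage sketch assumes hypothetical surrounding code:

```zig
// Construct the concrete provider, then hand out the erased handle.
var my = MyProvider{ .api_key = "..." };
const p: Provider = my.provider();
// All calls now dispatch through MyProvider's static vtable, e.g.:
// p.vtable.chatWithSystem(p.ptr, allocator, null, "hello", model, 0.7)
```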

Adding a New Channel

const MyChannel = struct {
    config: Config,

    pub fn start(ptr: *anyopaque) anyerror!void {
        const self: *MyChannel = @ptrCast(@alignCast(ptr));
        // Start listening...
    }

    pub fn send(
        ptr: *anyopaque,
        target: []const u8,
        message: []const u8,
        media: []const []const u8,
    ) anyerror!void {
        const self: *MyChannel = @ptrCast(@alignCast(ptr));
        // Send message...
    }

    pub fn channel(self: *MyChannel) Channel {
        return .{ .ptr = @ptrCast(self), .vtable = &vtable };
    }

    // stop, name, and healthCheck are implemented with the same
    // erased-pointer pattern as start and send above.
    pub const vtable = Channel.VTable{
        .start = start,
        .stop = stop,
        .send = send,
        .name = name,
        .healthCheck = healthCheck,
    };
};
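The same adapter pattern applies at startup; a hypothetical wiring sequence:

```zig
// Construct the concrete channel, then hand out the erased handle.
var my = MyChannel{ .config = config };
const ch: Channel = my.channel();
try ch.vtable.start(ch.ptr); // begin listening for inbound messages
```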

Performance Characteristics

  • Binary size: 678 KB (zero dependencies, Zig comptime optimization)
  • Startup time: <2 ms (no runtime init, static binary)
  • Memory usage: ~1 MB (no GC, explicit allocations)
  • Test coverage: 3,230+ tests (comprehensive subsystem validation)
  • Dispatch overhead: one indirect call (function pointers bound at init)
NullClaw’s vtable pattern adds negligible overhead compared to direct function calls: each vtable is resolved once at init time, after which every call is a single indirect jump through a function pointer.

Next Steps

Providers

Learn about AI model provider integration

Channels

Explore messaging platform channels

Tools

Understand tool execution system

Memory

Deep dive into memory backends