
NullClaw’s memory system stores conversation history, facts, and context. Configure the storage backend, vector search, and memory lifecycle policies.

Memory Profiles

Memory profiles provide preset configurations for common use cases:
{
  "memory": {
    "profile": "markdown_only"
  }
}
memory.profile
string
default:"markdown_only"
Memory profile preset:
  • markdown_only — File-based markdown memory (default, zero setup)
  • local_keyword — SQLite keyword-only search
  • local_hybrid — SQLite + vector search hybrid
  • postgres_keyword — PostgreSQL keyword-only
  • postgres_hybrid — PostgreSQL + vector hybrid
  • minimal_none — Stateless, no persistent memory
  • custom — Manual configuration (no profile defaults)
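For example, selecting a database-backed profile is a single line (a sketch; any profile name from the list above works the same way):

```json
{
  "memory": {
    "profile": "postgres_hybrid"
  }
}
```

Database-backed profiles still need connection details such as memory.postgres.url, documented below.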

Backend Configuration

Basic Backend Setup

{
  "memory": {
    "backend": "markdown",
    "auto_save": true,
    "citations": "auto"
  }
}
memory.backend
string
default:"markdown"
Memory backend: markdown (file-based), sqlite (local database), postgres (PostgreSQL), redis, api, or none.
memory.auto_save
boolean
default:"true"
Automatically save memory entries after each interaction.
memory.citations
string
default:"auto"
Citation style: auto (show when relevant), always, or never.

Vector Search Configuration

Enable semantic search with vector embeddings:
{
  "memory": {
    "search": {
      "enabled": true,
      "provider": "openai",
      "model": "text-embedding-3-small",
      "dimensions": 1536,
      "fallback_provider": "none",
      "store": {
        "kind": "auto",
        "qdrant_url": "",
        "qdrant_api_key": "",
        "qdrant_collection": "nullclaw_memories",
        "pgvector_table": "memory_embeddings"
      }
    }
  }
}
memory.search.enabled
boolean
default:"true"
Enable vector search capabilities.
memory.search.provider
string
default:"none"
Embedding provider: openai, cohere, voyage, ollama, or none (disables vector search).
memory.search.model
string
default:"text-embedding-3-small"
Embedding model identifier.
memory.search.dimensions
number
default:"1536"
Embedding vector dimensions (must match model output).
memory.search.store.kind
string
default:"auto"
Vector store backend: auto (matches memory backend), sidecar (local file), qdrant, or pgvector.
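As a sketch of a fully local setup, the following pairs the ollama provider with an external Qdrant store. The model name, dimension count, and Qdrant URL are illustrative assumptions, not defaults:

```json
{
  "memory": {
    "search": {
      "enabled": true,
      "provider": "ollama",
      "model": "nomic-embed-text",
      "dimensions": 768,
      "store": {
        "kind": "qdrant",
        "qdrant_url": "http://localhost:6333",
        "qdrant_collection": "nullclaw_memories"
      }
    }
  }
}
```

Remember that dimensions must match the embedding model's output.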

Hybrid Search

Combine keyword and vector search:
{
  "memory": {
    "search": {
      "query": {
        "max_results": 6,
        "min_score": 0.0,
        "merge_strategy": "rrf",
        "rrf_k": 60,
        "hybrid": {
          "enabled": true,
          "vector_weight": 0.7,
          "text_weight": 0.3,
          "candidate_multiplier": 4
        }
      }
    }
  }
}
memory.search.query.hybrid.enabled
boolean
default:"false"
Enable hybrid search combining keyword and vector results.
memory.search.query.hybrid.vector_weight
number
default:"0.7"
Weight for vector search results (0.0 to 1.0).
memory.search.query.hybrid.text_weight
number
default:"0.3"
Weight for keyword search results (0.0 to 1.0).
memory.search.query.merge_strategy
string
default:"rrf"
Result merging strategy: rrf (Reciprocal Rank Fusion, which scores each result by summing 1/(rrf_k + rank) across the keyword and vector result lists) or score (weighted scores using vector_weight and text_weight).
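For instance, to merge by weighted scores instead of rank fusion (the weights shown are illustrative, not the defaults):

```json
{
  "memory": {
    "search": {
      "query": {
        "merge_strategy": "score",
        "hybrid": {
          "enabled": true,
          "vector_weight": 0.6,
          "text_weight": 0.4
        }
      }
    }
  }
}
```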

Memory Lifecycle

Configure memory archival and retention:
{
  "memory": {
    "lifecycle": {
      "hygiene_enabled": true,
      "archive_after_days": 7,
      "purge_after_days": 30,
      "conversation_retention_days": 30,
      "snapshot_enabled": false,
      "auto_hydrate": true
    }
  }
}
memory.lifecycle.hygiene_enabled
boolean
default:"true"
Enable automatic memory hygiene (archival and purging).
memory.lifecycle.archive_after_days
number
default:"7"
Archive memories older than this many days.
memory.lifecycle.purge_after_days
number
default:"30"
Permanently delete memories older than this many days.
memory.lifecycle.conversation_retention_days
number
default:"30"
Retain conversation context for this many days.
memory.lifecycle.auto_hydrate
boolean
default:"true"
Automatically load archived memories when referenced.
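As an example, a longer-retention deployment might archive after a month and purge after a year (the day counts are illustrative, not recommendations):

```json
{
  "memory": {
    "lifecycle": {
      "hygiene_enabled": true,
      "archive_after_days": 30,
      "purge_after_days": 365,
      "auto_hydrate": true
    }
  }
}
```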

PostgreSQL Backend

Use PostgreSQL for distributed deployments:
{
  "memory": {
    "backend": "postgres",
    "postgres": {
      "url": "postgresql://user:pass@localhost:5432/nullclaw",
      "schema": "public",
      "table": "memories",
      "connect_timeout_secs": 30
    },
    "search": {
      "provider": "openai",
      "store": {
        "kind": "pgvector",
        "pgvector_table": "memory_embeddings"
      }
    }
  }
}
memory.postgres.url
string
required
PostgreSQL connection URL.
memory.postgres.schema
string
default:"public"
Database schema name.
memory.postgres.table
string
default:"memories"
Table name for memory storage.

Redis Backend

Use Redis for caching and fast retrieval:
{
  "memory": {
    "backend": "redis",
    "redis": {
      "host": "127.0.0.1",
      "port": 6379,
      "password": "",
      "db_index": 0,
      "key_prefix": "nullclaw",
      "ttl_seconds": 0
    }
  }
}
memory.redis.host
string
default:"127.0.0.1"
Redis server hostname.
memory.redis.port
number
default:"6379"
Redis server port.
memory.redis.key_prefix
string
default:"nullclaw"
Prefix for all Redis keys.
memory.redis.ttl_seconds
number
default:"0"
Time-to-live for entries (0 = no expiry).
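For example, to treat Redis memory as a short-lived cache, set a nonzero TTL (the one-hour value is illustrative):

```json
{
  "memory": {
    "backend": "redis",
    "redis": {
      "host": "127.0.0.1",
      "port": 6379,
      "key_prefix": "nullclaw",
      "ttl_seconds": 3600
    }
  }
}
```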

Advanced Configuration

Chunking and Sync

Control how memory entries are split into chunks before embedding, and set timeouts and retry limits for embedding and vector-store writes:

{
  "memory": {
    "search": {
      "chunking": {
        "max_tokens": 512,
        "overlap": 64
      },
      "sync": {
        "mode": "best_effort",
        "embed_timeout_ms": 15000,
        "vector_timeout_ms": 5000,
        "embed_max_retries": 2,
        "vector_max_retries": 2
      }
    }
  }
}

Response Cache

{
  "memory": {
    "response_cache": {
      "enabled": false,
      "ttl_minutes": 60,
      "max_entries": 5000
    }
  }
}
memory.response_cache.enabled
boolean
default:"false"
Cache LLM responses for identical queries.
memory.response_cache.ttl_minutes
number
default:"60"
How long cached responses remain valid, in minutes.
memory.response_cache.max_entries
number
default:"5000"
Maximum number of responses kept in the cache.

Example: Local Hybrid Setup

Complete configuration for local SQLite with vector search:
{
  "memory": {
    "profile": "local_hybrid",
    "backend": "sqlite",
    "auto_save": true,
    "search": {
      "enabled": true,
      "provider": "openai",
      "model": "text-embedding-3-small",
      "dimensions": 1536,
      "store": {
        "kind": "sidecar"
      },
      "query": {
        "max_results": 6,
        "hybrid": {
          "enabled": true,
          "vector_weight": 0.7,
          "text_weight": 0.3
        }
      }
    },
    "lifecycle": {
      "hygiene_enabled": true,
      "archive_after_days": 7,
      "purge_after_days": 30
    }
  }
}
The local_hybrid profile applies these defaults automatically; you only need to override the values you want to change.
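For example, to keep the profile's defaults but return more search results, only the changed value needs to appear (the value 10 is illustrative):

```json
{
  "memory": {
    "profile": "local_hybrid",
    "search": {
      "query": {
        "max_results": 10
      }
    }
  }
}
```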