Mem0 Memory

Mem0 memory backend for OpenClaw. Platform or self-hosted open-source long-term memory.

Mem0 is a memory plugin for OpenClaw that provides persistent long-term memory across sessions, available as both a cloud service (Mem0 Platform) and a self-hosted open-source deployment. Built by the Mem0 team, who have been building AI memory infrastructure for over three years, it automatically watches conversations, extracts what matters, and brings relevant context back when needed, so your agent remembers your name, preferences, projects, and past decisions even after restarts or context compaction.

The plugin runs two processes on every conversation turn: Auto-Recall searches Mem0 for memories relevant to the current message before the agent responds, and Auto-Capture sends each exchange to Mem0's extraction layer after the agent responds. Mem0 determines what is worth persisting: new facts get stored, outdated ones get updated, duplicates get merged. Both run silently, with no manual configuration required.

Memories are organized into two scopes: long-term (user-scoped, persists across all sessions) and short-term (session-scoped, tracks what you're actively working on). During recall, both scopes are searched, with long-term memories surfaced first. The agent also gets five explicit tools: memory_search, memory_list, memory_store, memory_get, and memory_forget.

The self-hosted option is what makes Mem0 unique among cloud memory plugins: you can bring your own embedder (OpenAI, Ollama), vector store (Qdrant, in-memory), and LLM, keeping everything on your infrastructure. This positions Mem0 as a middle ground between Supermemory's pure cloud approach and Engram's local-only architecture. With nearly 2,000 weekly downloads and comprehensive documentation at docs.mem0.ai, it's a well-supported option in the OpenClaw memory ecosystem.
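The per-turn loop can be sketched as follows. This is an illustrative model only: Mem0Client, searchMemories, captureExchange, and handleTurn are made-up names (not the plugin's real API), and the naive substring match stands in for Mem0's semantic search and extraction layer.

```typescript
interface Memory { id: string; text: string; scope: "long-term" | "session" }

// Hypothetical stand-in for the Mem0 backend; real Mem0 does semantic
// search over embeddings and extracts/merges facts rather than storing raw text.
class Mem0Client {
  private store: Memory[] = [];
  private nextId = 0;

  // Auto-Recall: find memories relevant to the current message.
  searchMemories(query: string, topK = 5): Memory[] {
    const words = query.toLowerCase().split(/\s+/);
    const hits = this.store.filter(m =>
      words.some(w => m.text.toLowerCase().includes(w)));
    // Long-term memories surface before session-scoped ones, as described above.
    hits.sort((a, b) =>
      (a.scope === "long-term" ? -1 : 1) - (b.scope === "long-term" ? -1 : 1));
    return hits.slice(0, topK);
  }

  // Auto-Capture: persist the exchange after the agent responds.
  captureExchange(userMsg: string, scope: Memory["scope"] = "long-term"): Memory {
    const mem = { id: String(this.nextId++), text: userMsg, scope };
    this.store.push(mem);
    return mem;
  }
}

// One conversation turn: recall before responding, capture after.
function handleTurn(client: Mem0Client, userMsg: string): string {
  const recalled = client.searchMemories(userMsg);  // before the agent responds
  const context = recalled.map(m => m.text).join("; ");
  const reply = context ? `(recalling: ${context})` : "(no prior context)";
  client.captureExchange(userMsg);                  // after the agent responds
  return reply;
}
```

The second turn in a conversation can already recall facts stored by the first, which is the whole point: nothing from the context window needs to survive for the memory to persist.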

Tags: memory, cloud, self-hosted

Use Cases

  • Personal AI assistant that remembers preferences, project context, and decisions across sessions
  • Self-hosted memory backend for privacy-sensitive deployments using Ollama + Qdrant
  • Multi-user applications where each user needs isolated memory via different userIds
  • Development assistant that maintains context about codebase decisions and debugging sessions
  • Customer-facing bot that remembers individual user preferences and history

Tips

  • Install in 30 seconds: 'openclaw plugins install @mem0/openclaw-mem0' then add your API key
  • Always set a unique userId — don't leave it as 'default' or all conversations share memory
  • For privacy, use self-hosted mode with Ollama for embeddings and Qdrant for vector storage
  • Use the scope parameter on memory_search to query 'long-term', 'session', or 'all' memories
  • Run 'openclaw mem0 stats' to monitor memory usage and growth
  • Set enableGraph to true for entity relationship tracking (cloud mode only)
  • Use customInstructions to control what types of information get extracted
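For example, the last two tips might be combined in a config entry like the following. The option names enableGraph and customInstructions come from the tips above; the values and the instruction wording are illustrative, and the nesting follows the configuration examples later in this page.

```json
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "apiKey": "${MEM0_API_KEY}",
          "userId": "alice",
          "enableGraph": true,
          "customInstructions": "Extract durable facts only: preferences, project decisions, and names. Skip small talk."
        }
      }
    }
  }
}
```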

Known Issues & Gotchas

  • Cloud mode sends all conversation data to Mem0 servers — evaluate privacy implications
  • The userId field is something YOU define (any string), not something you look up in the Mem0 dashboard
  • Default userId is 'default' which means all users share the same memory space — always set a unique userId
  • Self-hosted mode defaults to OpenAI for embeddings — costs money unless you configure Ollama
  • Auto-capture processes every turn, which can get expensive on high-volume deployments with cloud LLMs
  • Memory doesn't automatically decay — old irrelevant memories may clutter recall over time
  • Some Reddit reviewers flag cloud mode as a privacy concern because conversation data is transmitted to Mem0's servers

Alternatives

  • Supermemory
  • Engram Memory
  • QMD (built-in)
  • MemWyre Memory

Community Feedback

After the recent launch of OpenClaw, I noticed many developers started complaining about the default memory. So, I built the Mem0 memory plugin that gives your AI agents persistent memory across sessions, setup in less than 30 seconds.

— Mem0 Blog

Mem0 with local Ollama embeddings is a solid fit for OpenClaw — it adds persistent, semantically searchable memory while keeping everything on your infrastructure.

— Reddit r/openclaw

B tier — Mem0. Great automation, kills your privacy and costs up to 7 cents per message.

— Reddit r/clawdbot

Frequently Asked Questions

Can I self-host Mem0 without sending data to the cloud?

Yes. Set mode to 'open-source' and bring your own embedder, vector store, and LLM. You can use Ollama for embeddings, Qdrant for vectors, and any OpenAI-compatible LLM. No Mem0 API key needed for self-hosted deployments.

How much does Mem0 cost?

Cloud mode requires a Mem0 API key from app.mem0.ai — pricing varies by usage. Self-hosted mode is free but requires your own LLM/embedding costs (free with Ollama). One Reddit reviewer estimated cloud costs at up to 7 cents per message for heavy usage.

What's the difference between long-term and short-term memory?

Long-term memories are user-scoped and persist across all sessions — your name, preferences, project structure, decisions. Short-term memories are session-scoped and track what you're actively working on. Both are searched during recall, with long-term memories surfaced first.

What is the userId field?

The userId is a string YOU choose to uniquely identify the user whose memories are stored. It's not something you look up — you define it yourself (e.g., 'alice', an email, or a UUID). Different userIds create separate memory namespaces.

Does memory survive context compaction?

Yes. Unlike markdown-based memory in the context window, Mem0 stores memories externally. Compaction, token limits, and session restarts don't affect stored memories. Auto-Recall re-injects relevant memories fresh on every turn.

How does Mem0 compare to Supermemory and Engram?

Mem0 sits between the two: it offers both cloud and self-hosted options (Supermemory is cloud-only, Engram is local-only). Supermemory has better benchmarks and temporal decay. Engram has more features and stores plain markdown. Mem0 is the versatile middle ground.

What memory tools does the agent get?

Five tools: memory_search (semantic query), memory_list (list all memories), memory_store (explicitly save a fact), memory_get (retrieve by ID), and memory_forget (delete by ID or query). Each supports a scope parameter for long-term, session, or all memories.
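A store-then-forget exchange might look like the following tool calls. The argument shapes and the memory ID are illustrative; only the tool names and the scope parameter come from the plugin's documentation above.

```json
{ "tool": "memory_search", "arguments": { "query": "preferred language", "scope": "long-term" } }
{ "tool": "memory_store", "arguments": { "text": "Alice prefers TypeScript", "scope": "long-term" } }
{ "tool": "memory_forget", "arguments": { "id": "mem_abc123" } }
```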

Configuration Examples

Cloud mode (Mem0 Platform)

{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "apiKey": "${MEM0_API_KEY}",
          "userId": "alice",
          "autoRecall": true,
          "autoCapture": true,
          "topK": 5
        }
      }
    }
  }
}

Self-hosted with Ollama + Qdrant

{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "alice",
          "oss": {
            "embedder": { "provider": "ollama", "config": { "model": "nomic-embed-text" } },
            "vectorStore": { "provider": "qdrant", "config": { "host": "localhost", "port": 6333 } },
            "llm": { "provider": "openai", "config": { "model": "gpt-4o" } }
          }
        }
      }
    }
  }
}

CLI memory commands

# Search all memories
openclaw mem0 search "what languages does the user know"

# Search only long-term memories
openclaw mem0 search "user preferences" --scope long-term

# View stats
openclaw mem0 stats

Installation

openclaw plugins install @mem0/openclaw-mem0