Mem0 Memory
Mem0 memory backend for OpenClaw. Long-term memory via the hosted Mem0 Platform or a self-hosted open-source stack.
Tags: memory, cloud, self-hosted
Use Cases
- Personal AI assistant that remembers preferences, project context, and decisions across sessions
- Self-hosted memory backend for privacy-sensitive deployments using Ollama + Qdrant
- Multi-user applications where each user needs isolated memory via different userIds
- Development assistant that maintains context about codebase decisions and debugging sessions
- Customer-facing bot that remembers individual user preferences and history
Tips
- Install in 30 seconds: 'openclaw plugins install @mem0/openclaw-mem0' then add your API key
- Always set a unique userId — don't leave it as 'default' or all conversations share memory
- For privacy, use self-hosted mode with Ollama for embeddings and Qdrant for vector storage
- Use the scope parameter on memory_search to query 'long-term', 'session', or 'all' memories
- Run 'openclaw mem0 stats' to monitor memory usage and growth
- Enable enableGraph for entity relationship tracking (cloud mode only)
- Use customInstructions to control what types of information get extracted
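The last two tips can live side by side in the plugin config. A hedged sketch, assuming these keys sit alongside the other options shown under Configuration Examples (the customInstructions wording here is purely illustrative):

```json
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "apiKey": "${MEM0_API_KEY}",
          "userId": "alice",
          "enableGraph": true,
          "customInstructions": "Only extract durable facts: preferences, project decisions, and tooling choices. Ignore small talk."
        }
      }
    }
  }
}
```

Note that enableGraph only takes effect in cloud mode, per the tip above.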
Known Issues & Gotchas
- Cloud mode sends all conversation data to Mem0 servers — evaluate privacy implications
- The userId field is something YOU define (any string), not something you look up in the Mem0 dashboard
- Default userId is 'default' which means all users share the same memory space — always set a unique userId
- Self-hosted mode defaults to OpenAI for embeddings — costs money unless you configure Ollama
- Auto-capture processes every turn, which can get expensive on high-volume deployments with cloud LLMs
- Memory doesn't automatically decay — old irrelevant memories may clutter recall over time
- Reddit reviewers flag cloud mode as a privacy concern because conversation data is transmitted to Mem0's servers
Alternatives
- Supermemory
- Engram Memory
- QMD (built-in)
- MemWyre Memory
Community Feedback
After the recent launch of OpenClaw, I noticed many developers started complaining about the default memory. So, I built the Mem0 memory plugin that gives your AI agents persistent memory across sessions, setup in less than 30 seconds.
— Mem0 Blog
Mem0 with local Ollama embeddings is a solid fit for OpenClaw — it adds persistent, semantically searchable memory while keeping everything on your infrastructure.
— Reddit r/openclaw
B tier — Mem0. Great automation, kills your privacy and costs up to 7 cents per message.
— Reddit r/clawdbot
Frequently Asked Questions
Can I self-host Mem0 without sending data to the cloud?
Yes. Set mode to 'open-source' and bring your own embedder, vector store, and LLM. You can use Ollama for embeddings, Qdrant for vectors, and any OpenAI-compatible LLM. No Mem0 API key needed for self-hosted deployments.
How much does Mem0 cost?
Cloud mode requires a Mem0 API key from app.mem0.ai — pricing varies by usage. Self-hosted mode is free, but you cover your own LLM/embedding costs (zero with local Ollama). One Reddit reviewer estimated cloud costs at up to 7 cents per message for heavy usage.
What's the difference between long-term and short-term memory?
Long-term memories are user-scoped and persist across all sessions — your name, preferences, project structure, decisions. Short-term memories are session-scoped and track what you're actively working on. Both are searched during recall, with long-term memories surfaced first.
What is the userId field?
The userId is a string YOU choose to uniquely identify the user whose memories are stored. It's not something you look up — you define it yourself (e.g., 'alice', an email, or a UUID). Different userIds create separate memory namespaces.
Does memory survive context compaction?
Yes. Unlike markdown-based memory in the context window, Mem0 stores memories externally. Compaction, token limits, and session restarts don't affect stored memories. Auto-Recall re-injects relevant memories fresh on every turn.
How does Mem0 compare to Supermemory and Engram?
Mem0 sits between the two: it offers both cloud and self-hosted options (Supermemory is cloud-only, Engram is local-only). Supermemory has better benchmarks and temporal decay. Engram has more features and stores plain markdown. Mem0 is the versatile middle ground.
What memory tools does the agent get?
Five tools: memory_search (semantic query), memory_list (list all memories), memory_store (explicitly save a fact), memory_get (retrieve by ID), and memory_forget (delete by ID or query). Each supports a scope parameter for long-term, session, or all memories.
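As a concrete illustration, an agent invocation of memory_search with the scope parameter might look like the following. The argument shape is an assumption for illustration only — OpenClaw's actual tool-call envelope may differ, and topK as a per-call argument (rather than the config-level setting) is likewise assumed:

```json
{
  "tool": "memory_search",
  "arguments": {
    "query": "what databases does this project use",
    "scope": "long-term",
    "topK": 5
  }
}
```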
Configuration Examples
Cloud mode (Mem0 Platform)
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "apiKey": "${MEM0_API_KEY}",
          "userId": "alice",
          "autoRecall": true,
          "autoCapture": true,
          "topK": 5
        }
      }
    }
  }
}
Self-hosted with Ollama + Qdrant
{
  "plugins": {
    "entries": {
      "openclaw-mem0": {
        "enabled": true,
        "config": {
          "mode": "open-source",
          "userId": "alice",
          "oss": {
            "embedder": { "provider": "ollama", "config": { "model": "nomic-embed-text" } },
            "vectorStore": { "provider": "qdrant", "config": { "host": "localhost", "port": 6333 } },
            "llm": { "provider": "ollama", "config": { "model": "llama3.1" } }
          }
        }
      }
    }
  }
}
CLI memory commands
# Search all memories
openclaw mem0 search "what languages does the user know"
# Search only long-term memories
openclaw mem0 search "user preferences" --scope long-term
# View stats
openclaw mem0 stats
Installation
openclaw plugins install @mem0/openclaw-mem0