Inside-Out-2 Memory
Separate memory manager that turns session files into memories → beliefs → an evolving self-model. Inspired by Inside Out 2.
Tags: memory, beliefs, self-model
Category: knowledge
Tips
- Start with the default MEMORY.md approach and only migrate to this system once you have enough session history to form meaningful beliefs
- Keep the belief formation threshold high — a pattern should appear across multiple sessions before becoming a belief
- Regularly review generated beliefs for accuracy — incorrect beliefs compound over time and can skew agent behavior
- Use the self-model layer to encode your agent's personality boundaries, not just factual knowledge about the user
- Back up the belief database regularly — losing a well-developed belief system is worse than losing raw session logs
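The threshold tip above can be sketched concretely. This is a minimal illustration, not the system's actual implementation: it assumes observations are tagged with a session ID, and promotes a pattern to a belief only once it has appeared in enough distinct sessions.

```python
from collections import defaultdict

def form_beliefs(observations, threshold=3):
    """Promote a pattern to a belief only if it was observed in at
    least `threshold` *distinct* sessions (repeats within one session
    don't count, so a single unusual session can't create a belief)."""
    sessions_per_pattern = defaultdict(set)
    for session_id, pattern in observations:
        sessions_per_pattern[pattern].add(session_id)
    return [p for p, sessions in sessions_per_pattern.items()
            if len(sessions) >= threshold]

# Hypothetical observation log: (session_id, pattern) pairs
obs = [
    ("s1", "user prefers concise answers"),
    ("s2", "user prefers concise answers"),
    ("s3", "user prefers concise answers"),
    ("s1", "user mentioned a deadline"),
]
print(form_beliefs(obs))  # → ['user prefers concise answers']
```

A low threshold turns one-off remarks into convictions; a high one keeps beliefs stable at the cost of slower adaptation.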
Community Feedback
The most creative approach to agent memory I've seen.
— OpenClaw Showcase
The belief layer is what makes this special. Instead of the agent re-reading everything, it builds convictions over time. Much closer to how humans actually remember things.
— OpenClaw Community
Frequently Asked Questions
How is this different from OpenClaw's built-in MEMORY.md?
MEMORY.md is a flat file the agent reads each session. This system adds structure: memories have emotional weight and decay, beliefs are stable convictions formed from patterns, and the self-model represents the agent's understanding of its role. It's layered cognition vs. a notebook.
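The memory layer described above can be sketched with a small data structure. The field names and the exponential-decay formula here are illustrative assumptions, not the skill's real schema: each memory carries an emotional weight, and its current salience decays with age on a configurable half-life.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    emotional_weight: float              # 0..1, how salient the event felt
    created_at: float = field(default_factory=time.time)
    half_life_days: float = 30.0         # heavier memories could use longer half-lives

    def salience(self, now=None):
        """Exponential decay scaled by emotional weight: after one
        half-life, the memory retains half its original salience."""
        now = time.time() if now is None else now
        age_days = (now - self.created_at) / 86400
        return self.emotional_weight * 0.5 ** (age_days / self.half_life_days)

m = Memory("user shipped the v2 launch", emotional_weight=1.0,
           created_at=0.0, half_life_days=30.0)
print(m.salience(now=30 * 86400))  # → 0.5 (one half-life later)
```

Beliefs would sit one layer up: stable records distilled from many memories, exempt from this decay.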
Does this increase token usage significantly?
Actually, it can reduce tokens over time. Instead of reading lengthy raw memory files, the agent consults a compact belief system that captures the essence of many interactions. The processing cost is in the background belief-formation step, not in every conversation.
Can beliefs be manually overridden?
Yes. You can edit the belief store directly or tell the agent to update a specific belief. This is important for correcting misconceptions that may have formed from ambiguous interactions.
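As a sketch of what a direct edit might look like, assuming a hypothetical belief store laid out as a JSON file mapping belief IDs to metadata (the real store format may differ):

```python
import json
import tempfile
from pathlib import Path

def override_belief(store_path, belief_id, new_text):
    """Rewrite one belief in place and mark it as manually set,
    so automated belief revision knows not to override it."""
    store = json.loads(Path(store_path).read_text())
    store[belief_id]["text"] = new_text
    store[belief_id]["source"] = "manual-override"
    Path(store_path).write_text(json.dumps(store, indent=2))

# Demo against a throwaway store file
path = Path(tempfile.mkdtemp()) / "beliefs.json"
path.write_text(json.dumps({
    "b1": {"text": "user dislikes emoji", "source": "auto"},
}))
override_belief(path, "b1", "user tolerates emoji in casual chats")
```

Tagging the source of the override matters: without it, the next background belief-formation pass could quietly reinstate the misconception.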
Is this compatible with multi-agent setups?
Each agent can have its own belief system, or multiple agents can share a common belief store about the user. The latter is more complex but creates consistent behavior across your agent team.
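One way to get the shared-store behavior described above, sketched with hypothetical names: each agent reads through a view that layers its private beliefs over a common store about the user, so agents stay consistent on shared facts while still specializing.

```python
class BeliefView:
    """An agent's read view over two stores: private beliefs win,
    shared beliefs fill in everything else."""

    def __init__(self, agent_id, shared, private):
        self.agent_id = agent_id
        self.shared = shared    # dict shared by all agents (facts about the user)
        self.private = private  # dict owned by this agent (its own role/quirks)

    def get(self, key, default=None):
        if key in self.private:
            return self.private[key]
        return self.shared.get(key, default)

shared = {"user.timezone": "UTC+2", "user.style": "concise"}
coder = BeliefView("coder", shared, {"tone": "terse"})
writer = BeliefView("writer", shared, {"tone": "warm"})
print(coder.get("user.timezone"), coder.get("tone"))   # → UTC+2 terse
print(writer.get("user.timezone"), writer.get("tone"))  # → UTC+2 warm
```

The complexity the answer mentions is mostly about writes: if several agents can update the shared store concurrently, you need a conflict policy, whereas per-agent stores sidestep that entirely.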