Inside-Out-2 Memory

Separate memory manager that turns session files into memories → beliefs → evolving self model. Inspired by Inside Out 2.

The Inside-Out-2 Memory project reimagines how AI agents maintain continuity between sessions by borrowing concepts from Pixar's Inside Out 2. Instead of flat file-based memory, the system introduces a layered architecture: raw session transcripts are processed into discrete 'memories,' which aggregate into 'beliefs,' which together form an evolving self-model the agent references when making decisions.

The architecture mirrors the film's emotional framework. Individual memories carry emotional weight and context tags. Over time, repeated patterns coalesce into beliefs: stable convictions about the user's preferences, communication style, and priorities. The self-model layer sits on top, representing the agent's understanding of its own role, capabilities, and relationship with the user. The result is an agent that doesn't just recall isolated facts but accumulates durable, structured context.

The practical impact is significant for long-running OpenClaw deployments. Rather than starting each session by re-reading flat memory files, the agent consults a structured belief system that captures the essence of thousands of past interactions. Beliefs can be challenged and updated, old memories can fade in relevance, and the self-model evolves, creating something closer to how human memory actually works.
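The memories → beliefs → self-model layering described above could be modeled along these lines. This is an illustrative sketch, not the project's actual API: every class and field name here is an assumption.

```python
from dataclasses import dataclass, field

# Sketch of the three layers. All names are assumptions, not the
# project's real data model.

@dataclass
class Memory:
    text: str                 # extracted from a session transcript
    emotional_weight: float   # 0.0 (neutral) to 1.0 (strongly charged)
    tags: list[str]           # context tags, e.g. ["preferences", "tone"]

@dataclass
class Belief:
    statement: str            # stable conviction distilled from patterns
    confidence: float         # grows as more memories support it
    supporting: list[Memory] = field(default_factory=list)

@dataclass
class SelfModel:
    role: str                 # the agent's understanding of its own role
    beliefs: list[Belief] = field(default_factory=list)

    def summary(self) -> str:
        # Compact context the agent can load instead of raw transcripts
        lines = [f"Role: {self.role}"]
        lines += [f"- {b.statement} (confidence {b.confidence:.2f})"
                  for b in sorted(self.beliefs, key=lambda b: -b.confidence)]
        return "\n".join(lines)

model = SelfModel(role="Personal coding assistant")
model.beliefs.append(Belief("User prefers concise answers", 0.9))
model.beliefs.append(Belief("User works mostly in Python", 0.7))
print(model.summary())
```

The key design point is that only `SelfModel.summary()` ever reaches the agent's context window; raw `Memory` records stay in background storage.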

Tags: memory, beliefs, self-model

Category: knowledge

Tips

  • Start with the default MEMORY.md approach and only migrate to this system once you have enough session history to form meaningful beliefs
  • Keep the belief formation threshold high — a pattern should appear across multiple sessions before becoming a belief
  • Regularly review generated beliefs for accuracy — incorrect beliefs compound over time and can skew agent behavior
  • Use the self-model layer to encode your agent's personality boundaries, not just factual knowledge about the user
  • Back up the belief database regularly — losing a well-developed belief system is worse than losing raw session logs
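The "keep the belief formation threshold high" tip can be sketched as counting distinct sessions per pattern and promoting only frequent ones. The threshold value and function names are illustrative assumptions.

```python
from collections import defaultdict

# Promote a pattern to a belief only after it recurs across several
# distinct sessions. The threshold of 3 is an illustrative assumption.
SESSION_THRESHOLD = 3

def form_beliefs(observations: list[tuple[str, str]]) -> list[str]:
    """observations: (session_id, pattern) pairs extracted from transcripts."""
    sessions_per_pattern = defaultdict(set)
    for session_id, pattern in observations:
        sessions_per_pattern[pattern].add(session_id)
    return [p for p, sessions in sessions_per_pattern.items()
            if len(sessions) >= SESSION_THRESHOLD]

obs = [
    ("s1", "prefers bullet lists"), ("s1", "prefers bullet lists"),
    ("s2", "prefers bullet lists"), ("s3", "prefers bullet lists"),
    ("s2", "dislikes emojis"),
]
print(form_beliefs(obs))  # → ['prefers bullet lists']
```

Counting distinct sessions (a set, not a list) matters: a pattern repeated many times within one session still counts once, so a single unusual conversation can't mint a belief on its own.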

Community Feedback

Separate memory manager that turns session files into memories → beliefs → evolving self model. Inspired by Inside Out 2 — the most creative approach to agent memory I've seen.

— OpenClaw Showcase

The belief layer is what makes this special. Instead of the agent re-reading everything, it builds convictions over time. Much closer to how humans actually remember things.

— OpenClaw Community

Frequently Asked Questions

How is this different from OpenClaw's built-in MEMORY.md?

MEMORY.md is a flat file the agent reads each session. This system adds structure: memories have emotional weight and decay, beliefs are stable convictions formed from patterns, and the self-model represents the agent's understanding of its role. It's layered cognition vs. a notebook.
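The decay mentioned above might work like an exponential half-life on each memory's weight. The half-life constant here is purely an assumption for illustration.

```python
# A memory's relevance fades exponentially with age. The 30-day
# half-life is an illustrative assumption, not a project default.
HALF_LIFE_DAYS = 30.0

def relevance(initial_weight: float, age_days: float) -> float:
    return initial_weight * 0.5 ** (age_days / HALF_LIFE_DAYS)

fresh = relevance(1.0, 0)    # 1.0
month = relevance(1.0, 30)   # 0.5: one half-life has passed
old = relevance(1.0, 90)     # 0.125: three half-lives
```

A belief built on many memories degrades gracefully as its supporting memories fade, whereas a flat MEMORY.md treats a year-old note and yesterday's note identically.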

Does this increase token usage significantly?

Actually, it can reduce tokens over time. Instead of reading lengthy raw memory files, the agent consults a compact belief system that captures the essence of many interactions. The processing cost is in the background belief-formation step, not in every conversation.
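The token claim can be illustrated with a rough comparison. Token counting here is a naive whitespace split, and the sample data is invented purely for illustration.

```python
# Rough illustration: a belief summary is far smaller than the raw
# transcripts it distills. Whitespace-split "tokens" are a stand-in
# for a real tokenizer.
raw_transcripts = ["User: please keep answers short. Agent: understood."] * 500
belief_summary = "User prefers short answers. User works in Python daily."

raw_tokens = sum(len(t.split()) for t in raw_transcripts)
summary_tokens = len(belief_summary.split())
print(raw_tokens, summary_tokens)  # the summary is orders of magnitude smaller
```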

Can beliefs be manually overridden?

Yes. You can edit the belief store directly or tell the agent to update a specific belief. This is important for correcting misconceptions that may have formed from ambiguous interactions.
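A manual override might look like the sketch below, assuming the belief store is a simple JSON file. The on-disk format, file name, and the choice to set confidence to 1.0 are all assumptions.

```python
import json, os, tempfile

# Sketch of a manual belief override, assuming beliefs live in a JSON
# file of {"statement", "confidence"} records (format is an assumption).
def override_belief(path: str, statement: str, new_statement: str) -> None:
    with open(path) as f:
        beliefs = json.load(f)
    for b in beliefs:
        if b["statement"] == statement:
            b["statement"] = new_statement
            b["confidence"] = 1.0  # treat manual edits as certain
    with open(path, "w") as f:
        json.dump(beliefs, f, indent=2)

# Usage: correct a misconception formed from ambiguous interactions.
path = os.path.join(tempfile.mkdtemp(), "beliefs.json")
with open(path, "w") as f:
    json.dump([{"statement": "User dislikes tests", "confidence": 0.6}], f)
override_belief(path, "User dislikes tests", "User wants tests for new code")
```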

Is this compatible with multi-agent setups?

Each agent can have its own belief system, or multiple agents can share a common belief store about the user. The latter is more complex but creates consistent behavior across your agent team.
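The shared-store layout described above can be sketched by giving several agents a reference to one store. The `Agent` and `BeliefStore` classes are illustrative assumptions.

```python
# Sketch of the shared multi-agent layout: several agents hold a
# reference to one belief store, so what one learns about the user is
# immediately visible to the others. Class names are assumptions.

class BeliefStore:
    def __init__(self) -> None:
        self.beliefs: dict[str, float] = {}  # statement -> confidence

    def learn(self, statement: str, confidence: float) -> None:
        # Keep the highest confidence seen for each statement
        self.beliefs[statement] = max(confidence,
                                      self.beliefs.get(statement, 0.0))

class Agent:
    def __init__(self, name: str, store: BeliefStore) -> None:
        self.name = name
        self.store = store  # private store, or shared across agents

shared = BeliefStore()
coder = Agent("coder", shared)
reviewer = Agent("reviewer", shared)
coder.store.learn("User prefers small PRs", 0.8)
# reviewer now sees the belief without relearning it
```

The per-agent layout is the same code with each agent constructing its own `BeliefStore`; the trade-off is isolation versus the consistent cross-agent behavior the answer describes.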