The Science Behind ZenBrain
15 neuroscience-inspired mechanisms across 7 memory layers. 11,589 automated tests. Open-access preprint on arXiv. All documented.
“Memory is not storage; it is a living process of forgetting, consolidation, and rediscovery. We translated this process into software.”
Published Research
Our technical paper is publicly available as an open-access preprint on Zenodo (CERN) and arXiv, with a defensive disclosure on Elsevier TDCommons.
ZenBrain: A Neuroscience-Inspired 7-Layer Memory Architecture for Autonomous AI Systems
We present ZenBrain, a neuroscience-inspired 7-layer memory architecture integrating 15 algorithms grounded in peer-reviewed neuroscience: 9 foundational components plus a Predictive Memory Architecture (PMA) with NeuromodulatorEngine, ReconsolidationEngine, TripleCopyMemory, PriorityMap, StabilityProtector, and MetacognitiveMonitor. ZenBrain is evaluated across ten experiments on LoCoMo, MemoryAgentBench, MemoryArena, and the LongMemEval-S Full-500 replication. On LongMemEval-500, ZenBrain wins 12 of 12 head-to-head judge comparisons against Letta, Mem0, and A-Mem under three independent LLM judges (Bonferroni-corrected p ≤ 6.2e-31). Under the official binary judge, ZenBrain reaches 91.3% of long-context-oracle accuracy at 1/106th the per-query token budget; the oracle beats ZenBrain by only 4.5 pp while using ~106× more tokens and no memory architecture. Sleep consolidation: +37% stability, −47.4% storage. TripleCopyMemory retains 91.2% strength at 30 days. The full 15-algorithm ablation reveals a cooperative survival network in which 9 of 15 algorithms become individually critical under stress (decay = 0.25, 60 days). 95 reproducible experiment tests with seeded PRNG. 11,589 total tests | 322K LOC | Phase 145 | 60 AI tools.
What sets ZenBrain apart
Eight technologies that, to our knowledge, no other production memory system today ships together, commercial or open source.
Sleep Consolidation Engine
Idle-time memory consolidation in production, inspired by hippocampal replay (Stickgold & Walker 2013). Weak connections are pruned, stable ones strengthened.
We are not aware of any competing memory system shipping sleep-time consolidation; the closest published designs remain proposals.
7-Layer Memory Coordinator
Unified orchestrator for 7 memory layers in production, from working memory to long-term storage. Based on Global Workspace Theory (Baars 1988).
Mem0 ships 2 layers, Letta 3, Zep 2. ZenBrain ships 7, among the deepest memory architectures in open source today.
A-RAG (Autonomous Retrieval Agent)
Meta-agent that creates retrieval plans before any search is executed. Heuristic-first with LLM fallback, max 4 dependent steps.
Heuristic planning before retrieval means no LLM cost for simple queries; few production memory systems expose a dedicated planning layer.
Multi-Agent Debate Protocol
Structured 3-round debate protocol when agents disagree. Challenge → Response → Resolution with automatic escalation.
Such structured debate is rare in production multi-agent systems.
Curiosity Engine
Automatic knowledge gap detection with a quantified gap score. Analyzes query history, fact density, and confidence, then recommends targeted actions.
Few memory systems implement systematic curiosity-driven gap detection. Inspired by Loewenstein's information gap theory (1994).
Prediction Engine
User intent prediction from temporal and sequential patterns. Learns from prediction errors: the more often it is wrong, the better its next prediction.
We do not see competitors predicting user intent from behavioral patterns within the memory layer itself.
HyperAgent L0–L2
3-level recursive self-improvement with formal safety bounds. Level 0 optimizes knowledge, Level 1 optimizes Level-0 strategies, Level 2 optimizes Level-1 parameters.
3-level recursion with immutable core properties, daily budgets, and automatic rollback on quality regression; a combination rare in production.
Cross-Context Entity Merging
Detection and merging of entities across 4 isolated contexts (Operations, Finance, People, Strategy). Bayesian confidence updates on conflicts.
We do not see competing memory systems managing entity identity across isolated contexts.
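To make the consolidation idea concrete, here is a minimal TypeScript sketch of the prune-and-strengthen pass described above. The edge shape, the 0.2 prune threshold, and the activation cutoff are our assumptions; the 1.5× boost merely echoes the +50% stability figure cited in the algorithm list, and none of these are ZenBrain's actual parameters.

```typescript
// Hypothetical sketch of idle-time consolidation: prune weak edges,
// strengthen stable, frequently co-activated ones.
interface MemoryEdge {
  from: string;
  to: string;
  strength: number;    // 0..1
  activations: number; // how often this edge was co-activated
}

function consolidate(edges: MemoryEdge[], pruneBelow = 0.2, boost = 1.5): MemoryEdge[] {
  return edges
    .filter((e) => e.strength >= pruneBelow) // synaptic downscaling: drop weak links
    .map((e) => ({
      ...e,
      // replayed (frequently co-activated) edges get a capped stability boost
      strength: e.activations >= 3 ? Math.min(1, e.strength * boost) : e.strength,
    }));
}
```

A real implementation would run this as a background job during idle periods rather than as a pure function, but the prune-then-boost shape stays the same.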
HiMeS: 7 Layers of Neuroscience-Inspired Memory
Inspired by the Atkinson-Shiffrin model (1968) and modern cognitive science.

Working Memory
Active focus: 7±2 items, per Miller's Magical Number. Fastest access, shortest lifespan.
Phase 125
Short-Term Memory
Session context and conversation continuity. Survives the current session.
Phase 125
Episodic
Concrete experiences with emotional tagging. 400+ keyword lexicon (DE+EN) for arousal/valence scoring.
Phase 125
Semantic
Factual knowledge with FSRS scheduling. Spaced repetition optimizes recall timing, 30% better than SM-2.
Phase 125
Procedural
Workflows and skills. Tool chains are analyzed and optimized.
Phase 127
Core Memory
Immutable foundations following the Letta pattern. Pinned facts that are never forgotten.
Phase 126
Long-Term Memory
Persistent knowledge with Ebbinghaus forgetting curve, Hebbian reinforcement, and Bayesian confidence.
Phase 125
“Any AI that cannot forget will eventually drown in its own noise. Selective forgetting is not a weakness; it is the foundation of intelligence.”
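A minimal sketch of how the long-term layer's named mechanisms could fit together. The 2.7× emotional slowdown and the ×1.09 Hebbian factor come from the algorithm list below; the `Fact` shape, function names, and the half-life formulation are our assumptions.

```typescript
interface Fact {
  strength: number;     // current retention, 0..1
  halfLifeDays: number; // configurable per-fact half-life
  emotional: boolean;   // emotionally tagged memories decay slower
}

// Ebbinghaus-style exponential decay; emotional memories decay 2.7x slower.
function retentionAfter(fact: Fact, days: number): number {
  const halfLife = fact.emotional ? fact.halfLifeDays * 2.7 : fact.halfLifeDays;
  return fact.strength * Math.pow(0.5, days / halfLife);
}

// Hebbian reinforcement: each co-activation multiplies strength by 1.09, capped at 1.
function coActivate(fact: Fact): Fact {
  return { ...fact, strength: Math.min(1, fact.strength * 1.09) };
}
```

After one half-life a neutral fact sits at 50% retention, while an emotionally tagged fact with the same parameters is still well above 70%.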
15 Algorithms Inspired by Neuroscience
FSRS Spaced Repetition
open-spaced-repetition/fsrs4anki · Optimal review timing at ~90% target retention
Ebbinghaus Forgetting Curve
Ebbinghaus (1885) · Exponential decay with configurable half-life
Hebbian Learning
Hebb (1949) · Co-activated facts strengthen connections (×1.09 per activation)
Homeostatic Normalization
Turrigiano (2004) · Prevents runaway growth of edge weights
Sleep Consolidation
Stickgold & Walker (2013) · Hippocampal replay with +50% stability boost
Synaptic Homeostasis
Tononi & Cirelli (2006) · Weak connections pruned during sleep
Emotional Modulation
LeDoux (1996) · Emotional memories decay 2.7× slower
Bayesian Propagation
Pearl (1988) · Confidence updates across the entire knowledge graph
Global Workspace Theory
Baars (1988) · Conscious access through competitive context assembly
Information Gain Scoring
Shannon (1948) · Entropy-based prioritization of new facts
Knowledge Gap Theory
Loewenstein (1994) · Systematic detection of missing knowledge
Working Memory Capacity
Miller (1956) · 7±2 active items in working memory
Neuromodulation (PMA)
Schultz (1997) · Aston-Jones (2005) · Dopamine, NE, 5-HT, ACh: four channels with tonic + phasic dynamics
Reconsolidation (PMA)
Nader (2000) · Schiller (2010) · Memory destabilizes on retrieval; four PE-gated update modes with rollback
Triple-Copy Memory (PMA)
Squire & Bayley (2007) · Three traces with divergent dynamics: fast (4h), medium (14d), deep (logarithmic)
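The triple-copy idea can be sketched in a few lines. Only the three time scales (4 hours, 14 days, logarithmic) come from the text; the half-life formulation, the logarithmic coefficient, and the max read-out are illustrative assumptions, so this sketch will not reproduce ZenBrain's reported 91.2% figure.

```typescript
// Three memory traces with divergent dynamics; combined strength is the
// strongest surviving trace.
function tripleTraceStrength(hoursSinceEncoding: number): number {
  const fast = Math.pow(0.5, hoursSinceEncoding / 4);           // 4h half-life, gone within a day
  const medium = Math.pow(0.5, hoursSinceEncoding / (14 * 24)); // 14-day half-life, weeks-scale
  const deep = 1 / (1 + 0.05 * Math.log1p(hoursSinceEncoding)); // logarithmic, very slow decay
  return Math.max(fast, medium, deep);
}
```

The point of the construction: early recall is carried by the fast trace, mid-range recall by the medium trace, and after weeks only the slowly decaying deep trace remains.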
Intelligent Knowledge Retrieval
6 strategies, dynamically selected per query. Not one pipeline but an adaptive system.
End-to-end RAG pipeline
A-RAG Planning
Meta-agent plans retrieval steps before execution. Heuristic-first, LLM fallback.
GraphRAG 3-Layer
Event subgraph + semantic graph + community summaries. 5 parallel strategies.
Self-RAG Critique
Automatic reformulation at confidence < 0.5. 4-component scoring.
HyDE Retrieval
Hypothetical answer → embedding → search. Auto-detection with 5s timeout.
Contextual Retrieval
Chunk enrichment following Anthropic's contextual retrieval method, which Anthropic reports cuts retrieval failures by up to 67%.
Embedding Drift Detection
BullMQ worker monitors drift >10%. Automatic cache invalidation.
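The Self-RAG loop above can be sketched as a retry loop gated on confidence. The 0.5 threshold is the one stated in the text; the retry cap, data shapes, and the synchronous stand-ins for what would be async retriever and reformulator calls are our assumptions.

```typescript
interface RetrievalResult { chunks: string[]; confidence: number }

function selfRagRetrieve(
  query: string,
  retrieve: (q: string) => RetrievalResult,   // stand-in for the retrieval pipeline
  reformulate: (q: string) => string,         // stand-in for the critique/rewrite step
  maxAttempts = 3,
): RetrievalResult {
  let q = query;
  let best: RetrievalResult = { chunks: [], confidence: 0 };
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = retrieve(q);
    if (result.confidence > best.confidence) best = result;
    if (result.confidence >= 0.5) return result; // confident enough: stop
    q = reformulate(q);                          // low confidence triggers reformulation
  }
  return best; // no attempt cleared the bar: fall back to the best one seen
}
```

The capped loop matters: without `maxAttempts`, a query the corpus simply cannot answer would reformulate forever.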
Autonomous Agents with Safety Bounds
Multi-agent orchestration with structured debate, dynamic team composition, and recursive self-improvement.
Multi-agent team architecture
Debate Protocol
3-round debate on disagreement. Challenge → Response → Resolution.
Dynamic Team Builder
5 specialist agents, automatically composed by task type.
HyperAgent L0–L2
Recursive self-improvement with daily budgets, sandbox tests, and auto-rollback.
Persistent Agent Loops
Pause/resume with state checkpointing. Long-running tasks over days.
A2A Protocol
Agent-to-agent communication per Google standard. /.well-known/agent.json discovery.
Implicit Feedback
Automatic behavior detection without explicit labeling.
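The three-round debate flow can be sketched as a small state machine. The phase names come from the text; the judge callback, the two-agent shape, and the escalation payload are assumptions made for illustration.

```typescript
type Phase = "challenge" | "response" | "resolution";

interface DebateOutcome { resolved: boolean; position?: string; escalated: boolean }

function runDebate(
  positions: [string, string],
  // judge returns the agreed position for a phase, or null while still disputed
  judge: (phase: Phase, a: string, b: string) => string | null,
): DebateOutcome {
  const phases: Phase[] = ["challenge", "response", "resolution"];
  for (const phase of phases) {
    const agreed = judge(phase, positions[0], positions[1]);
    if (agreed !== null) return { resolved: true, position: agreed, escalated: false };
  }
  // automatic escalation after the resolution round also fails
  return { resolved: false, escalated: true };
}
```

Bounding the debate at three named rounds is what keeps disagreement from degenerating into an open-ended argument between agents.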
AI That Thinks About Thinking
Curiosity, prediction, metacognition: three pillars of cognitive intelligence.
Curiosity Engine
Quantified gap score: query frequency × fact density × confidence × RAG quality.
Prediction Engine
Temporal + sequential pattern recognition. Learns from prediction errors.
Metacognition
Confidence calibration, confusion detection, capability profiling.
Adaptive Thinking
4-tier thinking budgets: 1K → 16K → 64K → 128K tokens. Auto-detection. 60–80% cost savings.
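One plausible reading of the gap score above: a topic that is queried often but has few stored facts, low confidence, and poor recent RAG quality should score high. The multiplicative form follows the text; inverting the last three factors with (1 − x), and all names and scales, are our assumptions about the score's direction.

```typescript
interface TopicStats {
  queryFrequency: number; // 0..1, normalized query rate for this topic
  factDensity: number;    // 0..1, how much is already stored about it
  confidence: number;     // 0..1, average confidence of stored facts
  ragQuality: number;     // 0..1, recent retrieval quality for this topic
}

// High score = frequently asked about, poorly covered: a knowledge gap worth filling.
function gapScore(t: TopicStats): number {
  return t.queryFrequency * (1 - t.factDensity) * (1 - t.confidence) * (1 - t.ragQuality);
}
```

A well-covered topic multiplies through near-zero factors and drops out, so the engine's recommended actions naturally concentrate on hot, under-documented topics.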
91.3% of oracle accuracy at 1/106th the tokens
On LongMemEval-500, ZenBrain reaches 91.3% of the long-context oracle's accuracy at a per-query token budget of 1/106th: the Pareto position for AI memory.
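Taken at face value, the two oracle figures pin down the absolute numbers up to rounding. Writing O for the oracle's accuracy and Z for ZenBrain's (both in percentage points), this is only a consistency check on the stated claims, not an additional result:

```latex
Z = 0.913\,O, \qquad O - Z = 4.5\,\text{pp}
\;\Rightarrow\; O\,(1 - 0.913) = 4.5
\;\Rightarrow\; O \approx 51.7\,\text{pp}, \quad Z \approx 47.2\,\text{pp}
```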

ZenBrain vs. Competitors
Factual capability comparison โ as of March 2026.
| Feature | ZenBrain | Mem0 | Letta | Zep | LangChain |
|---|---|---|---|---|---|
| Memory Layers | 7 | 2 | 3 | 2 | 1 |
| Spaced Repetition (FSRS) | ✓ | ✗ | ✗ | ✗ | ✗ |
| Emotional Memory | ✓ | ✗ | ✗ | ✗ | ✗ |
| Hebbian KG Edges | ✓ | ✗ | ✗ | ✗ | ✗ |
| Sleep Consolidation | ✓ | ✗ | ✗ | ✗ | ✗ |
| Bayesian Confidence | ✓ | ✗ | ✗ | ✗ | ✗ |
| Graph Reasoning | ✓ | ✓ | ✗ | ✓ | ✗ |
| 3-Layer GraphRAG | ✓ | ✗ | ✗ | ✗ | ✗ |
| Agentic RAG | ✓ | ✗ | ✗ | ✗ | ✗ |
| Debate Protocol | ✓ | ✗ | ✗ | ✗ | ✗ |
| Curiosity Engine | ✓ | ✗ | ✗ | ✗ | ✗ |
| Prediction Engine | ✓ | ✗ | ✗ | ✗ | ✗ |
| HyperAgent L0–L2 | ✓ | ✗ | ✗ | ✗ | ✗ |
| Multi-Context Isolation | ✓ | ✗ | ✗ | ✗ | ✗ |
| Open Source | ✓ | ✓ | ✓ | ✓ | ✓ |
| Tests | 11,589 | n/a | n/a | n/a | n/a |
Explore the Code
ZenBrain is open source. All algorithms, all tests, all documentation โ openly available.