TECHNOLOGY

The Science Behind ZenBrain

15 neuroscience-inspired mechanisms across 7 memory layers. 11,589 automated tests. Open-access preprint on arXiv. All documented.

Alexander Bering, Founder & Developer

“Memory is not storage — it is a living process of forgetting, consolidation, and rediscovery. We translated this process into software.”

— Alexander Bering

Published Research

Our technical paper is publicly available as an open-access preprint on Zenodo (CERN) and arXiv, with a defensive disclosure on Elsevier TDCommons.

Preprint · Open Access

ZenBrain: A Neuroscience-Inspired 7-Layer Memory Architecture for Autonomous AI Systems

Alexander Bering — Zensation AI, Kiel, Germany

We present ZenBrain, a neuroscience-inspired 7-layer memory architecture integrating 15 algorithms grounded in peer-reviewed neuroscience: 9 foundational components plus a Predictive Memory Architecture (PMA) with NeuromodulatorEngine, ReconsolidationEngine, TripleCopyMemory, PriorityMap, StabilityProtector, and MetacognitiveMonitor. Evaluated across ten experiments on LoCoMo, MemoryAgentBench, MemoryArena, and the LongMemEval-S Full-500 replication. On LongMemEval-500, ZenBrain wins 12 of 12 head-to-head judge comparisons against Letta, Mem0, and A-Mem under three independent LLM judges (Bonferroni-corrected p ≤ 6.2e-31). Under the official binary judge, ZenBrain reaches 91.3% of long-context-oracle accuracy at 1/106th the per-query token budget — the oracle beats ZenBrain by only 4.5 pp while using ~106× more tokens and no memory architecture. Sleep consolidation: +37% stability, −47.4% storage. TripleCopyMemory retains 91.2% strength at 30 days. The full 15-algorithm ablation reveals a cooperative survival network in which 9 of 15 algorithms become individually critical under stress (decay = 0.25, 60 days). 95 reproducible experiment tests, seeded PRNG. 11,589 total tests | 322K LOC | Phase 145 | 60 AI tools.

Download PDF

What sets ZenBrain apart

Eight technologies that, to our knowledge, no other production memory system — commercial or open source — ships together today.

Sleep Consolidation Engine

Idle-time memory consolidation in production — inspired by hippocampal replay (Stickgold & Walker 2013). Weak connections are pruned, stable ones strengthened.

We are not aware of any competing memory system shipping sleep-time consolidation; the closest published designs remain proposals.
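The prune-and-strengthen pass described above can be sketched in a few lines. This is an illustrative toy, not ZenBrain's implementation; the threshold and boost factor are assumed values.

```python
# Hypothetical sketch of idle-time consolidation: prune weak connections,
# strengthen stable ones. Threshold and boost are illustrative parameters.
def consolidate(edges, prune_below=0.2, boost=1.5):
    """edges: dict mapping (src, dst) -> connection strength in [0, 1]."""
    consolidated = {}
    for pair, strength in edges.items():
        if strength < prune_below:
            continue                                 # weak edge: pruned in "sleep"
        consolidated[pair] = min(1.0, strength * boost)  # stable edge: strengthened
    return consolidated

edges = {("a", "b"): 0.9, ("a", "c"): 0.1, ("b", "c"): 0.5}
print(consolidate(edges))   # ("a", "c") is pruned; the survivors are boosted
```

In the real system this would run as a background job during idle time; here it is a pure function over an in-memory edge map.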

7-Layer Memory Coordinator

Unified orchestrator for 7 memory layers in production — from working memory to long-term storage. Based on Global Workspace Theory (Baars 1988).

Mem0 ships 2 layers, Letta 3, Zep 2. ZenBrain ships 7 — among the deepest memory architectures in open source today.

A-RAG (Autonomous Retrieval Agent)

Meta-agent that creates retrieval plans before any search is executed. Heuristic-first with LLM fallback, max 4 dependent steps.

Heuristic-based planning before retrieval — no LLM cost for simple queries; few production memory systems expose a dedicated planning layer.
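A heuristic-first planner with an LLM fallback and a four-step cap might look like the following sketch; every function name and heuristic here is a hypothetical stand-in, not the A-RAG code.

```python
# Illustrative heuristic-first retrieval planner: simple queries get a
# single-step plan with no LLM call; complex ones fall back to an injected
# LLM planner, capped at four dependent steps as described above.
MAX_STEPS = 4

def plan_retrieval(query: str, llm_plan=None):
    words = query.split()
    # Heuristic: short, single-clause queries need only one vector search.
    if len(words) <= 6 and " and " not in query.lower():
        return ["vector_search"]
    # Fallback: delegate to an LLM planner (a callable), truncated to 4 steps.
    if llm_plan is not None:
        return llm_plan(query)[:MAX_STEPS]
    # Default multi-step plan when no LLM planner is available.
    return ["decompose", "vector_search", "rerank", "synthesize"]

print(plan_retrieval("Who is Ada Lovelace?"))   # simple query: no LLM cost
```

Injecting the LLM planner as a callable keeps the cheap path entirely LLM-free, which is the point of the heuristic-first design.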

Multi-Agent Debate Protocol

Structured 3-round debate protocol when agents disagree. Challenge → Response → Resolution with automatic escalation.

A structured, bounded debate of this kind remains rare in production multi-agent systems.

Curiosity Engine

Automatic knowledge gap detection with quantified gap score. Analyzes query history, fact density, and confidence — recommends targeted actions.

Few memory systems implement systematic curiosity-driven gap detection. Inspired by Loewenstein's information gap theory (1994).

Prediction Engine

User intent prediction from temporal and sequential patterns. Learns from its prediction errors — each miss sharpens the next forecast.

We do not see competitors predicting user intent from behavioral patterns within the memory layer itself.

HyperAgent L0–L2

3-level recursive self-improvement with formal safety bounds. Level 0 optimizes knowledge, Level 1 optimizes Level-0 strategies, Level 2 optimizes Level-1 parameters.

3-level recursion with immutable core properties, daily budgets, and automatic rollback on quality regression — rare in production.

Cross-Context Entity Merging

Detection and merging of entities across 4 isolated contexts (Operations, Finance, People, Strategy). Bayesian confidence updates on conflicts.

We do not see competing memory systems managing entity identity across isolated contexts.

HiMeS — 7 Layers of Neuroscience-Inspired Memory

Inspired by the Atkinson-Shiffrin model (1968) and modern cognitive science.

Visualization of the HiMeS memory architecture — seven distinct memory layers active in a brain cross-section, rendered in deep teal with amber focal points
Seven differentiated memory layers operating as a coherent system — modeled on the Atkinson-Shiffrin framework and contemporary cognitive neuroscience.
1. Working Memory
Active focus — 7±2 items per Miller's Magical Number. Fastest access, shortest lifespan.
Phase 125

2. Short-Term Memory
Session context and conversation continuity. Survives the current session.
Phase 125

3. Episodic
Concrete experiences with emotional tagging. 400+ keyword lexicon (DE+EN) for arousal/valence scoring.
Phase 125

4. Semantic
Factual knowledge with FSRS scheduling. Spaced repetition optimizes recall timing — 30% better than SM-2.
Phase 125

5. Procedural
Workflows and skills. Tool chains are analyzed and optimized.
Phase 127

6. Core Memory
Immutable foundations following the Letta pattern. Pinned facts that are never forgotten.
Phase 126

7. Long-Term Memory
Persistent knowledge with Ebbinghaus forgetting curve, Hebbian reinforcement, and Bayesian confidence.
Phase 125
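As a toy illustration of the capacity bound in layer 1, here is a least-recently-used buffer capped at seven items; the class and eviction policy are assumptions for illustration, not ZenBrain's code.

```python
# Sketch of a capacity-bounded working memory (Miller's 7±2): when the
# buffer overflows, the least recently used item is evicted. Hypothetical.
from collections import OrderedDict

class WorkingMemory:
    def __init__(self, capacity=7):
        self.capacity = capacity
        self.items = OrderedDict()

    def focus(self, key, value):
        self.items.pop(key, None)            # re-focusing moves an item to front
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the least recently used item

wm = WorkingMemory(capacity=3)
for k in ["a", "b", "c", "d"]:
    wm.focus(k, k.upper())
print(list(wm.items))   # "a" was evicted once capacity was exceeded
```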

“Any AI that cannot forget will eventually drown in its own noise. Selective forgetting is not a weakness — it is the foundation of intelligence.”

— Alexander Bering

15 Algorithms Inspired by Neuroscience

FSRS Spaced Repetition

open-spaced-repetition/fsrs4anki

Optimal review timing at ~90% target retention

Ebbinghaus Forgetting Curve

Ebbinghaus (1885)

Exponential decay with configurable half-life

Hebbian Learning

Hebb (1949)

Co-activated facts strengthen connections (×1.09/activation)

Homeostatic Normalization

Turrigiano (2004)

Prevents runaway growth of edge weights

Sleep Consolidation

Stickgold & Walker (2013)

Hippocampal replay with +50% stability boost

Synaptic Homeostasis

Tononi & Cirelli (2006)

Weak connections pruned during sleep

Emotional Modulation

LeDoux (1996)

Emotional memories decay 2.7× slower

Bayesian Propagation

Pearl (1988)

Confidence updates across the entire knowledge graph

Global Workspace Theory

Baars (1988)

Conscious access through competitive context assembly

Information Gain Scoring

Shannon (1948)

Entropy-based prioritization of new facts

Knowledge Gap Theory

Loewenstein (1994)

Systematic detection of missing knowledge

Working Memory Capacity

Miller (1956)

7±2 active items in working memory

Neuromodulation (PMA)

Schultz (1997) · Aston-Jones (2005)

Dopamine, NE, 5-HT, ACh — four channels with tonic + phasic dynamics

Reconsolidation (PMA)

Nader (2000) · Schiller (2010)

Memory destabilizes on retrieval — four PE-gated update modes with rollback

Triple-Copy Memory (PMA)

Squire & Bayley (2007)

Three traces with divergent dynamics: fast (4h), medium (14d), deep (logarithmic)
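Four of the mechanisms above compose naturally: the configurable half-life, the 2.7× emotional slowdown, the ×1.09 Hebbian step, and homeostatic normalization. The toy model below uses the constants quoted in the list, but the default half-life and all function names are assumed for illustration.

```python
# Toy combination of four listed mechanisms. Constants 2.7 and 1.09 come
# from the text above; the 7-day default half-life is an assumed value.

def retention(days, half_life=7.0, emotional=False):
    if emotional:
        half_life *= 2.7                   # emotional memories decay 2.7x slower
    return 0.5 ** (days / half_life)       # Ebbinghaus-style exponential decay

def hebbian_step(weight, activations=1):
    return weight * (1.09 ** activations)  # co-activation strengthens the edge

def normalize(weights, budget=1.0):
    total = sum(weights)
    return [w * budget / total for w in weights]  # homeostasis: no runaway growth

print(retention(7.0))                        # one half-life: 0.5
print(normalize([hebbian_step(0.5, 3), 0.25, 0.25]))
```

Normalization after Hebbian updates is what keeps repeated strengthening from inflating the whole graph, mirroring the Turrigiano-style homeostasis entry above.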

Intelligent Knowledge Retrieval

6 strategies, dynamically selected per query. Not one pipeline — an adaptive system.

End-to-end RAG pipeline

01 Query → 02 A-RAG Plan → 03 HyDE → 04 Vector Search → 05 Cross-Encoder → 06 Confidence → 07 Answer

A-RAG Planning

Meta-agent plans retrieval steps before execution. Heuristic-first, LLM fallback.

GraphRAG 3-Layer

Event subgraph + semantic graph + community summaries. 5 parallel strategies.

Self-RAG Critique

Automatic reformulation at confidence < 0.5. 4-component scoring.
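The confidence gate can be sketched as a retry loop; the retrieval and reformulation callables are injected stand-ins, and only the 0.5 threshold comes from the text.

```python
# Confidence-gated reformulation in the spirit of Self-RAG: if the answer's
# confidence is below 0.5, rewrite the query and retry. Illustrative only.
def answer_with_critique(query, retrieve, reformulate, max_retries=2):
    for _ in range(max_retries + 1):
        answer, confidence = retrieve(query)
        if confidence >= 0.5:
            return answer, confidence
        query = reformulate(query)        # low confidence: rewrite the query
    return answer, confidence             # best effort after all retries

calls = []
def fake_retrieve(q):
    calls.append(q)
    return ("ok", 0.9) if "rewritten" in q else ("weak", 0.3)

ans, conf = answer_with_critique("orig", fake_retrieve, lambda q: q + " rewritten")
print(ans, conf)   # ok 0.9
```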

HyDE Retrieval

Hypothetical answer → embedding → search. Auto-detection with 5s timeout.
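The hypothetical-answer step can be illustrated with a stand-in embedder; the bag-of-letters embedding and all names below are toy assumptions, with the LLM generation step injected as a callable.

```python
# HyDE sketch: generate a hypothetical answer, embed it, and search with
# that vector instead of the raw query. The embedder is a toy stand-in.
def embed(text):
    vec = [0.0] * 26                       # bag-of-letters "embedding"
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def hyde_search(query, docs, generate_hypothesis):
    hypothetical = generate_hypothesis(query)   # LLM call in the real system
    qv = embed(hypothetical)
    return max(docs, key=lambda d: cosine(qv, embed(d)))

print(hyde_search(
    "who discovered penicillin?",
    ["fleming discovered penicillin", "weekly weather report"],
    lambda q: "penicillin was discovered by fleming",
))
```

The key idea survives the toy embedder: the search vector comes from a plausible answer, which usually lies closer to the relevant document than the question does.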

Contextual Retrieval

Chunk enrichment per Anthropic method. +67% retrieval accuracy.

Embedding Drift Detection

BullMQ worker monitors drift >10%. Automatic cache invalidation.
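The drift check itself reduces to a few lines; the 10% threshold comes from the text, while the probe-vector approach and names are assumptions (the real system runs this inside a BullMQ worker).

```python
# Drift check sketch: if the embedding of a reference probe moves by more
# than 10% (cosine distance), signal cache invalidation. Names are assumed.
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return 1.0 - dot / (na * nb)

def check_drift(baseline, current, threshold=0.10):
    drifted = cosine_distance(baseline, current) > threshold
    return "invalidate_cache" if drifted else "ok"

print(check_drift([1.0, 0.0], [1.0, 0.0]))   # ok
print(check_drift([1.0, 0.0], [0.0, 1.0]))   # invalidate_cache
```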

Autonomous Agents with Safety Bounds

Multi-agent orchestration with structured debate, dynamic team composition, and recursive self-improvement.

Multi-agent team architecture

HyperAgent L0–L2 — recursive self-improvement
Debate Protocol — 3 rounds: challenge → response → resolve
Orchestrator
Persistent Loops — pause · resume · cancel
Researcher · Writer · Reviewer · Coder

Debate Protocol

3-round debate on disagreement. Challenge → Response → Resolution.
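The three rounds can be sketched as a small driver; the agent callables and the escalation rule here are hypothetical stand-ins, not the production protocol.

```python
# Minimal Challenge -> Response -> Resolution skeleton with escalation when
# the resolver reaches no verdict. Agents are plain callables for illustration.
def run_debate(proposal, challenger, responder, resolver):
    transcript = []
    challenge = challenger(proposal)                   # round 1: challenge
    transcript.append(("challenge", challenge))
    response = responder(proposal, challenge)          # round 2: response
    transcript.append(("response", response))
    verdict = resolver(proposal, challenge, response)  # round 3: resolution
    transcript.append(("resolution", verdict))
    if verdict is None:                                # unresolved: escalate
        return "escalate", transcript
    return verdict, transcript

verdict, log = run_debate(
    "use the cache",
    challenger=lambda p: "the cache may be stale",
    responder=lambda p, c: "entries expire after 60s",
    resolver=lambda p, c, r: "accept",
)
print(verdict)   # accept
```

Bounding the exchange to exactly three rounds is what keeps the protocol cheap and terminating; anything unresolved leaves the loop via escalation instead of more rounds.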

Dynamic Team Builder

5 specialist agents, automatically composed by task type.

HyperAgent L0–L2

Recursive self-improvement with daily budgets, sandbox tests, and auto-rollback.

Persistent Agent Loops

Pause/resume with state checkpointing. Long-running tasks over days.

A2A Protocol

Agent-to-agent communication per Google standard. /.well-known/agent.json discovery.

Implicit Feedback

Automatic behavior detection without explicit labeling.

AI That Thinks About Thinking

Curiosity, prediction, metacognition — three pillars of cognitive intelligence.

Curiosity Engine

Quantified gap score: query frequency × fact density × confidence × RAG quality.
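The product formula above leaves the polarity of its signals implicit; in this sketch a gap is large when a topic is queried often but fact density, confidence, and RAG quality are low. That reading, and the plain unweighted product, are assumptions for illustration.

```python
# One plausible reading of the quantified gap score: frequent queries over
# poorly covered, low-confidence topics yield a large gap. Inputs in [0, 1].
def gap_score(query_frequency, fact_density, confidence, rag_quality):
    return query_frequency * (1 - fact_density) * (1 - confidence) * (1 - rag_quality)

hot_gap = gap_score(0.9, 0.1, 0.2, 0.3)        # frequently asked, poorly covered
well_covered = gap_score(0.9, 0.9, 0.9, 0.9)   # frequently asked, well covered
print(hot_gap > well_covered)   # True
```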

Prediction Engine

Temporal + sequential pattern recognition. Learns from prediction errors.
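Error-driven learning of this kind is commonly written as a delta rule: each prediction error nudges the estimate, so repeated misses drive fast correction. The class below is an illustrative toy, not the Prediction Engine's actual model.

```python
# Delta-rule sketch of error-driven intent prediction: the estimate is
# corrected in proportion to each prediction error. Learning rate assumed.
class IntentPredictor:
    def __init__(self, lr=0.3):
        self.lr = lr
        self.estimate = 0.5                  # P(user wants action X), uninformed prior

    def update(self, observed):
        error = observed - self.estimate     # prediction error
        self.estimate += self.lr * error     # correct toward the observation
        return abs(error)

p = IntentPredictor()
errors = [p.update(1.0) for _ in range(5)]   # user keeps choosing action X
print(round(p.estimate, 3), errors[0] > errors[-1])
```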

Metacognition

Confidence calibration, confusion detection, capability profiling.

Adaptive Thinking

4-tier thinking budgets: 1K → 16K → 64K → 128K tokens. Auto-detection. 60-80% cost savings.
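Tier selection could be gated by a cheap complexity heuristic like the sketch below; only the four budget sizes come from the text, while the signals and thresholds are invented for illustration.

```python
# Tiered thinking-budget selection: four budgets from 1K to 128K tokens,
# chosen by counting simple complexity signals. Signals are hypothetical.
TIERS = [1_000, 16_000, 64_000, 128_000]

def thinking_budget(query: str) -> int:
    signals = sum([
        len(query.split()) > 30,                  # long, detailed prompt
        "?" in query and query.count("?") > 1,    # multiple questions
        any(k in query.lower() for k in ("prove", "design", "compare")),
    ])
    return TIERS[min(signals, len(TIERS) - 1)]

print(thinking_budget("What time is it?"))               # 1000
print(thinking_budget("Compare these two designs"))      # 16000
```

Because each signal is a constant-time string check, the tier decision itself costs no model tokens, which is where the claimed savings would come from.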

91.3% of oracle accuracy at 1/106th the tokens

On LongMemEval-500, ZenBrain reaches 91.3% of long-context-oracle accuracy at a per-query token budget of 1/106th — the Pareto position for AI memory.

Pareto frontier curve — accuracy plotted against token cost. A bright amber peak marks the sweet spot where ZenBrain reaches 91% of oracle accuracy at 1/106th the cost.
Bonferroni-corrected p ≤ 6.2 × 10⁻³¹ across three independent LLM judges. ZenBrain wins 12 of 12 head-to-head comparisons against Letta, Mem0, and A-Mem.

ZenBrain vs. Competitors

Factual capability comparison โ€” as of March 2026.

Feature | ZenBrain | Mem0 | Letta | Zep | LangChain
Memory Layers | 7 | 2 | 3 | 2 | 1
Spaced Repetition (FSRS) | ✓ | — | — | — | —
Emotional Memory | ✓ | — | — | — | —
Hebbian KG Edges | ✓ | — | — | — | —
Sleep Consolidation | ✓ | — | — | — | —
Bayesian Confidence | ✓ | — | — | — | —
Graph Reasoning | ✓ | — | — | — | —
3-Layer GraphRAG | ✓ | — | — | — | —
Agentic RAG | ✓ | — | — | — | —
Debate Protocol | ✓ | — | — | — | —
Curiosity Engine | ✓ | — | — | — | —
Prediction Engine | ✓ | — | — | — | —
HyperAgent L0–L2 | ✓ | — | — | — | —
Multi-Context Isolation | ✓ | — | — | — | —
Open Source | ✓ | — | ✓ | ✓ | ✓
Tests | 11,589 | — | — | — | —

Explore the Code

ZenBrain is open source. All algorithms, all tests, all documentation — openly available.