The Science Behind ZenBrain
40 technical innovations. 8 globally unique. 12 neuroscience-inspired algorithms. 9,228 tests. All openly documented.
“Memory is not storage — it is a living process of forgetting, consolidation, and rediscovery. We translated this process into software.”
What exists nowhere else
These eight technologies exist in no other production AI system — neither commercial nor open source.
Sleep Consolidation Engine
First production AI with idle-time memory consolidation — inspired by hippocampal replay (Stickgold & Walker 2013). Weak connections are pruned, stable ones strengthened.
No competitor implements sleep-time consolidation. Unique across the entire AI industry.
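A minimal sketch (in Python, for brevity) of what such an idle-time pass can look like. The edge fields, the pruning threshold, and the replay flag are illustrative assumptions, not ZenBrain's actual API; the 1.5× stability boost mirrors the "+50%" figure cited on this page.

```python
# Illustrative sketch of an idle-time consolidation pass (not ZenBrain's API).
# Edges below a weight threshold are pruned (synaptic homeostasis); surviving
# edges that were replayed during "sleep" get a stability boost.

PRUNE_THRESHOLD = 0.1   # assumption: weights live in [0, 1]
STABILITY_BOOST = 1.5   # the "+50% stability" figure from this page

def consolidate(edges):
    """edges: list of dicts with 'weight', 'stability', 'replayed' keys."""
    kept = []
    for edge in edges:
        if edge["weight"] < PRUNE_THRESHOLD:
            continue  # prune the weak connection
        if edge["replayed"]:
            edge["stability"] *= STABILITY_BOOST
        kept.append(edge)
    return kept

graph = [
    {"weight": 0.05, "stability": 1.0, "replayed": False},  # pruned
    {"weight": 0.80, "stability": 1.0, "replayed": True},   # boosted
    {"weight": 0.40, "stability": 2.0, "replayed": False},  # kept unchanged
]
result = consolidate(graph)
```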
7-Layer Memory Coordinator
First unified orchestrator for 7 memory layers in production — from working memory to long-term storage. Based on Global Workspace Theory (Baars 1988).
Mem0 has 2 layers, Letta 3, Zep 2. ZenBrain has 7 — the deepest memory architecture in open source.
A-RAG (Autonomous Retrieval Agent)
Meta-agent that creates retrieval plans before any search is executed. Heuristic-first with LLM fallback, max 4 dependent steps.
First planning layer for multi-step retrieval. Heuristic-based — no LLM costs for simple queries.
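The heuristic-first idea can be sketched as follows. The specific rules, the function names, and the example query are assumptions for illustration; only the "heuristics before LLM" ordering and the 4-step cap come from the text.

```python
# Illustrative sketch: heuristic-first retrieval planning with an LLM fallback.
# Simple queries never reach the LLM planner, so they incur no LLM cost.

MAX_STEPS = 4  # "max 4 dependent steps" per the text

def plan_retrieval(query, llm_planner=None):
    """Return a list of at most MAX_STEPS (strategy, subquery) steps."""
    steps = []
    # Cheap heuristics first: short or direct questions get a single search.
    if "?" in query or len(query.split()) <= 8:
        steps.append(("vector_search", query))
    # A crude multi-part heuristic: split conjoined questions.
    if " and " in query.lower():
        for part in query.lower().split(" and "):
            steps.append(("vector_search", part.strip()))
    # Only complex queries that match no heuristic fall back to the LLM.
    if not steps and llm_planner is not None:
        steps = llm_planner(query)
    return steps[:MAX_STEPS]

plan = plan_retrieval("Who founded Acme and when was it acquired?")
```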
Multi-Agent Debate Protocol
Structured 3-round debate protocol when agents disagree. Challenge → Response → Resolution with automatic escalation.
First structured debate protocol for multi-agent systems in production.
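One way to picture the round structure, sketched with stand-in agents. The agent interface and the early-exit rule are assumptions; the three round names and the escalation on deadlock come from the text.

```python
# Illustrative 3-round debate loop: Challenge -> Response -> Resolution,
# escalating if the agents still disagree after the final round.

ROUNDS = ("challenge", "response", "resolution")

def run_debate(agent_a, agent_b, claim):
    """Each agent is a callable (round_name, claim) -> (position, confidence)."""
    transcript = []
    for round_name in ROUNDS:
        pos_a, _ = agent_a(round_name, claim)
        pos_b, _ = agent_b(round_name, claim)
        transcript.append((round_name, pos_a, pos_b))
        if pos_a == pos_b:  # agreement ends the debate early
            return {"resolved": True, "position": pos_a, "transcript": transcript}
    # No agreement after the resolution round: escalate automatically.
    return {"resolved": False, "escalated": True, "transcript": transcript}

stubborn = lambda r, c: ("yes", 0.9)
convinced = lambda r, c: ("no", 0.8) if r == "challenge" else ("yes", 0.6)
outcome = run_debate(stubborn, convinced, "deploy the change")
```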
Curiosity Engine
Automatic knowledge gap detection with quantified gap score. Analyzes query history, fact density, and confidence — recommends targeted actions.
No competitor implements systematic curiosity. Inspired by Loewenstein's information gap theory (1994).
Prediction Engine
User intent prediction from temporal and sequential patterns. Learns from its prediction errors: every miss updates the model, so the next prediction improves.
No competitor predicts user intentions from behavioral patterns.
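A toy version of error-driven sequence prediction, assuming a simple bigram model (ZenBrain's actual patterns are temporal as well as sequential; this sketch shows only the sequential half).

```python
# Illustrative sequential predictor: every observed transition, including the
# ones it mispredicted, updates the counts, so errors improve later guesses.
from collections import defaultdict

class NextActionPredictor:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def predict(self):
        """Most frequent follower of the last action, or None if unseen."""
        followers = self.counts.get(self.last)
        if not followers:
            return None
        return max(followers, key=followers.get)

    def observe(self, action):
        if self.last is not None:
            self.counts[self.last][action] += 1  # learning happens here
        self.last = action

p = NextActionPredictor()
for a in ["open_editor", "run_tests", "open_editor", "run_tests", "open_editor"]:
    p.observe(a)
guess = p.predict()  # "run_tests" is the most frequent follower of "open_editor"
```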
HyperAgent L0–L2
3-level recursive self-improvement with formal safety bounds. Level 0 optimizes knowledge, Level 1 optimizes Level-0 strategies, Level 2 optimizes Level-1 parameters.
Unique 3-level recursion with immutable core properties, daily budgets, and automatic rollback on quality regression.
Cross-Context Entity Merging
Detection and merging of entities across 4 isolated contexts (Personal, Work, Learning, Creative). Bayesian confidence updates on conflicts.
No competitor manages cross-context entities.
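The Bayesian update on conflicting evidence is standard; here is a minimal worked example. The likelihood values are assumptions chosen for illustration, not ZenBrain's calibrated parameters.

```python
# Illustrative Bayesian confidence update for an entity merge: agreement on a
# strong attribute raises the merge probability, disagreement lowers it.

def bayes_update(prior, likelihood_if_same, likelihood_if_different):
    """Posterior probability that two records describe the same entity."""
    num = prior * likelihood_if_same
    denom = num + (1 - prior) * likelihood_if_different
    return num / denom

# Two contexts agree on an email address (strong evidence for a merge) ...
p = bayes_update(prior=0.5, likelihood_if_same=0.95, likelihood_if_different=0.05)
# ... but disagree on the employer (weak evidence against it).
p = bayes_update(prior=p, likelihood_if_same=0.4, likelihood_if_different=0.6)
```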
HiMeS — 7 Layers of Neuroscience-Inspired Memory
Inspired by the Atkinson-Shiffrin model (1968) and modern cognitive science.
Working Memory
Active focus — 7±2 items per Miller's Magical Number. Fastest access, shortest lifespan.
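A bounded working-memory buffer like this is straightforward to sketch; the eviction policy (least-recently-used) is an assumption, while the capacity of 7 comes from Miller's 7±2.

```python
# Illustrative working memory: capacity 7 per Miller's 7±2; refreshing an item
# moves it to the front, and the stalest item is evicted on overflow.
from collections import OrderedDict

class WorkingMemory:
    def __init__(self, capacity=7):
        self.capacity = capacity
        self.items = OrderedDict()

    def focus(self, key, value=None):
        """Add or refresh an item; evict the least recently used on overflow."""
        if key in self.items:
            self.items.move_to_end(key)
        else:
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # drop the stalest item

wm = WorkingMemory()
for i in range(9):
    wm.focus(f"item{i}")  # items 0 and 1 fall out of focus
```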
Short-Term Memory (Phase 125)
Session context and conversation continuity. Survives the current session.
Episodic (Phase 125)
Concrete experiences with emotional tagging. 400+ keyword lexicon (DE+EN) for arousal/valence scoring.
Semantic (Phase 125)
Factual knowledge with FSRS scheduling. Spaced repetition optimizes recall timing — 30% better than SM-2.
Procedural (Phase 125)
Workflows and skills. Tool chains are analyzed and optimized.
Core Memory (Phase 127)
Immutable foundations following the Letta pattern. Pinned facts that are never forgotten.
Long-Term Memory (Phase 126)
Persistent knowledge with Ebbinghaus forgetting curve, Hebbian reinforcement, and Bayesian confidence.
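A compact sketch of how these three mechanisms compose, using figures cited on this page (2.7× slower decay for emotional memories, ×1.09 Hebbian reinforcement per co-activation). The half-life default and function shapes are illustrative assumptions.

```python
# Illustrative retention model: Ebbinghaus-style exponential decay with a
# configurable half-life, modulated for emotional memories and reinforced
# multiplicatively by Hebbian co-activation.
import math

def retention(hours_elapsed, half_life_hours=24.0, emotional=False):
    """Fraction of a memory still retrievable after hours_elapsed."""
    if emotional:
        half_life_hours *= 2.7  # emotional memories decay 2.7x slower
    return 0.5 ** (hours_elapsed / half_life_hours)

def hebbian_reinforce(edge_weight, activations):
    """Each co-activation strengthens the edge by the x1.09 factor."""
    return edge_weight * (1.09 ** activations)

neutral = retention(24)                     # exactly one half-life elapsed
emotional = retention(24, emotional=True)   # decays more slowly
weight = hebbian_reinforce(1.0, 3)          # three co-activations
```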
“Any AI that cannot forget will eventually drown in its own noise. Selective forgetting is not a weakness — it is the foundation of intelligence.”
12 Algorithms Inspired by Neuroscience
FSRS Spaced Repetition
open-spaced-repetition/fsrs4anki: Optimal review timing at ~50% retention
Ebbinghaus Forgetting Curve
Ebbinghaus (1885): Exponential decay with configurable half-life
Hebbian Learning
Hebb (1949): Co-activated facts strengthen connections (×1.09/activation)
Homeostatic Normalization
Turrigiano (2004): Prevents runaway growth of edge weights
Sleep Consolidation
Stickgold & Walker (2013): Hippocampal replay with +50% stability boost
Synaptic Homeostasis
Tononi & Cirelli (2006): Weak connections pruned during sleep
Emotional Modulation
LeDoux (1996): Emotional memories decay 2.7× slower
Bayesian Propagation
Pearl (1988): Confidence updates across the entire knowledge graph
Global Workspace Theory
Baars (1988): Conscious access through competitive context assembly
Information Gain Scoring
Shannon (1948): Entropy-based prioritization of new facts
Knowledge Gap Theory
Loewenstein (1994): Systematic detection of missing knowledge
Working Memory Capacity
Miller (1956): 7±2 active items in working memory
Intelligent Knowledge Retrieval
6 strategies, dynamically selected per query. Not one pipeline — an adaptive system.
A-RAG Planning
Meta-agent plans retrieval steps before execution. Heuristic-first, LLM fallback.
GraphRAG 3-Layer
Event subgraph + semantic graph + community summaries. 5 parallel strategies.
Self-RAG Critique
Automatic reformulation at confidence < 0.5. 4-component scoring.
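A minimal sketch of the critique gate. The four component names are assumptions (the text says only "4-component scoring"); the 0.5 floor comes from the text.

```python
# Illustrative Self-RAG critique gate: average four sub-scores; below the 0.5
# confidence floor, the query should be reformulated and retried.

CONFIDENCE_FLOOR = 0.5

def critique(relevance, support, utility, coverage):
    """4-component score in [0, 1]; the component names are illustrative."""
    return (relevance + support + utility + coverage) / 4

def needs_reformulation(scores):
    return critique(*scores) < CONFIDENCE_FLOOR

weak = needs_reformulation((0.3, 0.4, 0.5, 0.2))    # low-confidence retrieval
strong = needs_reformulation((0.8, 0.7, 0.9, 0.6))  # accepted as-is
```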
HyDE Retrieval
Hypothetical answer → embedding → search. Auto-detection with 5s timeout.
Contextual Retrieval
Chunk enrichment per Anthropic method. +67% retrieval accuracy.
Embedding Drift Detection
BullMQ worker monitors drift >10%. Automatic cache invalidation.
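The drift check itself is simple to sketch (here in Python rather than the Node/BullMQ worker the text describes). Comparing centroid cosine distance is an assumption about how the ">10%" drift is measured.

```python
# Illustrative drift check: compare centroids of old vs. new embedding batches
# by cosine distance and flag cache invalidation above a 10% threshold.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def drift_exceeded(old_vecs, new_vecs, threshold=0.10):
    """True when the embedding distribution has shifted past the threshold."""
    return cosine_distance(centroid(old_vecs), centroid(new_vecs)) > threshold

old = [[1.0, 0.0], [0.9, 0.1]]
same_model = [[1.0, 0.05], [0.95, 0.0]]   # negligible drift
new_model = [[0.2, 1.0], [0.1, 0.9]]      # large drift: invalidate caches
```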
Autonomous Agents with Safety Bounds
Multi-agent orchestration with structured debate, dynamic team composition, and recursive self-improvement.
Debate Protocol
3-round debate on disagreement. Challenge → Response → Resolution.
Dynamic Team Builder
5 specialist agents, automatically composed by task type.
HyperAgent L0–L2
Recursive self-improvement with daily budgets, sandbox tests, and auto-rollback.
Persistent Agent Loops
Pause/resume with state checkpointing. Long-running tasks over days.
A2A Protocol
Agent-to-agent communication per Google standard. /.well-known/agent.json discovery.
Implicit Feedback
Automatic behavior detection without explicit labeling.
AI That Thinks About Thinking
Curiosity, prediction, metacognition — three pillars of cognitive intelligence.
Curiosity Engine
Quantified gap score: query frequency × fact density × confidence × RAG quality.
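One plausible reading of that formula, sketched below: query frequency scales the score up, while dense, confident, well-retrieved knowledge scales it down. Whether the factors combine exactly this way is an assumption; the text only names the four inputs.

```python
# Illustrative gap score: frequent queries over thin, low-confidence knowledge
# score as the largest gaps. The combination rule is an assumption.

def gap_score(query_frequency, fact_density, confidence, rag_quality):
    """All inputs in [0, 1]; result in [0, 1]."""
    knowledge = fact_density * confidence * rag_quality
    return query_frequency * (1.0 - knowledge)

# A hot topic the system knows little about -> large gap, worth acting on.
hot_topic_thin_knowledge = gap_score(0.9, 0.2, 0.5, 0.6)
# A rarely queried topic with solid coverage -> small gap.
rare_topic_good_knowledge = gap_score(0.1, 0.9, 0.9, 0.9)
```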
Prediction Engine
Temporal + sequential pattern recognition. Learns from prediction errors.
Metacognition
Confidence calibration, confusion detection, capability profiling.
Adaptive Thinking
4-tier thinking budgets: 1K→16K→64K→128K tokens. Auto-detection. 60-80% cost savings.
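The tier selection can be pictured like this. The complexity heuristic (word count plus marker words) is entirely an assumption; the four budgets come from the text.

```python
# Illustrative 4-tier budget selection: a crude complexity estimate maps each
# query to one of the 1K/16K/64K/128K thinking budgets named on this page.

TIERS = [1_000, 16_000, 64_000, 128_000]

def thinking_budget(query):
    words = len(query.split())
    hard_markers = sum(m in query.lower() for m in ("prove", "compare", "plan", "why"))
    score = words / 50 + hard_markers  # assumption: a stand-in complexity signal
    if score < 0.5:
        return TIERS[0]
    if score < 1.5:
        return TIERS[1]
    if score < 2.5:
        return TIERS[2]
    return TIERS[3]

simple = thinking_budget("What time is it?")
hard = thinking_budget("Compare the two plans and prove why the second fails")
```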
ZenBrain vs. Competitors
Factual capability comparison — as of March 2026.
| Feature | ZenBrain | Mem0 | Letta | Zep | LangChain |
|---|---|---|---|---|---|
| Memory Layers | 7 | 2 | 3 | 2 | 1 |
| Spaced Repetition (FSRS) | ✓ | — | — | — | — |
| Emotional Memory | ✓ | — | — | — | — |
| Hebbian KG Edges | ✓ | — | — | — | — |
| Sleep Consolidation | ✓ | — | — | — | — |
| Bayesian Confidence | ✓ | — | — | — | — |
| Graph Reasoning | ✓ | — | — | — | — |
| 3-Layer GraphRAG | ✓ | — | — | — | — |
| Agentic RAG | ✓ | — | — | — | — |
| Debate Protocol | ✓ | — | — | — | — |
| Curiosity Engine | ✓ | — | — | — | — |
| Prediction Engine | ✓ | — | — | — | — |
| HyperAgent L0–L2 | ✓ | — | — | — | — |
| Multi-Context Isolation | ✓ | — | — | — | — |
| Open Source | ✓ | — | ✓ | ✓ | ✓ |
| Tests | 9,228 | — | — | — | — |
Explore the Code
ZenBrain is open source. All algorithms, all tests, all documentation — openly available.