Apache 2.0 · TypeScript · Zero Dependencies

Neuroscience Memory
for AI Agents.

The cognitive core extracted from ZenAI — tested, proven, open source. 15 algorithms (9 foundational + 6 PMA components) that teach your AI to remember, forget, and sleep.

Hebbian Learning · FSRS Spaced Repetition · Sleep Consolidation · Bayesian Confidence · Ebbinghaus Decay · Emotional Tagging · 7-Layer Memory · MemoryCoordinator · Cross-Context
276 Tests
15 Algorithms
0 Dependencies
DOI · Open Access
The Story

Extracted from 322,000 lines of production code.

ZenBrain is not an experiment. It's the core of ZenAI — an AI operating system with 11,589 tests and 7 cognitive pillars. We extracted the memory algorithms so you can use them.

ZenAI: 322K LOC · 11,589 Tests · Production AI OS
Extraction: 15 Algorithms · 7 Layers · Neuroscience Core
ZenBrain: npm Packages · 276 Tests · Open Source Apache 2.0
Algorithms

15 Algorithms. 9 Foundational + 6 PMA Components.

ZenBrain ships 15 neuroscience-grounded algorithms (zero dependencies, open-access): 9 foundational mechanisms — FSRS, Hebbian, Sleep Consolidation, Bayesian Confidence, and others — plus 6 Predictive Memory Architecture (PMA) components that govern memory dynamics: NeuromodulatorEngine, ReconsolidationEngine, TripleCopyMemory, PriorityMap, StabilityProtector, MetacognitiveMonitor.

Cooperative survival network — 15 interconnected nodes in deep teal with cascading amber failure ripples illustrating how ZenBrain's algorithms support each other under stress
9 of 15 algorithms become individually critical under stress (decay = 0.25/day, 60-day aging). What appears redundant under normal conditions is in fact cooperatively redundant.

Hebbian Learning

Strengthen connections through co-activation

Hebb 1949
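Hebb's rule fits in a few lines. The sketch below illustrates the general mechanism only; the function name, the learning rate of 0.1, and the [0, 1] weight cap are assumptions, not ZenBrain's actual implementation.

```typescript
// Hebb's rule: neurons that fire together wire together.
// Δw = η · pre · post, capped so connection strength stays in [0, 1].
function hebbianUpdate(
  weight: number, // current connection strength, 0..1
  pre: number,    // pre-synaptic activation, 0..1
  post: number,   // post-synaptic activation, 0..1
  eta = 0.1       // learning rate (assumed value)
): number {
  return Math.min(1, weight + eta * pre * post);
}

// Two memories recalled together strengthen their link:
let w = 0.3;
w = hebbianUpdate(w, 1, 1); // co-activation strengthens the connection
w = hebbianUpdate(w, 1, 0); // no post-synaptic activity: unchanged
```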

FSRS Spaced Repetition

Optimal review scheduling

Wozniak 2022
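The core of FSRS is a retrievability curve: recall probability as a function of elapsed time and a per-memory stability value. The sketch below uses one published form of that curve (FSRS v4); ZenBrain's actual scheduler internals may differ.

```typescript
// FSRS v4 retrievability: R(t, S) = (1 + t / (9 · S))^(-1)
// t: days since last review, S: stability in days.
function retrievability(t: number, stability: number): number {
  return 1 / (1 + t / (9 * stability));
}

// Invert the curve to schedule the next review at a target retention:
function nextInterval(stability: number, targetRetention = 0.9): number {
  return 9 * stability * (1 / targetRetention - 1);
}

const r0 = retrievability(0, 10); // 1.0 — just reviewed
const t90 = nextInterval(10);     // ≈ 10 days until R drops to 0.9
```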

Sleep Consolidation

Memory replay during sleep — like the brain

Stickgold & Walker 2013

Ebbinghaus Decay

Forgetting curve with adaptive intervals

Ebbinghaus 1885
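The forgetting curve itself is a one-liner, and "adaptive intervals" fall out of inverting it. A minimal sketch, assuming the classic exponential form R = e^(-t/S); parameter names and the 0.5 review threshold are illustrative, not ZenBrain's API.

```typescript
// Ebbinghaus forgetting curve: R(t) = e^(-t / S),
// where S is memory strength in the same unit as t (days here).
function retention(tDays: number, strength: number): number {
  return Math.exp(-tDays / strength);
}

// Adaptive interval: review when retention falls below a threshold.
// Solve e^(-t/S) = threshold for t.
function daysUntilReview(strength: number, threshold = 0.5): number {
  return -strength * Math.log(threshold);
}
```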

Bayesian Confidence

Belief updates with prior + evidence

Bayes 1763
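A prior-plus-evidence update is just Bayes' rule applied to a binary belief. The sketch below shows the arithmetic; the function name and example numbers are illustrative.

```typescript
// Bayes' rule for a binary hypothesis H given evidence E:
//   P(H|E) = P(E|H)·P(H) / (P(E|H)·P(H) + P(E|¬H)·P(¬H))
function bayesianUpdate(
  prior: number,         // P(H): confidence before the evidence
  likelihood: number,    // P(E|H): how expected E is if H is true
  likelihoodNotH: number // P(E|¬H): how expected E is if H is false
): number {
  const numerator = likelihood * prior;
  const evidence = numerator + likelihoodNotH * (1 - prior);
  return numerator / evidence;
}

// A memory at 0.5 confidence, confirmed by a fairly reliable observation:
const posterior = bayesianUpdate(0.5, 0.8, 0.2); // → 0.8
```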

Emotional Tagging

Emotional markers for better recall

Damasio 1994

Activation Spreading

Associative network activation

Collins & Loftus 1975
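Associative activation spreads from a cue through weighted links, attenuating per hop. An illustrative sketch of the Collins & Loftus idea; the graph shape, 0.5 decay factor, and 0.05 cutoff are assumptions, not ZenBrain's implementation.

```typescript
// Spreading activation over an associative network.
type Graph = Map<string, Array<[string, number]>>; // node -> [neighbor, link weight]

function spreadActivation(
  graph: Graph,
  source: string,
  decay = 0.5,     // attenuation per hop (assumed)
  threshold = 0.05 // stop propagating below this energy (assumed)
): Map<string, number> {
  const activation = new Map<string, number>([[source, 1]]);
  const queue: Array<[string, number]> = [[source, 1]];
  while (queue.length > 0) {
    const [node, energy] = queue.shift()!;
    for (const [neighbor, weight] of graph.get(node) ?? []) {
      const incoming = energy * weight * decay;
      // Keep only the strongest path; threshold guarantees termination.
      if (incoming < threshold || incoming <= (activation.get(neighbor) ?? 0)) continue;
      activation.set(neighbor, incoming);
      queue.push([neighbor, incoming]);
    }
  }
  return activation;
}

// 'TypeScript' activates 'generics' directly and 'Java' two hops away:
const g: Graph = new Map([
  ['TypeScript', [['generics', 0.9], ['JavaScript', 0.8]]],
  ['JavaScript', [['Java', 0.4]]],
]);
const act = spreadActivation(g, 'TypeScript');
```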

Temporal Context

Time-based memory encoding

Howard & Kahana 2002

Memory Consolidation

STM→LTM transfer with importance scoring

McClelland et al. 1995

Confidence Intervals

95% CI for all probabilistic outputs

Wilson 1927
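The Wilson score interval gives a well-behaved 95% CI for a proportion even at small sample sizes, which is why it beats the naive normal approximation for sparse recall statistics. A self-contained sketch of the standard formula; it is not ZenBrain's exported API.

```typescript
// Wilson (1927) score interval for a binomial proportion.
function wilsonInterval(
  successes: number,
  trials: number,
  z = 1.96 // ~95% confidence
): [number, number] {
  const p = successes / trials;
  const z2 = z * z;
  const denom = 1 + z2 / trials;
  const center = (p + z2 / (2 * trials)) / denom;
  const half =
    (z * Math.sqrt((p * (1 - p)) / trials + z2 / (4 * trials * trials))) / denom;
  return [Math.max(0, center - half), Math.min(1, center + half)];
}

// 8 successful recalls out of 10 attempts:
const [lo, hi] = wilsonInterval(8, 10); // ≈ [0.49, 0.94]
```

Note that the interval is asymmetric around 0.8 and never leaves [0, 1], unlike the normal approximation.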

Cross-Context Transfer

Share knowledge across contexts

Tulving 1972

Retention Visualization

Export Ebbinghaus curves as data points

Ebbinghaus 1885

Neuromodulation

Dopamine, NE, 5-HT, ACh — four channels with tonic + phasic dynamics

Schultz 1997 · Aston-Jones 2005
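Tonic plus phasic dynamics can be pictured as a slow baseline with an exponentially decaying burst on top. This is a generic sketch of that idea; the function name, the 0..1 scaling, and the 2-second burst decay constant are assumptions, not the NeuromodulatorEngine API.

```typescript
// Tonic level: slow-moving baseline. Phasic burst: transient spike that
// decays with time constant tau. Total signal is their sum.
function modulatorLevel(
  tonic: number,      // baseline level, 0..1
  phasicPeak: number, // burst amplitude at t = 0 (e.g. a reward spike)
  tSeconds: number,   // time since the burst
  tauSeconds = 2      // assumed decay constant of the burst
): number {
  return tonic + phasicPeak * Math.exp(-tSeconds / tauSeconds);
}

const atSpike = modulatorLevel(0.2, 0.6, 0);  // 0.8 at the moment of the burst
const later = modulatorLevel(0.2, 0.6, 30);   // back near the 0.2 baseline
```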

Reconsolidation

Memory becomes labile on retrieval — four PE-gated update modes with rollback

Nader 2000 · Schiller 2010

Triple-Copy Memory

Three traces with divergent dynamics — fast (4h), medium (14d), deep (logarithmic)

Squire & Bayley 2007
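The divergent time constants quoted above can be sketched as three curves over the same memory. A minimal illustration assuming exponential decay at the quoted half-lives (4 h fast, 14 d medium) and a logarithmic deep trace normalized over the 60-day horizon mentioned earlier; none of this is ZenBrain's actual TripleCopyMemory code.

```typescript
// Three traces of one memory with divergent dynamics.
const HOURS = 1 / 24; // fraction of a day

function fastTrace(tDays: number): number {
  return Math.exp((-Math.LN2 * tDays) / (4 * HOURS)); // half-life: 4 hours
}
function mediumTrace(tDays: number): number {
  return Math.exp((-Math.LN2 * tDays) / 14); // half-life: 14 days
}
function deepTrace(tDays: number): number {
  // Slow logarithmic consolidation, reaching 1 at the 60-day horizon.
  return Math.min(1, Math.log1p(tDays) / Math.log1p(60));
}

// After one day: the fast copy is nearly gone, the medium copy has barely
// moved, and the deep copy has only begun to form.
const day1 = [fastTrace(1), mediumTrace(1), deepTrace(1)];
```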
Orchestration

One Coordinator. 7 Memory Layers.

The MemoryCoordinator orchestrates all layers — from Working Memory to Sleep Consolidation. One call. The system decides.

Three parallel memory streams flowing left to right with divergent decay curves — fast (4 hours), medium (14 days), and deep (logarithmic growth) copies
MemoryCoordinator orchestrates all 7 layers. On top, the 6-component Predictive Memory Architecture (PMA) governs memory dynamics — TripleCopyMemory shown here is one of them.
Auto-Routing: store() auto-detects the correct layer
Cross-Layer Recall: one query, all layers simultaneously
Consolidation: STM→LTM transfer with importance scoring
Decay Management: Ebbinghaus + FSRS automatically
Review Queue: FSRS-based review scheduling
import { MemoryCoordinator } from '@zensation/core';

const memory = new MemoryCoordinator();

// Store — auto-routes
await memory.store(
  'TypeScript has generics',
  { context: 'learning' }
);

// Recall — cross-layer
const results = await memory.recall(
  'What do I know about TypeScript?'
);

// Sleep — consolidate
await memory.consolidate();
Working
Short-Term
Episodic
Semantic
Procedural
Core
Cross-Context
Quick Start

First memory in 30 seconds.

No setup. No config. No account.

Install
npm install @zensation/algorithms
npm install @zensation/core
Write code
import { MemoryCoordinator } from '@zensation/core';
import { HebbianLearning, FSRS } from '@zensation/algorithms';

const memory = new MemoryCoordinator({
  algorithms: [HebbianLearning, FSRS]
});

await memory.store('Project deadline is Friday', { context: 'work' });

const recall = await memory.recall(
  'When is the deadline?'
);
console.log(recall);
Result
{
  "content": "Project deadline is Friday",
  "confidence": 0.94,
  "layer": "short-term",
  "decay": 0.87,
  "nextReview": "2026-03-28T09:00:00Z"
}
Comparison

What other memory systems don't have.

Four competitors. All solve parts of the problem. None has Sleep Consolidation, 7 layers, or Confidence Intervals.

Feature Comparison at a Glance

| Feature | ZenBrain | Mem0 | Letta | Zep | LangMem |
|---|---|---|---|---|---|
| Memory Layers | 7 | 2 | 2 | 2 | 1 |
| Sleep Consolidation | ✓ | | | | |
| FSRS Spaced Rep. | ✓ | | | | |
| Hebbian Learning | ✓ | | | | |
| Confidence Intervals | ✓ | | | | |
| Emotional Tagging | ✓ | | | | |
| Cross-Context | ✓ | | | | |
| Ebbinghaus Decay | ✓ | | | | |
| Self-hosted | ✓ | | | | |
| Zero Dependencies | ✓ | | | | |
| Language | TypeScript-native | Python | Python | Python | Python |
| Peer-reviewed Basis | ✓ | | | | |
| Algorithms (open-source package) | 12 | 1 | 3 | 2 | 1 |
| Tests | 276 | ? | ? | ? | ? |

As of April 2026. Based on public documentation. ZenBrain ships 15 algorithms total — 9 foundational mechanisms plus 6 Predictive Memory Architecture (PMA) components governing memory dynamics. Full description in the research paper.

Architecture

Two Packages. One System.

@zensation/algorithms is the pure core — zero dependencies. @zensation/core orchestrates everything with the MemoryCoordinator.

Your Application
uses
@zensation/core
MemoryCoordinatorstore() · recall() · consolidate() · decay()
Working
Short-Term
Episodic
Semantic
Procedural
Core
Cross-Context
npm i @zensation/core
uses
@zensation/algorithms
0 deps
Hebbian
FSRS
Sleep
Ebbinghaus
Bayesian
npm i @zensation/algorithms
optional
adapter-postgres: pgvector · preview, published with v0.3
adapter-sqlite: zero-config · preview, published with v0.3
Community

Open source means: build together.

Apache 2.0. Contributions welcome. No CLA required.

What's next.

v0.2.0 (current) · March 2026
MemoryCoordinator, Sleep Consolidation, 276 tests

v0.3.0 · May 2026
@zensation/mcp-server — IDE integration (Cursor, VS Code, Claude)

v0.4.0 · June 2026
LOCOMO benchmark vs. Mem0/Zep/LangMem

v1.0.0 · Q3 2026
API stability, ZenBrain Cloud MVP
FAQ

Frequently asked questions.

What's the difference between ZenBrain and ZenAI?

ZenBrain is the extracted, open-source core of the memory algorithms. ZenAI is the complete AI operating system built on ZenBrain. Want to build your own system → ZenBrain. Want the finished product → ZenAI.

Do I need a database?

No. @zensation/algorithms is zero-dependency and runs completely in-memory. Optional persistence adapters — adapter-postgres (pgvector) and adapter-sqlite (zero-config) — are in preview and will be published on npm in v0.3.

Which LLMs are supported?

ZenBrain is LLM-agnostic. The algorithms work on embeddings — whether from OpenAI, Anthropic, Mistral, Ollama, or your own model.

Is ZenBrain production-ready?

The algorithms come from ZenAI (322K LOC, 11,589 tests, months in production). The extracted package has 276 of its own tests. API stability (Semver 1.0) is planned for Q3 2026.

How is ZenBrain different from Mem0?

Mem0 has 2 memory layers and focuses on a cloud API. ZenBrain has 7 layers, Sleep Consolidation, FSRS, Hebbian Learning, and Confidence Intervals — all self-hosted, zero dependencies, TypeScript-native.

Can I use @zensation/algorithms without core?

Yes. The algorithms package is completely standalone with zero dependencies. You can import individual algorithms and integrate them into existing systems without using the MemoryCoordinator.

Ready for AI
that remembers?

15 algorithms. 276 tests. Zero dependencies. Apache 2.0 — forever.

v0.2.1 · TypeScript · Apache 2.0