In March 2026 we had 9 foundational algorithms — vmPFC-FSRS, Two-Factor Synaptic, Sim-Selection Sleep, Bayesian Confidence, and five others. They structured what gets stored, where, and with what importance.
What they did not regulate: when a memory becomes labile, which update strategy a retrieval triggers, how the system reacts under bias pressure, and which memory deserves protection from overwriting right now.
That is the job of the Predictive Memory Architecture (PMA) — a second wave of six components formally described in the v6 ZenBrain paper. They sit on top of the 9 foundational algorithms and govern the lifecycle of memory over time.
Here is a compact explanation of the six.
1. NeuromodulatorEngine — four channels, tonic + phasic
The brain has no global learning rate. It has a whole toolkit of neuromodulators that act differently in different places depending on the situation. The NeuromodulatorEngine emulates four of them:
- Dopamine (DA) — VTA, exploration and novelty bias
- Norepinephrine (NE) — Locus Coeruleus, learning rate and attention to prediction error
- Serotonin (5-HT) — Raphe, consolidation patience
- Acetylcholine (ACh) — Basal Forebrain, attention ratio between old and new information
Each channel has a tonic baseline (b = 0.5) with homeostatic drift (τ_decay = 0.95) and phasic bursts on events (5-minute half-life). DA and 5-HT are linked by an opposition coupling (−0.3) — a realization of the serotonin-dopamine balance well-documented in reward processing.
The engine outputs four modulation parameters consumed by other PMA components: learning rate (NE-driven), exploration bias (DA-driven), consolidation patience (5-HT-driven), attention ratio (ACh-driven).
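To make the mechanics concrete, here is a minimal Python sketch of the tonic/phasic update with the DA/5-HT coupling. The class and method names (`NeuromodulatorEngine`, `burst`, `step`) and any magnitudes beyond those quoted above are illustrative, not the paper's API.

```python
class NeuromodulatorEngine:
    """Toy sketch: four channels with a tonic baseline, homeostatic drift,
    phasic bursts with a 5-minute half-life, and DA/5-HT opposition coupling."""

    BASELINE = 0.5        # tonic baseline b
    TAU_DECAY = 0.95      # per-step homeostatic drift back toward the baseline
    HALF_LIFE_MIN = 5.0   # phasic burst half-life in minutes
    COUPLING = -0.3       # DA <-> 5-HT opposition

    def __init__(self):
        self.tonic = {c: self.BASELINE for c in ("DA", "NE", "5HT", "ACh")}
        self.phasic = {c: 0.0 for c in self.tonic}

    def burst(self, channel: str, magnitude: float) -> None:
        """Phasic burst on an event (novelty -> DA, prediction error -> NE, ...)."""
        self.phasic[channel] += magnitude
        self.tonic[channel] = min(1.0, max(0.0, self.tonic[channel] + 0.1 * magnitude))
        # opposition coupling: a DA burst pushes 5-HT down, and vice versa
        if channel == "DA":
            self.phasic["5HT"] += self.COUPLING * magnitude
        elif channel == "5HT":
            self.phasic["DA"] += self.COUPLING * magnitude

    def step(self, dt_minutes: float) -> dict:
        """Advance time: decay phasic bursts, drift tonic levels home, emit outputs."""
        decay = 0.5 ** (dt_minutes / self.HALF_LIFE_MIN)
        for c in self.phasic:
            self.phasic[c] *= decay
            self.tonic[c] = self.BASELINE + self.TAU_DECAY * (self.tonic[c] - self.BASELINE)
        level = {c: self.tonic[c] + self.phasic[c] for c in self.tonic}
        return {"learning_rate": level["NE"],
                "exploration_bias": level["DA"],
                "consolidation_patience": level["5HT"],
                "attention_ratio": level["ACh"]}

engine = NeuromodulatorEngine()
engine.burst("NE", 0.6)              # e.g. a large prediction error
print(engine.step(dt_minutes=1.0))   # NE-driven learning rate is now elevated
```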
In the paper experiments the engine produces a mean tonic drift of 0.469 (6.2 % from baseline) over 1,000 simulated events — homeostatic stability is preserved — and a DA/5-HT correlation of −0.130 (p < 0.01), which validates the opposition coupling.
2. ReconsolidationEngine — four update modes, not one
The classical view of memory: stored once, then read, then maybe overwritten. The neuroscience reality is subtler. When a memory is retrieved, it becomes labile for about 10 minutes — it can be modified or erased — and then re-stabilizes in a possibly altered form. This is reconsolidation (Nader 2000, Schiller 2010).
The ReconsolidationEngine implements this with four modes, gated by the effective prediction error PE_eff:
mode(PE_eff) =
confirmed if PE_eff < 0.1
selective_edit if 0.1 ≤ PE_eff < 0.3
integration if 0.3 ≤ PE_eff < 0.7
new_episode if PE_eff ≥ 0.7
The effective PE is neuromodulator-gated: PE_eff = PE_raw · (1 + 0.3·NE − 0.2·5HT). High NE (stress, attention) amplifies PE; high 5-HT (patience, stability) dampens it.
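In code, the gating plus the four mode thresholds reduce to a few lines. A minimal sketch, with the function name and example values invented for illustration:

```python
def reconsolidation_mode(pe_raw: float, ne: float, serotonin: float) -> tuple:
    """Map a raw prediction error to one of the four update modes.
    ne and serotonin are the current channel levels in [0, 1]."""
    pe_eff = pe_raw * (1 + 0.3 * ne - 0.2 * serotonin)   # neuromodulator gating
    if pe_eff < 0.1:
        mode = "confirmed"        # prediction held, just restabilize the trace
    elif pe_eff < 0.3:
        mode = "selective_edit"   # small mismatch, patch details in place
    elif pe_eff < 0.7:
        mode = "integration"      # substantial mismatch, merge old and new
    else:
        mode = "new_episode"      # contradiction, encode a separate episode
    return mode, pe_eff

# the same raw error lands in different modes under stress vs. patience
print(reconsolidation_mode(0.28, ne=0.9, serotonin=0.2))   # -> ('integration', 0.344...)
print(reconsolidation_mode(0.28, ne=0.2, serotonin=0.9))   # -> ('selective_edit', 0.246...)
```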
Every reconsolidation event is logged together with a snapshot of the original — rollback is possible if the update later turns out to be a mistake. That is a safety property we have not seen in competing memory systems. In the paper experiments the PE-to-update-mode classification reaches ≥ 95 % accuracy, with correct contradiction detection in 100 % of test cases (precision = 1.0).
3. TripleCopyMemory — three traces, three time constants
Probably the most surprising component. Instead of storing a memory once, TripleCopyMemory stores it three times — with divergent decay dynamics:
S_fast(t) = S₀ · e^(−t/τ_f), τ_f = 4 h
S_med(t) = 0.8 · S₀ · e^(−t/τ_m), τ_m = 14 d
S_deep(t) = S₀ · log(1 + t/τ_d), τ_d = 7 d
- The fast copy delivers vivid immediate access that fades within hours.
- The medium copy persists across sessions with standard exponential decay.
- The deep copy grows logarithmically — it encodes the compressed essence and gets stronger over time.
The effective strength is S(t) = max(S_fast, S_med, S_deep). That produces a characteristic dominance transition: fast wins in the first hours, medium takes over at 1–3 days, deep dominates from 7+ days. At 30 days the composite strength retains 91.2 %, while pure Ebbinghaus baselines reach near-zero.
This matches the systems consolidation theory in neuroscience: detail-rich episodes fade, compressed gist representations survive.
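A quick numerical sketch of the three traces and the dominance handover. The paper's normalization of the logarithmic deep trace is not quoted above, so this sketch simply clamps it at S₀; names are illustrative:

```python
import math

TAU_FAST_H = 4.0     # fast trace time constant (hours)
TAU_MED_D = 14.0     # medium trace time constant (days)
TAU_DEEP_D = 7.0     # deep trace time constant (days)

def triple_copy_strength(t_days: float, s0: float = 1.0) -> dict:
    """Evaluate the three traces at time t and take the max as effective strength."""
    s_fast = s0 * math.exp(-(t_days * 24.0) / TAU_FAST_H)
    s_med = 0.8 * s0 * math.exp(-t_days / TAU_MED_D)
    # clamped at s0 here; the paper's normalization of the log trace is not shown above
    s_deep = min(s0, s0 * math.log(1 + t_days / TAU_DEEP_D))
    return {"fast": s_fast, "med": s_med, "deep": s_deep,
            "effective": max(s_fast, s_med, s_deep)}

# dominance handover: fast early on, medium after a day, deep from about a week
for t in (0.02, 1, 3, 7, 30):
    print(f"{t:5.2f} d", triple_copy_strength(t))
```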
4. PriorityMap — four-dimensional, with amygdala fast-path
Memory without prioritization gets buried in noise. The PriorityMap computes a four-dimensional score:
P = w_s · s + w_e · |v| + w_r · r + w_g · g
with saliency s, emotional valence v, reward relevance r, goal alignment g, and default weights (w_s, w_e, w_r, w_g) = (0.2, 0.25, 0.25, 0.3).
The decisive mechanism is the amygdala fast-path: at emotional intensity |v| > 0.6 the system guarantees P ≥ 0.5, regardless of the other dimensions. This mirrors McGaugh's (2004) finding that emotionally charged memories receive preferred consolidation — even when not task-relevant.
The weights are dynamically adjusted by neuromodulator state: DA amplifies saliency, NE amplifies emotion, ACh amplifies reward, 5-HT amplifies goal alignment. The same item can therefore receive different priority scores in different states, reproducing biological behavior.
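A minimal sketch of the score plus the fast-path floor. The neuromodulator-driven weight adjustment is omitted; the function name and example values are illustrative:

```python
def priority(s: float, v: float, r: float, g: float,
             weights: tuple = (0.2, 0.25, 0.25, 0.3)) -> float:
    """Four-dimensional priority with the amygdala fast-path.
    s: saliency, v: signed emotional valence, r: reward relevance, g: goal alignment."""
    w_s, w_e, w_r, w_g = weights
    p = w_s * s + w_e * abs(v) + w_r * r + w_g * g
    if abs(v) > 0.6:            # amygdala fast-path: emotional items get a floor
        p = max(p, 0.5)
    return p

# an emotionally intense but otherwise irrelevant item still clears P >= 0.5
print(priority(s=0.1, v=-0.8, r=0.0, g=0.1))   # -> 0.5
print(priority(s=0.7, v=0.2, r=0.6, g=0.9))    # -> 0.61
```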
On a synthetic benchmark with 50 items and ground-truth importance the PriorityMap reaches NDCG@10 = 0.997 (vs. 0.680 chronological, +46.6 %).
5. StabilityProtector — protection for mature memories
Reconsolidation is powerful — but it also opens an attack window. What if a single false statement could overwrite a memory established for years?
The StabilityProtector prevents this with a lock score L and a rigidity factor ρ:
L = 0.3 · log₂(1+a) / log₂(11) + 0.3 · c + 0.2 · min(d/365, 1) + 0.2 · is_core
ρ = 1 + 0.1 · log₂(1+d)
update ⇔ PE ≥ 0.5 + 0.3 · L · ρ
with access count a, confidence c, age in days d, and indicator is_core for core memories. Mature memories (large a, large c, large d, or core status) get high lock scores and rigidity factors — an update attempt with low PE is rejected. Genuinely new information with high PE still penetrates the protection.
This is analogous to NogoA receptor signaling and HDAC3 epigenetic regulation: the biological brain actively protects mature circuits against casual rewriting.
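Translated into a sketch (function names and the two example memories are invented for illustration):

```python
import math

def lock_score(a: int, c: float, d: float, is_core: bool) -> float:
    """Lock score L: access count a, confidence c, age in days d, core flag."""
    return (0.3 * math.log2(1 + a) / math.log2(11)
            + 0.3 * c
            + 0.2 * min(d / 365.0, 1.0)
            + 0.2 * float(is_core))

def update_allowed(pe: float, a: int, c: float, d: float, is_core: bool) -> bool:
    """PE must clear a threshold raised by lock score L and rigidity factor rho."""
    rho = 1 + 0.1 * math.log2(1 + d)
    return pe >= 0.5 + 0.3 * lock_score(a, c, d, is_core) * rho

# a mature core memory rejects a moderate prediction error ...
print(update_allowed(pe=0.6, a=40, c=0.9, d=700, is_core=True))    # -> False
# ... while a fresh, rarely accessed memory accepts the same PE
print(update_allowed(pe=0.6, a=1, c=0.3, d=2, is_core=False))      # -> True
```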
6. MetacognitiveMonitor — when the system notices it's biased
The final component watches the system itself. It tracks three bias types:
- Confirmation bias — asymmetric acceptance of positive vs. negative evidence
- Recency bias — overweighting recent memories
- Retrieval asymmetry — certain domains systematically under-retrieved
It opens novelty windows (10 minutes) after high-PE events (> 0.7) in which encoding is temporarily boosted — and generates calibration alerts when a systematic bias exceeds its threshold over a 30-day window. Efficiency tracking over the same rolling 30-day window produces badges that surface in the UI — closing the feedback loop.
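A toy sketch of two of those behaviors, the novelty window and a running confirmation-bias ratio. The class shape is illustrative, not the paper's interface:

```python
import time

class MetacognitiveMonitor:
    """Toy sketch: a 10-minute novelty window after high-PE events,
    plus a running confirmation-bias ratio over accepted evidence."""

    NOVELTY_WINDOW_S = 10 * 60
    HIGH_PE = 0.7

    def __init__(self):
        self.window_open_until = 0.0
        self.accepted = {"confirming": 0, "disconfirming": 0}

    def observe_event(self, pe: float, now=None) -> None:
        now = time.time() if now is None else now
        if pe > self.HIGH_PE:                       # surprising event: open the window
            self.window_open_until = now + self.NOVELTY_WINDOW_S

    def encoding_boost_active(self, now=None) -> bool:
        now = time.time() if now is None else now
        return now < self.window_open_until

    def record_evidence(self, supports_existing_belief: bool) -> None:
        key = "confirming" if supports_existing_belief else "disconfirming"
        self.accepted[key] += 1

    def confirmation_bias(self) -> float:
        """Fraction of accepted evidence that confirmed existing beliefs (~0.5 is balanced)."""
        total = sum(self.accepted.values())
        return self.accepted["confirming"] / total if total else 0.5

mon = MetacognitiveMonitor()
mon.observe_event(pe=0.85, now=0.0)
print(mon.encoding_boost_active(now=120.0))   # -> True, still inside the window
```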
In the paper experiments confirmation-bias detection reaches precision = 0.832 and recall = 0.975 across 50 synthetic scenarios per seed. Urgency keyword detection produces 0 false negatives on German and English test phrases.
How the six work together
PMA is not a stack of six independent modules. It is a coupled system:
- The NeuromodulatorEngine delivers the global tonic values all others read.
- The ReconsolidationEngine relies on NE / 5-HT values to modulate PE_eff.
- The PriorityMap reads the same tonic values to dynamically adjust weights.
- The StabilityProtector becomes stricter when 5-HT is high (more patience) and more permissive when NE is high (more learning readiness).
- The MetacognitiveMonitor observes all four and triggers novelty windows that briefly boost learning rate and encoding strength.
- TripleCopyMemory is the only component that runs independently — it is the substrate layer the other five operate on.
The coupling shows up empirically in the paper's ablation: under moderate conditions (decay = 0.15/day, 45 days) all 6 PMA components are cooperatively redundant (ΔQ ≤ 0.1 % when removed individually). But removing all six at once — testing "NeurIPS-only" against "Full" — collapses the system by −67.5 %. PMA is a resilience backbone, not an optional add-on.
Under stress (60 days, decay = 0.25/day) the coupling becomes visible: NeuromodulatorEngine and TripleCopyMemory turn individually critical (−83.0 % and −93.7 % when removed), while iMAD, MetacogMonitor, and PriorityMap deliver their contribution in ranking precision rather than retention rate.
What this means in practice
The 9 foundational algorithms were architecture: what to store, where, how to structure. The 6 PMA components are dynamics: when to become labile, which update mode, when to protect, when to boost.
You need both. Either one without the other collapses.
Read more
- Predecessor post: How we built memory into an AI OS — the 9 neuroscience algorithms
- Pareto position: 91 % of the accuracy at 1 % of the tokens
- Paper: ZenBrain v6 on Zenodo
- Code: github.com/zensation-ai/zenbrain