Memory that
thinks like a brain.
Persistent memory for Claude Code. 23 neuroscience-inspired mechanisms, WRRF retrieval fusion on PostgreSQL + pgvector, 11 specialized agents with scoped memory. 98% recall. No GPU.
Requires Python 3.10+ and PostgreSQL 15+ with pgvector + pg_trgm.
State-of-the-art recall. Proven.
Tested against published academic benchmarks with retrieval-only metrics — no LLM reader in the evaluation loop.
| Benchmark | Score | Notes |
|---|---|---|
| LongMemEval (ICLR 2025) | 98.0% R@10, 0.880 MRR | 500 questions, 115K tokens — +19.6pp vs paper best |
| LoCoMo (ACL 2024) | 88.9% R@10, 0.774 MRR | 1,986 questions, 10 conversations |
| BEAM (ICLR 2026) | 0.515 MRR | Multi-session, 355 questions — +57% vs LIGHT baseline |
All scores are retrieval-only. High retrieval recall and MRR set the ceiling for downstream QA accuracy: if the right memory is in the returned list, any capable reader model can use it.
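For reference, the two retrieval metrics reported above are cheap to compute from a ranked candidate list. A minimal sketch (toy IDs, not Cortex's actual evaluation harness):

```python
def recall_at_k(ranked_ids, relevant_ids, k=10):
    """1.0 if any relevant item appears in the top-k results, else 0.0.
    Averaged over queries, this is R@k."""
    return float(any(i in relevant_ids for i in ranked_ids[:k]))

def mrr(ranked_ids, relevant_ids):
    """Reciprocal rank of the first relevant item (0.0 if none retrieved).
    Averaged over queries, this is MRR."""
    for rank, i in enumerate(ranked_ids, start=1):
        if i in relevant_ids:
            return 1.0 / rank
    return 0.0

# Toy query: the relevant memory "m7" is retrieved at rank 2.
ranked = ["m3", "m7", "m1", "m9"]
print(recall_at_k(ranked, {"m7"}, k=10))  # 1.0
print(mrr(ranked, {"m7"}))                # 0.5
```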
Neuroscience meets information retrieval.
Every memory operation mirrors how biological neurons encode, consolidate, and recall information.
Write Gate
Hierarchical predictive coding with neuromodulation and emotional tagging. Rejects duplicates and noise.
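Cortex's actual gate is a 3-level hierarchical free-energy gate; as a rough single-level sketch of the idea (all names and thresholds hypothetical), a candidate memory is admitted only when its prediction error against what is already stored is high enough:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def should_write(candidate, stored, novelty_threshold=0.15):
    """Admit a memory only if its prediction error (1 - max similarity
    to stored embeddings) exceeds the threshold, i.e. it is neither a
    duplicate nor trivially predictable from existing memories."""
    if not stored:
        return True
    surprise = 1.0 - max(cosine(candidate, s) for s in stored)
    return surprise > novelty_threshold

stored = [[1.0, 0.0], [0.7, 0.7]]
print(should_write([0.99, 0.05], stored))  # False: near-duplicate
print(should_write([0.0, -1.0], stored))   # True: novel direction
```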
Encoding
pgvector embeddings + FTS + entity extraction + emotional tagging + synaptic tagging.
Consolidation
LABILE → EARLY_LTP → LATE_LTP → CONSOLIDATED. LTP/LTD, STDP, CLS, sleep compute.
Retrieval
WRRF fusion in PL/pgSQL + FlashRank cross-encoder reranking + surprise momentum.
11 agents. One shared brain.
Each agent recalls context before working and remembers insights after. Soft-scoped memory preserves cross-topic retrieval.
Every agent has domain-specific memory with soft topic filtering, validated at a −0.001 MRR delta: scoping adds focus without sacrificing cross-topic recall. The Orchestrator spawns specialized agents, each pre-loaded with relevant memories and cognitive context from JARVIS.
23 plasticity mechanisms. Zero GPU.
Every mechanism from computational neuroscience, implemented as pure server-side inference on PostgreSQL.
WRRF Retrieval Fusion
Vector similarity, full-text search, trigram matching, heat decay, temporal proximity, entity density, emotional resonance, access frequency, and consolidation state — fused server-side in PL/pgSQL.
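The production fusion runs in PL/pgSQL over all nine signals; the core weighted reciprocal rank fusion formula can be sketched in a few lines of Python (three hypothetical signals, illustrative weights):

```python
def wrrf(ranklists, weights, k=60):
    """Weighted Reciprocal Rank Fusion: each signal contributes
    w / (k + rank) for every document it ranks; documents are
    returned in descending fused-score order."""
    scores = {}
    for signal, ranked in ranklists.items():
        w = weights[signal]
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

ranklists = {
    "vector":  ["m2", "m1", "m3"],
    "fts":     ["m1", "m3", "m2"],
    "trigram": ["m1", "m2"],
}
weights = {"vector": 1.0, "fts": 0.8, "trigram": 0.5}
print(wrrf(ranklists, weights))  # ['m1', 'm2', 'm3']
```

Because only ranks matter, signals with incomparable raw scores (cosine distance, ts_rank, trigram similarity) fuse cleanly without per-signal normalization.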
Surprise Momentum
Test-time learning from Titans (NeurIPS 2025). Retrieval surprise modulates memory heat via EMA, boosting LongMemEval R@10 from 90.4% to 98.0% (+7.6pp).
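The precise coupling is internal to Cortex; the mechanism can be sketched as an exponential moving average of per-retrieval surprise that feeds back into memory heat (all constants hypothetical):

```python
def surprise_momentum(prev_momentum, surprise, beta=0.9):
    """EMA of retrieval surprise: routine recalls decay the signal,
    surprising ones build momentum (Titans-style test-time signal)."""
    return beta * prev_momentum + (1.0 - beta) * surprise

def update_heat(heat, momentum, gain=0.5):
    """Momentum modulates memory heat, which feeds back into the
    fusion score on the next query."""
    return heat + gain * momentum

m, heat = 0.0, 1.0
for s in [0.8, 0.6, 0.1]:  # surprise observed at three retrieval events
    m = surprise_momentum(m, s)
    heat = update_heat(heat, m)
print(round(m, 3), round(heat, 3))  # 0.129 1.17
```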
LTP / LTD / STDP
Long-term potentiation strengthens accessed memories. Long-term depression weakens neglected ones. Spike-timing-dependent plasticity adjusts Hebbian connection weights.
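A minimal sketch of the LTP/LTD half of this (rates and bounds hypothetical, STDP omitted): access potentiates a memory's strength toward a ceiling, neglect depresses it toward a floor, so weights stay bounded without manual tuning.

```python
def ltp_ltd(strength, accessed, ltp_rate=0.1, ltd_rate=0.02,
            ceiling=1.0, floor=0.0):
    """Potentiate on access, depress on neglect, with saturating
    bounds so weights remain homeostatic."""
    if accessed:
        # LTP: diminishing returns as strength nears the ceiling
        strength += ltp_rate * (ceiling - strength)
    else:
        # LTD: slow multiplicative decay toward the floor
        strength -= ltd_rate * (strength - floor)
    return strength

w = 0.5
w = ltp_ltd(w, accessed=True)    # 0.55
w = ltp_ltd(w, accessed=False)   # 0.539
print(round(w, 3))
```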
Coupled Neuromodulation
Dopamine, norepinephrine, acetylcholine, serotonin — with cross-channel coupling (Doya 2002, Schultz 1997). Modulates encoding strength and retrieval priority.
Microglial Pruning
Stale memories pruned during consolidation. Homeostatic plasticity and adaptive decay preserve important facts while cleaning noise.
Knowledge Graph
Causal discovery builds a directed graph with Hebbian weights, facilitation/depression, and release probability. Navigate via Successor Representation BFS.
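The traversal idea can be sketched as a BFS that only follows sufficiently strong Hebbian edges (graph shape and thresholds hypothetical; the real system navigates via Successor Representation):

```python
from collections import deque

def navigate(graph, start, max_hops=3, min_weight=0.2):
    """BFS over a weighted directed memory graph, pruning edges whose
    Hebbian weight is below min_weight; returns node -> hop distance."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] >= max_hops:
            continue
        for neighbor, weight in graph.get(node, []):
            if weight >= min_weight and neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return seen

graph = {
    "bug#42": [("fix#42", 0.9), ("flaky-test", 0.1)],  # weak edge pruned
    "fix#42": [("release-1.2", 0.6)],
}
print(navigate(graph, "bug#42"))  # {'bug#42': 0, 'fix#42': 1, 'release-1.2': 2}
```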
Cognitive Profiling
JARVIS extracts your 12D reasoning signature — thinking style, entry patterns, blind spots, cross-domain bridges — and pre-loads it every session via EMA updates.
Sleep Compute
Dream replay, interference resolution, CLS (episodic-to-semantic transfer), and engram competition. Background consolidation runs at session end.
Neural Visualization
Interactive force-directed graph with 6-level hierarchy. Node size encodes importance, glow encodes heat, quality arcs show reliability. Real-time memory heatmap dashboard.
34 MCP tools. Organized by function.
From simple remember/recall to autonomous pipeline execution. All via natural language in Claude Code.
Store & Retrieve
- remember
- recall
- recall_hierarchical
- consolidate
- checkpoint
- forget
- anchor
- rate_memory
- validate_memory
Graph & Profiling
- navigate_memory
- get_causal_chain
- detect_gaps
- drill_down
- detect_domain
- explore_features
- query_methodology
- memory_stats
Pipeline & Automation
- run_pipeline
- narrative
- get_project_story
- assess_coverage
- create_trigger
- add_rule
- sync_instructions
- seed_project
- open_visualization
103 core modules. Zero I/O in business logic.
Clean Architecture with strict layer separation. All retrieval runs server-side in PL/pgSQL stored procedures.
| Layer | Responsibility | Key Detail |
|---|---|---|
| Transport | MCP Server | 34 tool endpoints via FastMCP + Pydantic |
| Handlers | Composition roots | 60 handlers wire infrastructure to core |
| Core | Pure business logic | 103 modules, zero I/O, fully testable |
| Infrastructure | PostgreSQL + pgvector | PL/pgSQL stored procedures, server-side WRRF |
| Write Gate | Predictive coding | 3-level hierarchical free energy gate |
| Retrieval | WRRF fusion | Vector + FTS + trigram + heat + recency + more |
| Reranking | Cross-encoder | FlashRank (no GPU) |
| Consolidation | Lifecycle | Decay, LTP/LTD, STDP, pruning, CLS, sleep compute |
| Knowledge Graph | Causal discovery | Hebbian weights, SR traversal, fractal clustering |
| Methodology | Cognitive profiling | 12D persona vector, EMA updates per session |
| Shared | Utilities | 11 pure-function modules, Pydantic types, linear algebra |
Install in one command.
Plugin install gives you everything — MCP server, 4 lifecycle hooks, and 10 skills. Or go minimal with CLI.
Plugin (recommended)
$ claude plugin install cortex
Registers MCP server, installs SessionStart/SessionEnd/PostToolUse/Compaction hooks, and activates 10 workflow skills. Restart Claude Code and you're running.
CLI (MCP server only)
Installs just the MCP server, with no hooks, no skills, and no auto-capture.
Database setup
$ brew install postgresql@17 pgvector
$ brew services start postgresql@17
$ createdb cortex
$ psql -d cortex -c "CREATE EXTENSION IF NOT EXISTS vector; CREATE EXTENSION IF NOT EXISTS pg_trgm;"
$ export DATABASE_URL=postgresql://localhost:5432/cortex
What gets installed
| Component | What it does |
|---|---|
| MCP Server | 34 tools for memory, retrieval, profiling, navigation |
| SessionStart hook | Injects hot memories + cognitive profile at session start |
| SessionEnd hook | Updates your cognitive profile after each session |
| PostToolUse hook | Auto-captures important tool outputs as memories |
| Compaction hook | Saves checkpoint before context window compaction |
| 10 Skills | Workflow guides for every tool (invoke via /cortex-*) |
Free & Open Source
MIT licensed. 1,906 tests. 103 modules. 11 agents. Give your agent a brain.