[P] Applying the Ebbinghaus forgetting curve to AI agent retrieval — a biologically-inspired memory system
Most retrieval systems for AI agents treat all indexed content as equally available regardless of age, access frequency, or contextual importance. This doesn’t reflect how effective memory systems actually work.
I built claude-memory, an open-source Python package that layers a biological memory model on top of hybrid retrieval (vector similarity via ChromaDB + BM25 keyword scoring). Five mechanisms from cognitive science re-rank retrieval results:
- Temporal decay modeled on the Ebbinghaus forgetting curve — relevance scores decay as a function of time since last access
- Evergreen exemptions — designated critical documents excluded from decay (analogous to highly-consolidated long-term memories)
- Salience weighting — metadata-driven importance signals modulate decay rate
- Retrieval strengthening — each access event boosts a document’s score, modeling the testing effect
- Consolidation bonus — documents referenced in periodic summary notes receive reinforcement, analogous to memory consolidation during review
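To make the interplay concrete, here is a minimal sketch of how these five signals might combine into one re-ranking function. This is my illustration, not claude-memory's actual API — the function name, parameter names, and constants (`stability`, the 0.1 strengthening weight, the 1.2 consolidation bonus) are all hypothetical:

```python
import math
import time

def rerank_score(base_score, last_access_ts, *, now=None,
                 stability=7 * 86400, salience=1.0, evergreen=False,
                 access_count=0, consolidated=False):
    """Re-rank a hybrid-retrieval score with forgetting-curve dynamics.

    Illustrative sketch only, not claude-memory's implementation.
    """
    now = time.time() if now is None else now

    # Evergreen exemption: designated critical docs never decay.
    if evergreen:
        return base_score

    # Ebbinghaus forgetting curve: retention R = exp(-t / S).
    # Salience stretches the stability S, so important docs decay slower.
    t = max(0.0, now - last_access_ts)
    retention = math.exp(-t / (stability * salience))

    # Retrieval strengthening: each past access adds a diminishing
    # boost, modeling the testing effect.
    strengthening = 1.0 + 0.1 * math.log1p(access_count)

    # Consolidation bonus for docs referenced in periodic summary notes.
    consolidation = 1.2 if consolidated else 1.0

    return base_score * retention * strengthening * consolidation
```

With `stability` at one week, a document untouched for a month keeps only `exp(-30/7) ≈ 1.4%` of its base score unless salience, access history, or consolidation pulls it back up — which is roughly the behavior I was after.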
The system also includes a delta-sync indexer (SHA-256 content hashes detect changed files, so only deltas get re-embedded) and a periodic notes generator that feeds back into the consolidation mechanism.
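The delta-sync idea is simple enough to sketch: keep a manifest of content hashes, and on each pass re-index only files whose hash changed. Again a hypothetical illustration (function names and the `*.md` glob are my assumptions), not the package's code:

```python
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, streamed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def delta_sync(root: Path, manifest_path: Path):
    """Return (changed, deleted) paths vs. the stored manifest.

    Hypothetical sketch of a delta-sync pass, not claude-memory's code.
    """
    old = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    new = {str(p): file_digest(p) for p in root.rglob("*.md")}
    changed = [p for p, d in new.items() if old.get(p) != d]
    deleted = [p for p in old if p not in new]
    manifest_path.write_text(json.dumps(new))
    return changed, deleted
```

Only the `changed` list needs to hit the embedding model, which keeps incremental runs cheap.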
125 tests passing, MIT license. Interested in feedback on the decay model parameterization and whether the Ebbinghaus curve is the right choice versus alternative forgetting functions.
submitted by /u/haustorium12