#17129: feat(memory): compaction-aware conversation memory with smart-trim
size: XL
## Summary
Adds a **memory-context** plugin that gives agents long-term conversation memory across compaction cycles. When the context window compresses old messages, memory-context archives them into a warm store and recalls relevant context on-demand via hybrid search (BM25 + vector).
> **This plugin is disabled by default** and has zero impact on existing functionality. To enable it, add the following to `~/.openclaw/openclaw.json`:
>
> ```json
> {
>   "plugins": {
>     "entries": { "memory-context": { "config": {} } },
>     "slots": { "memory": "memory-context" }
>   }
> }
> ```
>
> Both `entries` and `slots` must be present — if either is missing the plugin stays inactive.
## Why This Feature
Without long-term memory, agents lose all conversational context once the context window compresses old messages. Users have to repeatedly re-explain background, preferences, and prior decisions. This plugin solves that by:
- **Preserving context across compaction** — archived segments survive context-window truncation and are searchable indefinitely
- **Automatic recall** — relevant history is injected before each response, so agents "remember" prior conversations without user effort
- **Noise-free storage** — heartbeat pings, system prefixes, and junk messages are filtered out, keeping recall clean and relevant
- **Zero-config local mode** — works out of the box with a local embedding model; optionally upgrades to Gemini for higher quality semantic search
- **Non-blocking** — store loads asynchronously and never delays agent startup
## Architecture
```
User message → recall (hybrid BM25+vector search) → inject recalled-context block
Compaction event → archive messages → extract knowledge facts
```
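The recall arrow above blends lexical (BM25) and semantic (vector) relevance. A minimal sketch of that blending, assuming per-query min-max-normalized BM25 scores; the names `hybridScore` and `alpha` are illustrative, not the plugin's actual API:

```typescript
// Illustrative sketch only — not the plugin's actual code.

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1);
}

// Blend a normalized BM25 score with cosine similarity.
// alpha = 1 → pure lexical match, alpha = 0 → pure semantic match.
function hybridScore(bm25Norm: number, cosineSim: number, alpha = 0.5): number {
  return alpha * bm25Norm + (1 - alpha) * cosineSim;
}
```

With `alpha = 0.5` both signals weigh equally; raising `alpha` favors exact keyword hits over semantic neighbors.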
### Core Components
- **WarmStore** — segment storage with BM25 + vector hybrid search, time-decay, MMR re-ranking
- **ColdStore** — JSONL streaming persistence
- **KnowledgeStore** — IDF-weighted fact search
- **Smart-Trim** — context-event handler: trims old messages, archives, injects recall
- **BM25** — full text search with CJK bigram tokenization
- **Vector Index** — in-memory HNSW-like vector search with pluggable embeddings (Gemini / hash fallback)
- **Compaction Bridge** — archives messages + async knowledge extraction
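The BM25 component's CJK handling can be illustrated with a small bigram tokenizer. This is a hedged sketch under stated assumptions, not the plugin's actual tokenizer: Latin/numeric runs are lowercased whole, while consecutive CJK characters are emitted as overlapping bigrams so unsegmented Chinese/Japanese/Korean text stays searchable:

```typescript
// Illustrative sketch only — not the plugin's actual tokenizer.
// Covers Hiragana/Katakana, common Han, and Hangul BMP ranges.
const CJK = /[\u3040-\u30ff\u3400-\u9fff\uac00-\ud7af]/;

function tokenize(text: string): string[] {
  const tokens: string[] = [];
  let latin = "";
  let cjk = "";
  const flushLatin = () => {
    if (latin) { tokens.push(latin.toLowerCase()); latin = ""; }
  };
  const flushCjk = () => {
    if (cjk.length === 1) tokens.push(cjk); // lone CJK char: keep unigram
    for (let i = 0; i + 1 < cjk.length; i++) tokens.push(cjk.slice(i, i + 2));
    cjk = "";
  };
  for (const ch of text) {
    if (CJK.test(ch)) { flushLatin(); cjk += ch; }          // extend CJK run
    else if (/[\p{L}\p{N}]/u.test(ch)) { flushCjk(); latin += ch; } // extend Latin run
    else { flushLatin(); flushCjk(); }                      // separator
  }
  flushLatin();
  flushCjk();
  return tokens;
}
```

Overlapping bigrams ("你好世界" → `你好`, `好世`, `世界`) let BM25 match substrings without a word segmenter.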
### Key Features
- **Hybrid search** — BM25 + vector cosine with configurable alpha blending
- **CJK bigram tokenization** — proper Chinese/Japanese/Korean text handling
- **Window recall** — ±2 timeline neighbor expansion for context continuity
- **Noise segment filter** — automatically skips heartbeat, no-reply, raw audio, queued messages, and ultra-short content
- **Channel prefix stripping** — removes Feishu/WeChat system prefixes before archiving
- **Auto embedding model** — defaults to `auto` (Gemini → hash fallback); handles dimension mismatch with background re-embedding
- **Non-blocking init** — store loads asynchronously, never blocks agent startup
- **Eviction & deduplication** — configurable max-age eviction, cross-compaction dedup
- **Redaction** — optional PII stripping
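The window-recall feature (±2 timeline neighbors) can be sketched as an index-expansion step over search hits; `expandWindow` is a hypothetical name, not the plugin's API:

```typescript
// Illustrative sketch only — not the plugin's actual code.
// Expand each hit index to include its ±radius timeline neighbors,
// clamped to [0, total), deduplicated, and returned in timeline order.
function expandWindow(hitIndices: number[], total: number, radius = 2): number[] {
  const keep = new Set<number>();
  for (const i of hitIndices) {
    for (let j = Math.max(0, i - radius); j <= Math.min(total - 1, i + radius); j++) {
      keep.add(j);
    }
  }
  return [...keep].sort((a, b) => a - b);
}
```

Pulling in the surrounding messages keeps a recalled snippet intelligible: a lone "yes, do that" only makes sense next to the question it answered.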
## Tests
11 test files, 90 tests — unit, integration, and E2E (compaction → archive → recall lifecycle, persistence across restart).
## Production Validation
Deployed on a live Feishu channel. The noise filter drops ~12% of incoming segments as junk. Gemini embeddings (dim=3072) run with automatic background re-embedding on model switch. Zero data loss observed across multiple gateway restarts.
## Breaking Changes
None. New plugin — disabled by default, no existing code modified.
## Most Similar PRs
- #19818: docs: add IC Sovereign Persistent Memory to community plugins (by TheAmSpeed · 2026-02-18 · 71.8%)
- #22518: feat(extensions): add chromadb-memory plugin with multi-agent colle... (by ta3pks · 2026-02-21 · 70.8%)
- #8795: feat(memory): add Redis-backed long-term memory plugin (by tf-gmail · 2026-02-04 · 70.2%)
- #7480: feat: Add CoreMemories hierarchical memory system (by Itslouisbaby · 2026-02-02 · 70.2%)
- #20184: feat: memory plugin compaction control (by solstead · 2026-02-18 · 69.7%)
- #6060: feat(onboarding): add Memory Optimization step to onboarding wizard (by GodsBoy · 2026-02-01 · 69.5%)
- #20791: Feature/aeon memory plugin (by mustafarslan · 2026-02-19 · 68.3%)
- #22201: feature(context): extend plugin system to support custom context ma... (by jalehman · 2026-02-20 · 68.3%)
- #21855: feat: add memory-bank skill — persistent file-based context (by winstonkoh87 · 2026-02-20 · 67.3%)
- #9816: core-memories: ingest chat into Flash + capture assistant replies (by Itslouisbaby · 2026-02-05 · 67.1%)