
#20713: fix(compaction): trigger memory flush after missed compaction cycles

by zerone0x · open · 2026-02-19 07:38
size: S experienced-contributor
## Summary

- **Problem:** `compaction.memoryFlush.enabled` never triggers before compaction because compaction resets `totalTokens` to a post-compaction value before the next turn boundary, and the 4 000-token soft-threshold window is too narrow to catch at turn start
- **Why it matters:** All users with memory flush enabled silently lose conversation context on every compaction — the feature effectively does nothing
- **What changed:** Reordered the check in `shouldRunMemoryFlush()` to detect un-flushed compaction cycles (via `compactionCount > memoryFlushCompactionCount`) and trigger the flush unconditionally — the post-compaction summary still holds key context worth persisting
- **What did NOT change:** Proactive threshold-based flush for pre-first-compaction sessions still works identically; no changes to the compaction safeguard, session accounting, or memory file format

## Change Type (select all)

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Docs
- [ ] Security hardening
- [ ] Chore/infra

## Scope (select all touched areas)

- [ ] Gateway / orchestration
- [ ] Skills / tool execution
- [ ] Auth / tokens
- [x] Memory / storage
- [ ] Integrations
- [ ] API / contracts
- [ ] UI / DX
- [ ] CI/CD / infra

## Linked Issue/PR

- Closes #19488

## User-visible / Behavior Changes

Memory flush now reliably fires after compaction for sessions with `compaction.memoryFlush.enabled: true`. Previously the flush never triggered; now it runs once per compaction cycle and persists conversation context (from the compaction summary) to `memory/YYYY-MM-DD.md`.

## Security Impact (required)

- New permissions/capabilities? `No`
- Secrets/tokens handling changed? `No`
- New/changed network calls? `No`
- Command/tool execution surface changed? `No`
- Data access scope changed? `No`

## Repro + Verification

### Environment

- OS: Ubuntu 24.04 / macOS
- Runtime: Node.js
- Model/provider: Any Anthropic model
- Relevant config: `agents.defaults.compaction.memoryFlush.enabled: true`

### Steps

1. Enable memory flush in config
2. Have a long conversation that triggers context compaction
3. After compaction, check `memory/YYYY-MM-DD.md`

### Expected

Memory file contains flushed conversation context after compaction.

### Actual (before fix)

Memory file has no content from the conversation before compaction.

## Evidence

- [x] Failing test/log before + passing after

All 481 tests in `src/auto-reply/reply/` pass, including 4 new test cases:

- `triggers after missed compaction even when tokens are low`
- `triggers after missed compaction with stale totals`
- `triggers when compactionCount exceeds memoryFlushCompactionCount`
- `skips for fresh session without any compaction`

## Human Verification (required)

- Verified: All existing `shouldRunMemoryFlush` tests pass with updated logic
- Verified: Dedup check (`lastFlushAt >= compactionCount`) still prevents duplicate flushes
- Verified: Fresh sessions without compaction (`compactionCount: 0`) fall through to proactive threshold check
- Edge cases: `memoryFlushCompactionCount: undefined` vs `0`, `totalTokensFresh: false` with compaction
- Not verified: Full e2e with running compaction (requires live model call)

## Compatibility / Migration

- Backward compatible? `Yes`
- Config/env changes? `No`
- Migration needed? `No`

## Failure Recovery (if this breaks)

- Set `compaction.memoryFlush.enabled: false` to disable the feature entirely
- The change is purely additive — it triggers flushes that previously never happened
- Known bad symptoms: unexpected memory file writes after compaction (intended behavior)

## Risks and Mitigations

- Risk: Memory flush adds one extra embedded LLM call per compaction cycle
- Mitigation: This is the documented and expected behavior — the feature was broken, not absent. The flush is a lightweight embedded call that writes to a local file.

---

🤖 Generated with Claude Code (issue-hunter-pro)

### Greptile Summary

Reordered `shouldRunMemoryFlush()` logic to check compaction status before token thresholds. Memory flush now triggers reliably after compaction instead of being silently skipped.

**Key changes:**

- Moved compaction-based flush check before threshold-based check in `shouldRunMemoryFlush()` (memory-flush.ts:122-137)
- Prevents duplicate flushes via early return when `lastFlushAt >= compactionCount` (line 126)
- Falls through to proactive threshold check only for fresh sessions (`compactionCount === 0`)
- Added 4 new test cases covering post-compaction flush scenarios

**How this fixes the bug:** The original code checked token thresholds first, then dedupe. Since compaction resets `totalTokens` to a lower value before the next turn, sessions after compaction would fail the threshold check and never reach the dedupe logic. By checking `compactionCount > 0` first (regardless of token count), the flush now triggers reliably after each compaction cycle.

### Confidence Score: 5/5

- This PR is safe to merge with minimal risk
- The change is a focused logic reordering that fixes a well-documented bug. The fix is covered by comprehensive tests (4 new test cases plus existing tests), preserves backward compatibility, and has clear reasoning. The deduplication guard prevents any double-flush issues, and the fallback to proactive flush for fresh sessions maintains existing behavior.
- No files require special attention

Last reviewed commit: dca1af9
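The reordering described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the `SessionState` shape and the `CONTEXT_LIMIT` constant are assumptions, and the dedup counter is modeled here as `memoryFlushCompactionCount` (the PR text refers to it both by that name and as `lastFlushAt`).

```typescript
// Hypothetical sketch of the reordered check in shouldRunMemoryFlush().
// Field and constant names are assumptions based on the PR description.
interface SessionState {
  compactionCount: number;            // compaction cycles completed so far
  memoryFlushCompactionCount: number; // last compaction cycle already flushed
  totalTokens: number;                // token total (reset low after compaction)
}

const SOFT_THRESHOLD = 4_000;   // soft-threshold window mentioned in the PR
const CONTEXT_LIMIT = 200_000;  // illustrative context window size (assumption)

function shouldRunMemoryFlush(s: SessionState): boolean {
  // 1. Post-compaction check runs FIRST, regardless of token totals.
  if (s.compactionCount > 0) {
    // Dedup guard: this compaction cycle was already flushed.
    if (s.memoryFlushCompactionCount >= s.compactionCount) return false;
    // Un-flushed compaction cycle: flush unconditionally.
    return true;
  }
  // 2. Fresh sessions (no compaction yet) fall through to the
  //    proactive threshold-based check, unchanged by this PR.
  return s.totalTokens >= CONTEXT_LIMIT - SOFT_THRESHOLD;
}
```

The key point is ordering: in the buggy version the threshold comparison ran first, and because compaction had just lowered `totalTokens`, post-compaction sessions never reached the compaction-count logic at all.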
