#19593: feat(compaction): proactive handover before context overflow
Labels: agents · size: S · Cluster: Memory Compaction Improvements
## Summary
Proactive handover monitors token usage after each agent turn and triggers compaction **before** context overflow occurs, rather than waiting for the API to reject a prompt that's too large.
## Problem
Currently, compaction is only triggered reactively when the API returns a context overflow error. By that point the conversation may already be degraded — the agent loses continuity and the user experiences an unexpected interruption.
## Solution
After each successful agent turn, estimate current token usage and compare against a configurable threshold. If remaining tokens fall below the threshold, proactively trigger compaction to preserve conversation quality.
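The core check is simple arithmetic: remaining headroom is the context window minus the estimated current usage, and handover fires when that headroom drops to or below the threshold. A minimal sketch (the function itself is illustrative, not the actual implementation; the variable names mirror the description above):

```typescript
// Hypothetical predicate for the proactive-handover decision.
// remaining = contextWindow - currentTokens; trigger when remaining <= threshold.
function shouldTriggerProactiveHandover(
  contextWindow: number,
  currentTokens: number,
  proactiveThresholdTokens: number,
): boolean {
  const remaining = contextWindow - currentTokens;
  return remaining <= proactiveThresholdTokens;
}

// Example: a 200k window with 185k tokens used leaves 15k remaining,
// which is below a 20k threshold, so handover would trigger.
```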
### How it works
1. After each turn completes (no error, not aborted), estimate total message tokens via `estimateMessagesTokens()`
2. Compare remaining tokens (`contextWindow - currentTokens`) against `compaction.proactiveThresholdTokens`
3. If below threshold **and** at least 5 user messages have occurred (minimum guard), trigger `compactEmbeddedPiSessionDirect()`
4. On success, inject a system note into the session so the agent is aware of the handover on the next turn
5. Set a cooldown flag to prevent re-triggering within the same run
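The five steps above can be sketched end to end. `estimateMessagesTokens()` and `compactEmbeddedPiSessionDirect()` are named in this PR; the `RunState` shape and the dependency-injection style are assumptions made for illustration, not the actual code:

```typescript
// Hypothetical run state; field names follow the PR's description.
interface RunState {
  contextWindow: number;
  proactiveThresholdTokens?: number; // opt-in: undefined means disabled
  userMessageCount: number;
  proactiveHandoverTriggered: boolean; // cooldown flag
}

// Collaborators are injected so the flow is testable; their names mirror
// the PR, but the signatures here are illustrative.
interface HandoverDeps {
  estimateMessagesTokens: (msgs: unknown[]) => number;
  compactEmbeddedPiSessionDirect: () => Promise<void>;
  injectSystemNote: (note: string) => void;
}

async function maybeProactiveHandover(
  state: RunState,
  messages: unknown[],
  deps: HandoverDeps,
): Promise<void> {
  const threshold = state.proactiveThresholdTokens;
  if (threshold === undefined) return;          // feature is opt-in
  if (state.proactiveHandoverTriggered) return; // cooldown: once per run
  if (state.userMessageCount < 5) return;       // minimum turn guard

  const currentTokens = deps.estimateMessagesTokens(messages); // step 1
  const remaining = state.contextWindow - currentTokens;       // step 2
  if (remaining > threshold) return;

  try {
    await deps.compactEmbeddedPiSessionDirect();               // step 3
    deps.injectSystemNote(
      "Context was proactively compacted before overflow.",    // step 4
    );
    state.proactiveHandoverTriggered = true;                   // step 5
  } catch (err) {
    // Graceful failure: warn and continue, no user-facing error.
    console.warn("proactive handover failed, continuing", err);
  }
}
```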
### Safety guards
- **Minimum turn guard**: Requires at least 5 user messages before triggering (avoids compacting near-empty sessions)
- **Cooldown flag**: `proactiveHandoverTriggered` prevents multiple compactions in the same run
- **Graceful failure**: If compaction fails, logs a warning and continues normally — no user-facing error
- **Opt-in**: Disabled by default; requires explicit configuration
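One way to view the guards is as an ordered skip check, which also yields a reason string suitable for the kind of observability metadata this PR records. This helper and its names are hypothetical, offered only to make the guard order concrete:

```typescript
// Hypothetical: evaluate the guards in order; return why the handover was
// skipped, or null if it may proceed.
type SkipReason = "disabled" | "cooldown" | "too-few-messages" | null;

function handoverSkipReason(opts: {
  thresholdConfigured: boolean;
  alreadyTriggered: boolean;
  userMessageCount: number;
}): SkipReason {
  if (!opts.thresholdConfigured) return "disabled";         // opt-in guard
  if (opts.alreadyTriggered) return "cooldown";             // once per run
  if (opts.userMessageCount < 5) return "too-few-messages"; // minimum guard
  return null;
}
```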
## Configuration
```yaml
agents:
  defaults:
    compaction:
      proactiveThresholdTokens: 20000  # trigger when <= 20k tokens remain
```
Set to \
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
This PR implements proactive context compaction that triggers before overflow occurs. After each successful agent turn, it estimates token usage and triggers compaction when remaining tokens fall below a configurable threshold.
**Key changes:**
- Added `proactiveThresholdTokens` config option to control when proactive handover triggers
- Implemented token estimation and threshold check after successful turns
- Added safety guards: minimum 5 user messages, cooldown flag, graceful failure handling
- Injects notification message into session after successful compaction
- Added `proactiveHandover` field to run metadata for observability
The implementation is opt-in (disabled by default) and follows existing patterns for reactive compaction. Guards prevent premature triggering on short sessions and multiple compactions within the same run.
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge with minimal risk
- The implementation is well-designed with multiple safety guards, follows existing code patterns, handles errors gracefully, and is opt-in by default. The logic is straightforward and the changes are isolated to the agent runner.
- No files require special attention
<sub>Last reviewed commit: efdf074</sub>
<!-- /greptile_comment -->
## Most Similar PRs

- #4042: agents: add proactive compaction before request · by freedomzt · 2026-01-29 · 86.7%
- #9620: fix: increase auto-compaction reserve buffer to 40k tokens · by Arlo83963 · 2026-02-05 · 80.3%
- #10505: feat(compaction): add timeout, model override, and diagnostic logging · by thebtf · 2026-02-06 · 80.1%
- #5360: fix(compaction): add emergency pruning for context overflow · by sgwannabe · 2026-01-31 · 80.0%
- #15322: feat: post-compaction target token trimming + fallback strategy · by echoVic · 2026-02-13 · 80.0%
- #14887: feat(compaction): configurable auto-compaction notifications with o... · by seilk · 2026-02-12 · 79.6%
- #20038: (fix): Compaction: preserve recent context and sync session memory ... · by rodrigouroz · 2026-02-18 · 79.5%
- #19878: fix: Handle compaction when fallback model has smaller context window · by gaurav10gg · 2026-02-18 · 79.3%
- #18663: feat: progressive compaction escalation and mechanical flush fallback · by Adamya05 · 2026-02-16 · 79.0%
- #14021: feat(compaction): optional memory flush before manual /compact · by phenomenoner · 2026-02-11 · 78.9%