#13512: feat(context-pruning): add stripThinking option to strip thinking blocks from older messages
agents
stale
Cluster: Model Reasoning Fixes
Problem: reasoning models with extended thinking keep large thinking blocks in history, and cache-ttl pruning never removes them, so old thinking consumes context.
This PR adds contextPruning.stripThinking, which removes thinking blocks from assistant messages outside the keepLastAssistants window during pruning. If stripping would empty an assistant message's content, a blank text block is left in its place, and the recent tail is always preserved. The option defaults to false; tests cover stripThinking both on and off, plus tail preservation.
Testing: existing + new tests pass; verified with Claude Opus 4.6 and thinkingDefault: high.
References #13519
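The stripping pass described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the message/block shapes and the `stripThinking` function name are assumptions modeled on the description (strip thinking blocks outside the last `keepLastAssistants` assistant turns, leave a blank text block if a message would end up empty).

```typescript
// Simplified content-block and message shapes (assumed for illustration).
type Block =
  | { type: "thinking"; thinking: string }
  | { type: "text"; text: string };

interface Message {
  role: "user" | "assistant";
  content: Block[];
}

// Remove thinking blocks from assistant messages that fall outside the
// last `keepLastAssistants` assistant turns. If stripping empties a
// message's content, substitute a blank text block so the message stays
// structurally valid.
function stripThinking(
  messages: Message[],
  keepLastAssistants: number
): Message[] {
  // Indices of all assistant messages, oldest to newest.
  const assistantIdxs = messages
    .map((m, i) => (m.role === "assistant" ? i : -1))
    .filter((i) => i >= 0);
  // The recent tail to preserve untouched.
  const keep = new Set(assistantIdxs.slice(-keepLastAssistants));

  return messages.map((m, i) => {
    if (m.role !== "assistant" || keep.has(i)) return m;
    const content = m.content.filter((b) => b.type !== "thinking");
    return {
      ...m,
      content: content.length > 0 ? content : [{ type: "text", text: "" }],
    };
  });
}
```

With `keepLastAssistants: 1`, only the newest assistant turn retains its thinking block; older turns lose theirs, and a turn that contained nothing but thinking is reduced to a single blank text block.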
Most Similar PRs
#23462: fix: extract thinking blocks as fallback in extractTextFromChatContent
by nszhsl · 2026-02-22
63.5%
#10465: Context pruning: strip image blocks instead of skipping
by quentintou · 2026-02-06
63.4%
#19407: fix(agents): strip thinking blocks on cross-provider model switch (...
by lailoo · 2026-02-17
61.8%
#6685: fix: suppress thinking leak for Synthetic reasoning models
by AkiLetschne · 2026-02-01
61.0%
#17304: feat(gemini): robust handling for non-XML reasoning headers (`Think...
by YoshiaKefasu · 2026-02-15
60.3%
#10097: fix: add empty thinking blocks to tool call messages when thinking is…
by cyxer000 · 2026-02-06
60.1%
#10217: docs: add Anthropic adaptive thinking documentation for Opus 4.6
by GodsBoy · 2026-02-06
60.0%
#11010: fix(control-ui): hide tool call cards when thinking toggle is off
by Annaxiebot · 2026-02-07
59.5%
#20620: feat: add anthropic/claude-opus-4-6 to XHIGH_MODEL_REFS
by chungjchris · 2026-02-19
59.4%
#13388: feat(session): Auto-prune consecutive NO_REPLY messages from context
by Masha-L · 2026-02-10
59.2%