#6053: fix: use 400K context window instead of 200K if the model allows (gpt-5.2)
Labels: agents
Cluster: Context Window and Model Updates
## Summary
- OpenAI Codex models (`openai-codex/gpt-5.2`) have 400k context windows
- Sessions were auto-clearing at ~200k because `lookupContextTokens()` returned `undefined` for prefixed model IDs
- Added static fallback lookup using `MODEL_CONTEXT_WINDOWS` from `opencode-zen-models.ts`
## Changes
- Export `MODEL_CONTEXT_WINDOWS` from `opencode-zen-models.ts`
- Add static fallback in `lookupContextTokens()` for cache misses
- Handle prefixed model IDs by extracting bare model ID (e.g., `openai-codex/gpt-5.2` → `gpt-5.2`)
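The fallback described above might look roughly like the following sketch. The names `MODEL_CONTEXT_WINDOWS` and `lookupContextTokens()` and the 400k `gpt-5.2` entry come from the PR description; the 200k default, the exact lookup order, and the single-`/` prefix handling are assumptions, not the actual implementation:

```typescript
// Minimal sketch of the static-fallback lookup (names mirror the PR;
// the real catalog lives in opencode-zen-models.ts).
const MODEL_CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-5.2": 400_000, // assumed catalog entry
};

const DEFAULT_CONTEXT_TOKENS = 200_000; // assumed default window

function lookupContextTokens(modelId: string): number {
  // 1. Exact match against the static catalog (the dynamic cache
  //    would normally be consulted first; omitted here).
  let tokens = MODEL_CONTEXT_WINDOWS[modelId];
  if (tokens !== undefined) return tokens;

  // 2. Strip a provider prefix: "openai-codex/gpt-5.2" -> "gpt-5.2"
  const bareId = modelId.includes("/") ? modelId.split("/").pop()! : modelId;
  tokens = MODEL_CONTEXT_WINDOWS[bareId];
  if (tokens !== undefined) return tokens;

  // 3. Fall back to the default, which is what prefixed IDs hit before this fix.
  return DEFAULT_CONTEXT_TOKENS;
}

console.log(lookupContextTokens("openai-codex/gpt-5.2")); // 400000
```

As the review notes below, this only handles a single `/` segment; a model ref with additional prefix segments would still fall through to the default.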
## Test plan
- [x] `pnpm build` passes
- [x] `pnpm test` passes (4924 tests)
- [x] `pnpm lint` passes for changed files
- [ ] Manual: verify `openai-codex/gpt-5.2` sessions don't clear at 200k
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR fixes incorrect context window inference for OpenAI Codex model refs like `openai-codex/gpt-5.2` by adding a static fallback lookup in `lookupContextTokens()` when the dynamic models cache misses. It also exports `MODEL_CONTEXT_WINDOWS` from the OpenCode Zen model catalog so that context window defaults can be reused, and updates the xhigh-thinking allowlist to include the new `openai-codex/gpt-5.2` ref.
Overall, the change is consistent with the existing “best-effort cache, then fallback” approach used in model discovery, and should prevent sessions from auto-clearing early when the configured model ID includes a provider prefix.
<h3>Confidence Score: 4/5</h3>
- This PR is low-risk and likely safe to merge, with a minor normalization edge case to consider.
- Changes are small, covered by existing test suite per the PR description, and primarily add a fallback read path. The main residual risk is that model IDs may include more complex prefixes/suffixes than a single `/` segment, in which case the fallback may still miss.
- src/agents/context.ts
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
## Most Similar PRs

- #11882: fix: accept openai-codex/gpt-5.3-codex model refs (jackberger03, 2026-02-08, 83.1%)
- #12220: fix: forward-compat models now respect user-configured contextWindow (Batuhan4, 2026-02-09, 82.4%)
- #15632: fix: use provider-qualified key in MODEL_CACHE for context window l... (linwebs, 2026-02-13, 81.6%)
- #13229: Fix openai-codex/gpt-5.3-codex API format resolution (trevorgordon981, 2026-02-10, 81.1%)
- #14744: fix(context): key MODEL_CACHE by provider/modelId to prevent collis... (lailoo, 2026-02-12, 80.4%)
- #17604: fix(context): use getAvailable() to prevent cross-provider model ID... (aldoeliacim, 2026-02-16, 80.2%)
- #15726: fix(sessions): use model contextWindow instead of agent contextToke... (lailoo, 2026-02-13, 80.0%)
- #23136: fix: lookupContextTokens should handle provider/model refs (patchguardio, 2026-02-22, 79.0%)
- #17414: fix(sessions): refresh contextTokens when model override changes (michaelbship, 2026-02-15, 78.8%)
- #12195: fix(agents): sync config fallback for lookupContextTokens cold-star... (mcaxtr, 2026-02-09, 78.6%)