#7137: fix: add openai-codex/gpt-5.2 to XHIGH_MODEL_REFS
size: XS
## Summary
The model ref `openai-codex/gpt-5.2` was missing from the xhigh allowlist, causing sessions with `thinking='xhigh'` to hang with 0 tokens when using this model configuration.
## Problem
The OpenAI `gpt-5.2` model supports `reasoning.effort: xhigh`, but OpenClaw only had `openai-codex/gpt-5.2-codex` in `XHIGH_MODEL_REFS`. Users with custom provider configs using `gpt-5.2` (without `-codex` suffix) were unable to use xhigh thinking.
## Fix
Add `openai-codex/gpt-5.2` to the `XHIGH_MODEL_REFS` array in `src/auto-reply/thinking.ts`.
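For reviewers unfamiliar with the allowlist, here is a minimal sketch of the pattern involved. The real `XHIGH_MODEL_REFS`, `supportsXHighThinking()`, and `listThinkingLevels()` live in `src/auto-reply/thinking.ts`; the normalization step and the exact level names below are illustrative assumptions, not the actual implementation.

```typescript
// Sketch only: mirrors the Set-based allowlist described in this PR.
// The lowercase/trim normalization is an assumption for illustration.
const XHIGH_MODEL_REFS = new Set<string>([
  "openai/gpt-5.2",
  "openai-codex/gpt-5.2-codex",
  "openai-codex/gpt-5.2", // <-- the entry this PR adds
]);

function supportsXHighThinking(modelRef: string): boolean {
  return XHIGH_MODEL_REFS.has(modelRef.trim().toLowerCase());
}

// Hypothetical shape of the level listing: xhigh is only exposed
// for allowlisted model refs, otherwise it is silently omitted.
function listThinkingLevels(modelRef: string): string[] {
  const base = ["off", "low", "medium", "high"];
  return supportsXHighThinking(modelRef) ? [...base, "xhigh"] : base;
}
```

Before this change, `supportsXHighThinking("openai-codex/gpt-5.2")` returned `false`, so `xhigh` never appeared in the level list for that ref.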
## Testing
- [x] Tested locally: xhigh spawn now works (6972 tokens generated vs. 0 before)
- [x] Unit tests pass (`pnpm test -- src/auto-reply/thinking.test.ts`)
## Local Validation
- `pnpm build` ✅
- `pnpm check` ✅
- `pnpm test -- src/auto-reply/thinking.test.ts` ✅
## AI-Assisted
Yes - Claude helped identify and fix the bug.
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR updates the `src/auto-reply/thinking.ts` xhigh allowlist by adding the model ref `openai-codex/gpt-5.2` alongside the existing `openai/gpt-5.2` and `openai-codex/gpt-5.2-codex` entries. This ensures `supportsXHighThinking()` recognizes `gpt-5.2` under the `openai-codex` provider (without the `-codex` suffix), so `listThinkingLevels()` can expose the `xhigh` option for that configuration instead of silently omitting it (which reportedly led to 0-token/hanging sessions).
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge with minimal risk.
- The change is a single additive allowlist entry in `XHIGH_MODEL_REFS`, and the surrounding logic already normalizes/compares these refs via Sets; there are no behavioral changes beyond enabling xhigh for the intended provider/model combination.
- No files require special attention
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
## Most Similar PRs
- #19384: Auto-reply: allow xhigh for OpenAI-compatible provider aliases (by 0x4007 · 2026-02-17 · 90.1%)
- #20620: feat: add anthropic/claude-opus-4-6 to XHIGH_MODEL_REFS (by chungjchris · 2026-02-19 · 86.3%)
- #23532: feat(copilot): add gpt-5.3-codex to GitHub Copilot provider with xh... (by seans-openclawbot · 2026-02-22 · 84.0%)
- #19311: feat: add github-copilot gpt-5.3-codex with xhigh support (AI-assis... (by mrutunjay-kinagi · 2026-02-17 · 82.3%)
- #21614: fix: warn when thinking level xhigh falls back for unsupported models (by lbo728 · 2026-02-20 · 82.2%)
- #11882: fix: accept openai-codex/gpt-5.3-codex model refs (by jackberger03 · 2026-02-08 · 77.6%)
- #6053: fix: use 400K context window instead of 200K if the model allows (g... (by icedac · 2026-02-01 · 77.4%)
- #16298: feat(xai): switch grok-4-1-fast variants by thinking level (by avirweb · 2026-02-14 · 77.3%)
- #11561: fix: respect supportsReasoningEffort compat flag for xAI/Grok reaso... (by baxter-lindsaar · 2026-02-08 · 76.5%)
- #9822: fix: allow local/custom model providers for sub-agent inference (by stammtobias91 · 2026-02-05 · 76.3%)