#19384: Auto-reply: allow xhigh for OpenAI-compatible provider aliases
size: XS
## Summary
- fix `/thinking xhigh` behavior for xhigh-capable Codex models when routed through OpenAI-compatible provider aliases
- keep strict provider/model allowlist match when available
- add fallback to model-id allowlist so provider aliases/proxies do not get downgraded to `high`
- add e2e regression coverage for `openai-compatible/gpt-5.3-codex-spark`
## Problem
OpenClaw was responding with:
`Thinking level set to high (xhigh not supported for ... )`
for model IDs that do support `xhigh`, when the provider key was an alias and not the exact provider key in the allowlist.
## Root cause
`supportsXHighThinking()` checked only for an exact `${provider}/${model}` match whenever a provider key was present, skipping the model-id fallback that would have matched known xhigh-capable models behind aliases.
## Fix
In `supportsXHighThinking()`:
- return `true` on an exact provider/model allowlist hit
- otherwise fall back to the model-id allowlist for known xhigh-capable model IDs
This keeps the check generic for OpenAI-compatible providers while preserving the explicit allowlist semantics.
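The two-step logic can be sketched as follows. This is a minimal illustration, not the actual implementation: the real `XHIGH_MODEL_SET` and `XHIGH_MODEL_IDS` allowlists live in the source tree, and the set contents below are assumed for demonstration.

```typescript
// Illustrative allowlists; real contents live in the codebase.
const XHIGH_MODEL_SET = new Set(["openai/gpt-5.3-codex-spark"]);
const XHIGH_MODEL_IDS = new Set(["gpt-5.3-codex-spark"]);

function supportsXHighThinking(provider: string | undefined, model: string): boolean {
  // An exact provider/model allowlist hit still wins.
  if (provider && XHIGH_MODEL_SET.has(`${provider}/${model}`)) {
    return true;
  }
  // Fallback: provider aliases/proxies qualify by model id alone,
  // so "openai-compatible/gpt-5.3-codex-spark" is no longer downgraded.
  return XHIGH_MODEL_IDS.has(model);
}
```

With this shape, an unrecognized provider key never causes an early `false`; it simply shifts matching to the model-id set.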
## Validation
- added regression test: accepts `/thinking xhigh` for `openai-compatible/gpt-5.3-codex-spark`
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
This PR fixes a bug where `/thinking xhigh` was being downgraded to `high` for xhigh-capable model IDs (e.g., `gpt-5.3-codex-spark`) when the provider key was an unrecognized alias (e.g., `openai-compatible`) rather than an exact entry in `XHIGH_MODEL_SET`.
The fix in `supportsXHighThinking()` is minimal and correct: it short-circuits with `true` on an exact provider/model allowlist hit, then falls back to the model-ID-only set (`XHIGH_MODEL_IDS`) for all other provider keys. This properly handles OpenAI-compatible proxies and provider aliases that route to known xhigh-capable models.
Key observations:
- The `XHIGH_MODEL_IDS` set is derived from `XHIGH_MODEL_REFS` by stripping the provider prefix. It includes `gpt-5.2` (from both `openai/gpt-5.2` and `github-copilot/gpt-5.2`), so any provider key paired with `gpt-5.2` will now get xhigh — this is intentional and consistent with the PR's stated goal.
- The regression test follows established conventions (`withTempHome`, `runThinkingDirective`) and is correctly placed inside the `describe` block alongside similar tests.
- All other call sites of `supportsXHighThinking` (`directive-handling.impl.ts`, `get-reply-run.ts`, `commands/agent.ts`, `cron/isolated-agent/run.ts`, `gateway/sessions-patch.ts`) benefit from this fix automatically with no changes needed.
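The derivation noted above (model-id set obtained by stripping provider prefixes from the refs) can be sketched as follows. The `XHIGH_MODEL_REFS`/`XHIGH_MODEL_IDS` names come from the PR; the exact entries are assumptions for illustration.

```typescript
// Illustrative refs; the gpt-5.3-codex-spark entry is assumed.
const XHIGH_MODEL_REFS = [
  "openai/gpt-5.2",
  "github-copilot/gpt-5.2",
  "openai/gpt-5.3-codex-spark",
];

// Strip the provider prefix from each ref. The two gpt-5.2 refs
// collapse to a single "gpt-5.2" id, which is why any provider key
// paired with gpt-5.2 now qualifies for xhigh under the fallback.
const XHIGH_MODEL_IDS = new Set(
  XHIGH_MODEL_REFS.map((ref) => ref.slice(ref.indexOf("/") + 1)),
);
```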
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge — the change is minimal, well-reasoned, and covered by a new regression test.
- The fix is a two-line logic change with no edge-case regressions: exact allowlist matches are preserved, and the model-ID fallback only activates for model IDs already present in the xhigh allowlist. The new e2e test directly reproduces the reported bug. Behavior for all existing callers is unchanged for known provider/model pairs.
- No files require special attention.
<sub>Last reviewed commit: b095acf</sub>
<!-- /greptile_comment -->
## Most Similar PRs
- #7137: fix: add openai-codex/gpt-5.2 to XHIGH_MODEL_REFS · sauerdaniel · 2026-02-02 (90.1%)
- #20620: feat: add anthropic/claude-opus-4-6 to XHIGH_MODEL_REFS · chungjchris · 2026-02-19 (85.3%)
- #21614: fix: warn when thinking level xhigh falls back for unsupported models · lbo728 · 2026-02-20 (81.5%)
- #23532: feat(copilot): add gpt-5.3-codex to GitHub Copilot provider with xh... · seans-openclawbot · 2026-02-22 (81.5%)
- #19311: feat: add github-copilot gpt-5.3-codex with xhigh support (AI-assis... · mrutunjay-kinagi · 2026-02-17 (80.6%)
- #11561: fix: respect supportsReasoningEffort compat flag for xAI/Grok reaso... · baxter-lindsaar · 2026-02-08 (78.5%)
- #19407: fix(agents): strip thinking blocks on cross-provider model switch (... · lailoo · 2026-02-17 (76.9%)
- #6673: fix: preserve allowAny flag in createModelSelectionState for custom... · tenor0 · 2026-02-01 (76.7%)
- #9822: fix: allow local/custom model providers for sub-agent inference · stammtobias91 · 2026-02-05 (76.6%)
- #16298: feat(xai): switch grok-4-1-fast variants by thinking level · avirweb · 2026-02-14 (76.4%)