
#19384: Auto-reply: allow xhigh for OpenAI-compatible provider aliases

by 0x4007 · open · 2026-02-17 18:48
size: XS
## Summary

- fix `/thinking xhigh` behavior for xhigh-capable Codex models when routed through OpenAI-compatible provider aliases
- keep the strict provider/model allowlist match when available
- add a fallback to the model-id allowlist so provider aliases/proxies are not downgraded to `high`
- add e2e regression coverage for `openai-compatible/gpt-5.3-codex-spark`

## Problem

OpenClaw was responding with:

`Thinking level set to high (xhigh not supported for ... )`

for model IDs that do support `xhigh`, whenever the provider key was an alias rather than an exact provider key in the allowlist.

## Root cause

`supportsXHighThinking()` returned only exact `${provider}/${model}` matches whenever a provider key was present, which skipped the valid model-id fallback behavior.

## Fix

In `supportsXHighThinking()`:

- return `true` on an exact provider/model allowlist hit
- otherwise fall back to the model-id allowlist for known xhigh-capable model IDs

This keeps behavior generic for OpenAI-compatible providers while still preserving explicit allowlist behavior.

## Validation

- added regression test: accepts `/thinking xhigh` for `openai-compatible/gpt-5.3-codex-spark`

<h3>Greptile Summary</h3>

This PR fixes a bug where `/thinking xhigh` was downgraded to `high` for xhigh-capable model IDs (e.g., `gpt-5.3-codex-spark`) when the provider key was an unrecognized alias (e.g., `openai-compatible`) rather than an exact entry in `XHIGH_MODEL_SET`. The fix in `supportsXHighThinking()` is minimal and correct: it short-circuits with `true` on an exact provider/model allowlist hit, then falls back to the model-ID-only set (`XHIGH_MODEL_IDS`) for all other provider keys. This properly handles OpenAI-compatible proxies and provider aliases that route to known xhigh-capable models.

Key observations:

- The `XHIGH_MODEL_IDS` set is derived from `XHIGH_MODEL_REFS` by stripping the provider prefix. It includes `gpt-5.2` (from both `openai/gpt-5.2` and `github-copilot/gpt-5.2`), so any provider key paired with `gpt-5.2` will now get xhigh; this is intentional and consistent with the PR's stated goal.
- The regression test follows established conventions (`withTempHome`, `runThinkingDirective`) and is correctly placed inside the `describe` block alongside similar tests.
- All other call sites of `supportsXHighThinking` (`directive-handling.impl.ts`, `get-reply-run.ts`, `commands/agent.ts`, `cron/isolated-agent/run.ts`, `gateway/sessions-patch.ts`) benefit from this fix automatically, with no changes needed.

<h3>Confidence Score: 5/5</h3>

- This PR is safe to merge: the change is minimal, well-reasoned, and covered by a new regression test.
- The fix is a two-line logic change with no edge-case regressions: exact allowlist matches are preserved, and the model-ID fallback only activates for model IDs already present in the xhigh allowlist. The new e2e test directly reproduces the reported bug. Behavior for all existing callers is unchanged for known provider/model pairs.
- No files require special attention.

<sub>Last reviewed commit: b095acf</sub>
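The fixed check described above can be sketched in TypeScript. This is a hypothetical reconstruction, not the actual OpenClaw source: the names `XHIGH_MODEL_REFS`, `XHIGH_MODEL_IDS`, and `supportsXHighThinking` come from the PR discussion, but the signature, the exact allowlist entries, and the derivation code are assumptions for illustration.

```typescript
// Illustrative allowlist of exact provider/model refs known to support xhigh.
// The gpt-5.2 entries are mentioned in the review; the spark entry is assumed.
const XHIGH_MODEL_REFS = new Set<string>([
  "openai/gpt-5.2",
  "github-copilot/gpt-5.2",
  "openai/gpt-5.3-codex-spark",
]);

// Model-id-only set, derived by stripping the provider prefix from each ref.
const XHIGH_MODEL_IDS = new Set<string>(
  [...XHIGH_MODEL_REFS].map((ref) => ref.slice(ref.indexOf("/") + 1)),
);

function supportsXHighThinking(
  provider: string | undefined,
  model: string,
): boolean {
  // Exact provider/model allowlist hit short-circuits to true.
  if (provider && XHIGH_MODEL_REFS.has(`${provider}/${model}`)) {
    return true;
  }
  // Fallback: accept known xhigh-capable model ids regardless of provider,
  // so OpenAI-compatible aliases/proxies are not downgraded to "high".
  return XHIGH_MODEL_IDS.has(model);
}
```

Under this sketch, `supportsXHighThinking("openai-compatible", "gpt-5.3-codex-spark")` returns `true` via the model-id fallback, while unknown model ids still return `false` for any provider.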
