#13229: Fix openai-codex/gpt-5.3-codex API format resolution
## Description
Fixes #13189
This PR fixes an issue where `openai-codex/gpt-5.3-codex` and `gpt-5.2-codex` models were incorrectly resolved with `api: "openai-completions"` when they existed in the built-in model catalog, causing HTTP 500 errors.
## Problem
When these models exist in the registry with the wrong API format, the model resolver would return them immediately without applying the necessary Codex-specific overrides. This caused requests to fail because they were sent to the wrong endpoint.
## Solution
Added a check in `resolveModel()` that detects when an openai-codex model (gpt-5.3-codex or gpt-5.2-codex) is found in the registry and applies the correct overrides:
- `api: "openai-codex-responses"`
- `baseUrl: "https://chatgpt.com/backend-api"`
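A minimal sketch of the override logic, assuming the resolver works on a plain model-config object (the names `ModelConfig`, `CODEX_MODEL_IDS`, and `applyCodexOverrides` are illustrative, not the actual identifiers in `model.ts`):

```typescript
interface ModelConfig {
  id: string;
  api: string;
  baseUrl?: string;
}

// Exact IDs the fix targets (see the review note below about variant IDs).
const CODEX_MODEL_IDS = new Set(["gpt-5.3-codex", "gpt-5.2-codex"]);

// If a Codex model was found in the registry with the wrong API format,
// force the Codex-specific endpoint settings instead of returning it as-is.
function applyCodexOverrides(model: ModelConfig): ModelConfig {
  if (CODEX_MODEL_IDS.has(model.id)) {
    return {
      ...model,
      api: "openai-codex-responses",
      baseUrl: "https://chatgpt.com/backend-api",
    };
  }
  return model;
}
```

Non-Codex registry entries pass through unchanged, so the override stays narrowly scoped to the two affected model IDs.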
## Testing
- Added test cases covering the regression scenario: `gpt-5.3-codex` and `gpt-5.2-codex` present in the registry with an incorrect `api` and/or `baseUrl`, verifying the resolver returns the corrected configuration
- All existing tests continue to pass
## Files Changed
- `src/agents/pi-embedded-runner/model.ts` - Added override logic for Codex models
- `src/agents/pi-embedded-runner/model.test.ts` - Added test coverage for the fix
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This change updates `resolveModel()` to apply Codex-specific overrides (`api: "openai-codex-responses"`, `baseUrl: "https://chatgpt.com/backend-api"`) even when a matching `openai-codex` model is found in the discovered model registry, preventing Codex requests from being routed to the `openai-completions` endpoint.
Test coverage was extended in `src/agents/pi-embedded-runner/model.test.ts` to cover the regression scenario where `gpt-5.3-codex` and `gpt-5.2-codex` exist in the registry with an incorrect `api` and/or `baseUrl`, ensuring the resolver returns the corrected configuration.
<h3>Confidence Score: 4/5</h3>
- This PR is likely safe to merge after addressing the Codex ID matching gap noted below.
- The override logic is narrowly scoped and covered by new tests for the reported regression. The main remaining concern is that the Codex override currently only matches exact IDs, which can allow variant IDs (if used) to bypass the fix and continue resolving to the wrong API/baseUrl.
- src/agents/pi-embedded-runner/model.ts
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->