#2353: fix: ensure api field is set for inline provider models
## Summary
When a model is found in the inline provider config (`models.providers.*.models`), the `api` field was not being set, causing `"Unhandled API in mapOptionsForApi: undefined"` errors when using custom OpenAI-compatible providers like Ollama or LM Studio.
This fix ensures the `api` field is inherited, in order of precedence, from:
1. The model config itself (if specified)
2. The provider config's `api` field
3. The default `"openai-responses"` (consistent with the fallback-model behavior on line 73)
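The precedence chain above can be sketched as a single nullish-coalescing expression. This is an illustrative sketch, not the actual code from `src/agents/pi-embedded-runner/model.ts`; the type and function names here (`resolveApi`, `ModelConfig`, `ProviderConfig`) are hypothetical.

```typescript
// Hypothetical shapes for the relevant slices of config.
type ModelConfig = { id: string; api?: string };
type ProviderConfig = { api?: string; models?: ModelConfig[] };

// Resolve the api field for an inline model match:
// model-level api, then provider-level api, then the default.
function resolveApi(model: ModelConfig, provider?: ProviderConfig): string {
  return model.api ?? provider?.api ?? "openai-responses";
}
```

With this in place, an inline match like the Qwen model below resolves to the provider's `api` (or the default) instead of `undefined`.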
## Problem
When configuring a custom provider like this:
```json
{
  "models": {
    "providers": {
      "openai": {
        "baseUrl": "http://localhost:1234/v1",
        "apiKey": "lm-studio",
        "models": [
          {
            "id": "qwen/qwen3-vl-8b",
            "name": "Qwen3 VL 8B",
            "contextWindow": 16384
          }
        ]
      }
    }
  }
}
```
The model resolution would find `inlineMatch` but return it without an `api` field, causing the pi-ai library's `mapOptionsForApi` function to throw an error.
## Solution
Added logic to ensure the `api` field is set when returning an inline model match, mirroring the existing fallback behavior.
## Test plan
- [x] Tested with LM Studio as OpenAI-compatible backend
- [x] Verified Discord channel receives responses correctly after fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR fixes inline-provider model resolution in `src/agents/pi-embedded-runner/model.ts` by ensuring models discovered from `cfg.models.providers.*.models` inherit an `api` value (model-level `api` → provider-level `api` → `"openai-responses"`). This aligns inline matches with the existing fallback model behavior so downstream `pi-ai` API option mapping no longer sees `api: undefined` for OpenAI-compatible custom backends.
One edge case to revisit: inline model/provider matching is done using normalized provider ids, but the added lookup pulls provider config via the raw `inlineMatch.provider` string, which can cause the provider `api` to be missed when config keys differ only by case/whitespace.
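To make the edge case concrete, here is a minimal sketch of a case-and-whitespace-insensitive provider lookup that would avoid the mismatch. The helper names (`normalizeProviderId`, `findProvider`) are hypothetical, not functions from this codebase.

```typescript
// Normalize a provider id the same way on both sides of the lookup,
// so "OpenAI " in the config still matches a normalized "openai".
function normalizeProviderId(id: string): string {
  return id.trim().toLowerCase();
}

// Fetch a provider config by normalized key rather than the raw string.
function findProvider(
  providers: Record<string, { api?: string }>,
  rawId: string,
): { api?: string } | undefined {
  const want = normalizeProviderId(rawId);
  for (const [key, cfg] of Object.entries(providers)) {
    if (normalizeProviderId(key) === want) return cfg;
  }
  return undefined;
}
```

If the raw `inlineMatch.provider` string were used as a direct object key instead, a config entry like `"OpenAI "` would be missed and the model would silently fall back to `"openai-responses"`.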
<h3>Confidence Score: 4/5</h3>
- This PR is likely safe to merge and addresses a real runtime error, with a minor edge-case risk around provider key normalization.
- The change is small and localized and should prevent `api` from being undefined for inline provider models. The main remaining concern is that provider config is fetched via an unnormalized key, so some configs could still silently fall back to `openai-responses` instead of the intended provider API.
- src/agents/pi-embedded-runner/model.ts
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
## Most Similar PRs
- #21977: Preserve provider API for discovered Ollama models (graysurf, 2026-02-20, 85.1%)
- #9212: fix: ensure model.input is always an array for custom providers (sparck75, 2026-02-05, 84.0%)
- #9822: fix: allow local/custom model providers for sub-agent inference (stammtobias91, 2026-02-05, 83.2%)
- #3322: fix: merge provider config api into registry model (nulone, 2026-01-28, 82.1%)
- #16766: fix(model): apply provider baseUrl/headers override to registry-fou... (dzianisv, 2026-02-15, 80.8%)
- #7570: fix: allow models from providers with auth profiles configured (DonSqualo, 2026-02-03, 80.6%)
- #13626: fix(model): propagate provider model properties in fallback resolution (mcaxtr, 2026-02-10, 80.2%)
- #15632: fix: use provider-qualified key in MODEL_CACHE for context window l... (linwebs, 2026-02-13, 79.2%)
- #15205: fix(models): normalize google-antigravity api field from google-gem... (wboudy, 2026-02-13, 79.1%)
- #16290: fix: add field-level validation for custom LLM provider config (superlowburn, 2026-02-14, 79.1%)