# #21977: Preserve provider API for discovered Ollama models

Labels: agents · size: S · Cluster: Model Input and Streaming Fixes
Closes #20259
## Summary
This patch hardens model resolution: when a registry-discovered model is missing `api`, OpenClaw now hydrates the API from `models.providers.<provider>.api`. This prevents `mapOptionsForApi` from receiving `undefined` and crashing model calls for custom Ollama setups.
## Problem
- Expected: Resolved models should always carry a valid API transport, inheriting provider-level `api` when model-level `api` is absent.
- Actual: A discovered model can resolve with `api: undefined`, causing `Unhandled API in mapOptionsForApi: undefined`.
- Impact: Runtime failures and dropped agent turns for affected Ollama configurations.
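The failure mode can be illustrated with a minimal dispatcher sketch. This is not the actual OpenClaw implementation — the real `mapOptionsForApi` maps richer option objects and handles more transports — but the crash shape is the same:

```typescript
// Illustrative sketch only; the real mapOptionsForApi in OpenClaw
// handles full option objects, not just an endpoint path.
function mapOptionsForApi(api: string | undefined): string {
  switch (api) {
    case "openai-completions":
      return "/v1/completions";
    case "openai-chat":
      return "/v1/chat/completions";
    default:
      // This is the crash path hit when a discovered model
      // resolves with api: undefined.
      throw new Error(`Unhandled API in mapOptionsForApi: ${api}`);
  }
}
```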
## Reproduction
1. Configure `models.providers.ollama.api = "openai-completions"` with custom model entries.
2. Resolve/run with a discovered Ollama model whose registry entry lacks an explicit `model.api`.
- Expected result: Resolved model uses `openai-completions` API path.
- Actual result: Resolved model may carry `api: undefined` and crash downstream transport mapping.
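A reproduction config might look like the following. The field layout is assumed from the config keys referenced above, not taken from OpenClaw's actual schema:

```jsonc
{
  "models": {
    "providers": {
      "ollama": {
        // Provider-level transport that discovered models should inherit.
        "api": "openai-completions",
        // Assumed key for custom model entries; the discovered model
        // below has no model-level "api", which triggers the bug.
        "models": [{ "id": "my-local-llama" }]
      }
    }
  }
}
```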
## Issues Found
Overall: severity high · confidence high · status fixed
| ID | Severity | Confidence | Area | Summary | Evidence | Status |
| --- | --- | --- | --- | --- | --- | --- |
| PR-21977-BUG-01 | high | high | `src/agents/pi-embedded-runner/model.ts` | Discovered model path returned `api: undefined` without provider-level fallback | `src/agents/pi-embedded-runner/model.ts:111` | fixed |
## Fix Approach
- Resolve provider config once (including normalized provider key fallback).
- For discovered models, if `model.api` is missing and provider config has `api`, hydrate `model.api` from provider config before returning.
- Add unit coverage for discovered Ollama model with missing `api` to ensure provider fallback is applied.
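The approach above can be sketched as follows. Types and helper names are illustrative, not the actual code in `src/agents/pi-embedded-runner/model.ts`:

```typescript
// Illustrative shapes; real OpenClaw types differ.
interface ProviderConfig { api?: string }
type Providers = Record<string, ProviderConfig>;
interface ResolvedModel { provider: string; id: string; api?: string }

// Look up provider config by exact key first, then fall back to a
// normalized (lowercased) key, mirroring the normalized-ID lookup.
function findProviderConfig(providers: Providers, provider: string): ProviderConfig | undefined {
  return providers[provider] ?? providers[provider.toLowerCase()];
}

// Hydrate a discovered model's missing `api` from its provider config,
// leaving any explicit model-level `api` untouched.
function hydrateDiscoveredModel(model: ResolvedModel, providers: Providers): ResolvedModel {
  const providerConfig = findProviderConfig(providers, model.provider);
  if (model.api === undefined && providerConfig?.api !== undefined) {
    return { ...model, api: providerConfig.api };
  }
  return model;
}
```

The conditional hydration is deliberately one-way: a model that already carries an `api` is returned unchanged, so the fallback only fires in the exact case that previously produced `undefined`.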
## Testing
- `pnpm test src/agents/pi-embedded-runner/model.test.ts` (pass)
- `pnpm lint` (pass)
- `pnpm tsgo` (pass)
## Risk / Notes
- Low-risk change scoped to model resolution; the fallback applies only when a discovered model's `api` is missing.
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Correctly fixes discovered Ollama models missing `api` by hydrating from provider config. The implementation moves provider config resolution earlier and adds fallback logic to hydrate `api` from `models.providers.<provider>.api` when a discovered model's `api` is undefined. This prevents runtime crashes in downstream transport mapping.
Key changes:
- Early provider config resolution with normalized provider ID lookup (more robust than before)
- API hydration for discovered models when `api` is missing
- Test coverage for the Ollama use case with `api: undefined`
The fix is well-targeted and handles the edge case properly with optional chaining and conditional hydration.
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge with minimal risk
- The change is narrowly scoped to model resolution, adds defensive hydration logic with proper null checks, includes test coverage for the primary use case, and actually improves provider config resolution with normalized ID lookup. The logic correctly uses conditional checks to only hydrate when `api` is missing, and the double normalization is safe/idempotent.
- No files require special attention
<sub>Last reviewed commit: 7bae3d5</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
## Most Similar PRs
| PR | Author | Date | Similarity |
| --- | --- | --- | --- |
| #2353: fix: ensure api field is set for inline provider models | sbknana | 2026-01-26 | 85.1% |
| #18587: fix(ollama): improve timeout handling and cooldown logic for local ... | manthis | 2026-02-16 | 82.4% |
| #4782: fix: Auto-discover Ollama models without requiring explicit API key | spiceoogway | 2026-01-30 | 82.0% |
| #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... | alltomatos | 2026-02-02 | 81.1% |
| #5115: fix: guard against undefined model.name in Ollama discovery (#5062) | TheWildHustle | 2026-01-31 | 79.2% |
| #13626: fix(model): propagate provider model properties in fallback resolution | mcaxtr | 2026-02-10 | 79.1% |
| #9822: fix: allow local/custom model providers for sub-agent inference | stammtobias91 | 2026-02-05 | 78.9% |
| #23286: fix: use configured model in llm-slug-generator instead of hardcoded … | wsman | 2026-02-22 | 78.6% |
| #3322: fix: merge provider config api into registry model | nulone | 2026-01-28 | 78.5% |
| #9212: fix: ensure model.input is always an array for custom providers | sparck75 | 2026-02-05 | 78.1% |