#9212: fix: ensure model.input is always an array for custom providers
## Summary
Fixes #8372 - `Cannot read properties of undefined (reading 'includes')` error when using custom providers with custom models.
## Problem
When resolving models from custom providers via `modelRegistry.find()`, the returned model may be missing the `input` field. This causes pi-ai to crash when it tries to call `model.input.includes("image")` in `openai-completions.ts`.
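The failure mode can be reproduced in isolation. This is a simplified sketch, not the actual pi-ai types: the `Model` interface and `supportsImages` helper below are hypothetical stand-ins for the unchecked `model.input.includes("image")` access in `openai-completions.ts`.

```typescript
// Hypothetical, simplified stand-in for the pi-ai model shape.
interface Model {
  id: string;
  input?: string[]; // custom providers may omit this field entirely
}

// Mirrors the unchecked access pattern: model.input.includes("image").
function supportsImages(model: Model): boolean {
  return (model.input as string[]).includes("image");
}

// A registry-resolved model from a custom provider, with no `input` field.
const registryModel: Model = { id: "deepseek-local" };

try {
  supportsImages(registryModel);
} catch (err) {
  // TypeError: Cannot read properties of undefined (reading 'includes')
  console.log((err as Error).message);
}
```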
## Root Cause
In `src/agents/pi-embedded-runner/model.ts`, the `resolveModel()` function returns models from `modelRegistry.find()` without ensuring the `input` field exists. While the fallback paths (inline models and fallback models) correctly set `input: ["text"]`, models found in the registry may not have this field defined.
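For context, here is a simplified sketch of the three resolution paths described above. Everything except the `resolveModel`/`modelRegistry.find` naming is hypothetical; the real function also returns auth storage and the registry, which this sketch omits.

```typescript
// Hypothetical, simplified resolver illustrating why only the registry
// path could return a model without `input`. Not the actual implementation.
type Model = { id: string; input?: string[] };

function resolveModelSketch(
  ref: string,
  registry: { find(id: string): Model | undefined },
  inlineModels: Model[],
): Model {
  const fromRegistry = registry.find(ref);
  if (fromRegistry) {
    // Before the fix: returned as-is, so `input` could be undefined here.
    return fromRegistry;
  }
  const inline = inlineModels.find((m) => m.id === ref);
  if (inline) {
    // Inline-model path: `input` is always set.
    return { ...inline, input: inline.input ?? ["text"] };
  }
  // Fallback path: `input` is always set.
  return { id: ref, input: ["text"] };
}
```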
## Solution
Added a safety check after `modelRegistry.find()` to ensure `model.input` is always an array before returning:
```typescript
const normalized = normalizeModelCompat(model);
// Ensure input field is always present to prevent pi-ai crashes when
// model.input.includes() is called on models from custom providers.
// See: https://github.com/openclaw/openclaw/issues/8372
if (!normalized.input || !Array.isArray(normalized.input)) {
normalized.input = ["text"];
}
return { model: normalized, authStorage, modelRegistry };
```
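The guard's behavior can be exercised standalone. The helper below is a hypothetical extraction of the same check (the real code mutates the result of `normalizeModelCompat`, which is not reproduced here):

```typescript
// Hypothetical standalone version of the guard added in resolveModel().
type ModelLike = { id: string; input?: unknown };

function ensureInputArray<T extends ModelLike>(model: T): T & { input: string[] } {
  // Default to ["text"] when input is missing or not an array,
  // matching the inline-model and fallback-model paths.
  const input = Array.isArray(model.input) ? (model.input as string[]) : ["text"];
  return { ...model, input };
}

console.log(ensureInputArray({ id: "a" }).input);                           // ["text"]
console.log(ensureInputArray({ id: "b", input: "text" }).input);            // ["text"], non-array coerced
console.log(ensureInputArray({ id: "c", input: ["text", "image"] }).input); // preserved
```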
## Testing
- Tested with custom provider configuration connecting to a local Ollama instance
- Previously crashed immediately with `TypeError: Cannot read properties of undefined (reading 'includes')`
- After fix: Successfully processes messages without error
## Configuration Example (that was failing before)
```yaml
models:
  providers:
    local-llm:
      baseUrl: "http://127.0.0.1:4002/v1"
      apiKey: "local-llm-key"
      api: "openai-chat"
      models:
        - id: "deepseek-local"
          name: "DeepSeek R1 8B"
          contextWindow: 8192
          maxTokens: 4096
```
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This change updates `resolveModel()` (`src/agents/pi-embedded-runner/model.ts`) so models returned from `modelRegistry.find()` are normalized and always have a valid `input` array. If a custom provider’s model definition omits `input` (or provides a non-array), the resolver now defaults it to `["text"]`, preventing downstream crashes where code calls `model.input.includes(...)`.
This fits the existing resolver behavior: the inline-model and fallback-model paths already supply `input: ["text"]`; this patch brings registry-resolved models in line with those expectations.
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge with minimal risk.
- The change is narrowly scoped to normalizing a missing/invalid `model.input` field for registry-resolved models, aligning behavior with existing fallback paths and preventing a known runtime crash. No API surface changes and no broader control-flow impact were found in review.
- src/agents/pi-embedded-runner/model.ts
<!-- /greptile_comment -->
## Most Similar PRs

- #12191: fix: guard against undefined model.input in display and scan layers (by mcaxtr · 2026-02-09 · 84.4%)
- #2353: fix: ensure api field is set for inline provider models (by sbknana · 2026-01-26 · 84.0%)
- #9822: fix: allow local/custom model providers for sub-agent inference (by stammtobias91 · 2026-02-05 · 82.1%)
- #3322: fix: merge provider config api into registry model (by nulone · 2026-01-28 · 81.5%)
- #13626: fix(model): propagate provider model properties in fallback resolution (by mcaxtr · 2026-02-10 · 80.8%)
- #7570: fix: allow models from providers with auth profiles configured (by DonSqualo · 2026-02-03 · 80.4%)
- #6673: fix: preserve allowAny flag in createModelSelectionState for custom... (by tenor0 · 2026-02-01 · 79.6%)
- #16766: fix(model): apply provider baseUrl/headers override to registry-fou... (by dzianisv · 2026-02-15 · 79.4%)
- #11882: fix: accept openai-codex/gpt-5.3-codex model refs (by jackberger03 · 2026-02-08 · 78.8%)
- #21638: fix(models): surface models.json validation errors instead of faili... (by aldoeliacim · 2026-02-20 · 78.4%)