#8660: fix: respect agents.defaults.models.*.params.maxTokens in image tool
## Summary
- The image tool used a hardcoded `maxTokens: 512`, ignoring model-specific configuration
- Models with thinking/reasoning enabled could exhaust that token budget before producing any visible output
- Result: "Image model returned no text" errors
## Changes
- Added `resolveImageMaxTokens()` helper to read `agents.defaults.models[modelKey].params.maxTokens`
- Falls back to 512 for backwards compatibility when not configured
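
The helper can be sketched roughly as follows. This is a hypothetical reconstruction based on the PR description: the type names, `DEFAULT_IMAGE_MAX_TOKENS` constant, and exact config shape are assumptions, not the actual implementation in `src/agents/tools/image-tool.ts`.

```typescript
// Sketch only — names and config shape are assumed from the PR description.
type ModelParams = { maxTokens?: number };
type AgentsConfig = {
  agents?: {
    defaults?: {
      models?: Record<string, { params?: ModelParams }>;
    };
  };
};

// Previous hardcoded limit, kept as the fallback for backwards compatibility.
const DEFAULT_IMAGE_MAX_TOKENS = 512;

function resolveImageMaxTokens(config: AgentsConfig, modelKey: string): number {
  // Read agents.defaults.models[modelKey].params.maxTokens, if present.
  const configured =
    config.agents?.defaults?.models?.[modelKey]?.params?.maxTokens;
  return configured ?? DEFAULT_IMAGE_MAX_TOKENS;
}
```

Note the model key is expected in `<provider>/<modelId>` form (e.g. `openrouter/google/gemini-3-pro-preview`); a mismatched key silently falls back to 512.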
## Configuration Example
```json
{
  "agents": {
    "defaults": {
      "models": {
        "openrouter/google/gemini-3-pro-preview": {
          "params": {
            "maxTokens": 8192
          }
        }
      }
    }
  }
}
```
## Test plan
- [x] Run type checking (`npm run check`)
- [x] Run image tool tests (`vitest run src/agents/tools/image-tool`)
Fixes #8096
🤖 Generated with [Claude Code](https://claude.com/claude-code)
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR removes the hardcoded `maxTokens: 512` in the image tool and instead resolves `maxTokens` from `agents.defaults.models["<provider>/<modelId>"].params.maxTokens`, falling back to 512 when not configured. This aligns image tool completion limits with per-model configuration so reasoning/thinking-enabled models don’t exhaust the output budget before producing text.
The change is localized to `src/agents/tools/image-tool.ts` and only affects the `complete(...)` call for non-MiniMax providers; MiniMax continues using its dedicated `minimaxUnderstandImage` path.
<h3>Confidence Score: 4/5</h3>
- This PR is likely safe to merge and should fix the hardcoded token limit, with a small risk of config-key mismatches preventing the override from applying in some cases.
- The change is small and well-scoped (one helper and one call site). Main uncertainty is whether `${provider}/${modelId}` matches the canonical key format used in `agents.defaults.models` across all registries/providers, which could cause silent fallback to the default.
- src/agents/tools/image-tool.ts (verify model key normalization matches config expectations)
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->