#4782: fix: Auto-discover Ollama models without requiring explicit API key
Labels: agents
Cluster: Ollama Model Enhancements
## Summary
Fixes #4544
When Ollama is running locally with models installed, those models showed as 'missing' in `openclaw models list` because the provider was only registered when an explicit API key was configured.
## Root Cause
In `src/agents/models-config.providers.ts`, the `resolveImplicitProviders()` function only added the Ollama provider when an API key was explicitly configured via environment variables or auth profiles.
Since Ollama is a local service that doesn't require traditional API authentication (it runs on localhost:11434), this prevented auto-discovery of locally installed models.
## Changes
- Modified the Ollama provider registration logic to auto-discover models by attempting to query the local instance
- If models are found (`models.length > 0`), the provider is registered automatically
- Uses a placeholder API key `'ollama-local'` for local instances
- Still respects explicit configuration if provided (backward compatible)
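A minimal sketch of the registration decision described above. The function name, signature, and `ProviderConfig` shape are illustrative, not the PR's actual code (the real logic lives in `src/agents/models-config.providers.ts`):

```typescript
// Illustrative sketch of the new registration rule: register the Ollama
// provider only when discovery found at least one local model, and fall
// back to the 'ollama-local' placeholder key when no explicit key exists.
interface OllamaModel {
  name: string;
}

interface ProviderConfig {
  baseUrl: string;
  apiKey: string;
}

function resolveOllamaProvider(
  discoveredModels: OllamaModel[],
  explicitApiKey?: string,
): ProviderConfig | null {
  // No local models discovered: do not register the provider at all.
  if (discoveredModels.length === 0) return null;
  // Prefer an explicit env/auth-profile key (backward compatible);
  // otherwise use the placeholder for the local, unauthenticated instance.
  return {
    baseUrl: 'http://127.0.0.1:11434',
    apiKey: explicitApiKey ?? 'ollama-local',
  };
}
```

Keeping the decision a pure function of (discovered models, explicit key) makes the precedence rule easy to test in isolation.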
## Testing
- ✅ Linted the changed file: no warnings or errors
- ✅ Aligns with patterns used for other local/unauthenticated providers (e.g., qwen-portal with OAuth placeholder)
## Expected Behavior After Fix
1. User runs `ollama pull deepseek-r1:latest`
2. User runs `openclaw models list`
3. Ollama models are now listed as **available** (not missing)
4. User can set default model: `openclaw models set ollama/deepseek-r1:latest`
5. Chat can successfully use the Ollama model
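The `ollama/deepseek-r1:latest` reference in step 4 combines a provider name and a model tag. A hypothetical parser, for illustration only (`parseModelRef` is not the actual CLI helper), showing why splitting on the first slash matters:

```typescript
// Hypothetical helper illustrating the "provider/model" reference format
// used by `openclaw models set`; not the actual CLI implementation.
interface ModelRef {
  provider: string;
  model: string;
}

function parseModelRef(ref: string): ModelRef {
  // Split on the FIRST '/' only, so Ollama tags like "deepseek-r1:latest"
  // keep their ':' tag separator (and any further characters) intact.
  const slash = ref.indexOf('/');
  if (slash === -1) {
    throw new Error(`expected "provider/model", got "${ref}"`);
  }
  return { provider: ref.slice(0, slash), model: ref.slice(slash + 1) };
}
```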
## Impact
- **Low risk**: Only affects Ollama provider discovery
- **High value**: Enables out-of-the-box Ollama support without manual configuration
- **Backward compatible**: Existing configurations with explicit API keys still work
---
**Note**: This change makes OpenClaw's Ollama integration work the same way as other local model servers: auto-discover and register if running.
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR changes implicit provider resolution to auto-register Ollama when a local Ollama instance responds with at least one installed model. It does so by always building the Ollama provider (which triggers `/api/tags` discovery against `http://127.0.0.1:11434`) and, if any models are returned, registering the provider with either an explicit env/auth-profile key or a placeholder `ollama-local` key.
This fits into the existing `resolveImplicitProviders()` pattern, which opportunistically adds providers based on available credentials/profiles and discovery results. The main behavior change is that Ollama discovery is no longer strictly gated by explicit configuration; it can now be activated by local state alone.
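The probe described above has to fail quietly when Ollama is not running, which is also the source of the noise concern below. A sketch under those assumptions (`discoverOllamaModels` and `parseOllamaTags` are illustrative names, not the PR's code; the `/api/tags` endpoint and its `{ models: [{ name }] }` response shape follow Ollama's documented API):

```typescript
// Shape of Ollama's GET /api/tags response (only the fields used here).
interface OllamaTagsResponse {
  models?: Array<{ name?: string }>;
}

// Pure parsing step: tolerate missing fields so a malformed response
// degrades to an empty list instead of throwing.
function parseOllamaTags(body: OllamaTagsResponse): string[] {
  return (body.models ?? [])
    .map((m) => m.name)
    .filter((n): n is string => typeof n === 'string');
}

// Probe the local instance; a connection refusal, timeout, or non-2xx
// status all mean "no models" rather than an error worth surfacing.
async function discoverOllamaModels(
  baseUrl = 'http://127.0.0.1:11434',
): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(2000), // fail fast when nothing listens
    });
    if (!res.ok) return [];
    return parseOllamaTags((await res.json()) as OllamaTagsResponse);
  } catch {
    // Ollama not running: treat as "no models discovered".
    return [];
  }
}
```

Separating the pure parser from the network call keeps the "Ollama not running" path silent and the parsing logic unit-testable.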
<h3>Confidence Score: 4/5</h3>
- This PR is likely safe to merge, but it changes Ollama discovery semantics and may introduce noisy localhost probing/logging in common cases.
- The code change is small and localized, but it alters when discovery runs (potentially affecting users who rely on opt-in behavior or explicit config) and can emit warnings during normal “Ollama not running/no models” scenarios.
- src/agents/models-config.providers.ts
<!-- greptile_other_comments_section -->
**Context used:**
- Context from `dashboard` - CLAUDE.md ([source](https://app.greptile.com/review/custom-context?memory=fd949e91-5c3a-4ab5-90a1-cbe184fd6ce8))
- Context from `dashboard` - AGENTS.md ([source](https://app.greptile.com/review/custom-context?memory=0d0c8278-ef8e-4d6c-ab21-f5527e322f13))
<!-- /greptile_comment -->
## Most Similar PRs
- #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... (by alltomatos · 2026-02-02 · 88.6%)
- #18587: fix(ollama): improve timeout handling and cooldown logic for local ... (by manthis · 2026-02-16 · 85.8%)
- #11875: fix(ollama): accept /model directive for configured providers (by Nina-VanKhan · 2026-02-08 · 83.4%)
- #5115: fix: guard against undefined model.name in Ollama discovery (#5062) (by TheWildHustle · 2026-01-31 · 83.0%)
- #11877: feat(ollama): auto-detect vision capability via /api/show (by Nina-VanKhan · 2026-02-08 · 82.3%)
- #21977: Preserve provider API for discovered Ollama models (by graysurf · 2026-02-20 · 82.0%)
- #9822: fix: allow local/custom model providers for sub-agent inference (by stammtobias91 · 2026-02-05 · 81.3%)
- #19612: feat(onboarding): add Ollama to onboarding provider list (by ParthSareen · 2026-02-18 · 80.6%)
- #16098: fix: omit tools param for models without tool support, surface erro... (by claw-sylphx · 2026-02-14 · 80.4%)
- #9257: this is my first fork (by demonking369 · 2026-02-05 · 80.2%)