#10742: Feature/remote ollama - enable autodiscovery ollama models on hosts other than localhost
cli
commands
agents
stale
Cluster: Ollama Model Enhancements
It became apparent that use of Ollama model autodiscovery was being hindered by the limitation that autodiscovery only worked on localhost. I believe this to be a more robust and up-to-date implementation that should supersede @koushikkethamakka's excellent start on this from a few days ago in PR https://github.com/openclaw/openclaw/pull/8693
This implementation adds intelligent timeouts and retries, and introduces the `--discover` argument to the `openclaw models list` command, which increases timeouts and enables retries so we no longer hit the silent failures we were seeing previously. The behavior absent this argument remains unchanged.
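To sketch the idea, here is a minimal discovery-with-retries helper. This is illustrative only: the function names, retry/backoff policy, and default values are assumptions, not this PR's actual code; the only real detail is Ollama's `/api/tags` endpoint, which returns the installed model list.

```python
import json
import time
import urllib.error
import urllib.request


def with_retries(fn, retries=0, backoff=0.5):
    """Call fn(), retrying up to `retries` extra times with linear backoff.

    Illustrative helper: the PR's actual retry policy may differ.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries:
                raise  # surface the failure instead of silently returning nothing
            time.sleep(backoff * (attempt + 1))


def discover_models(base_url, timeout=2.0, retries=0):
    """Fetch model names from an Ollama host's /api/tags endpoint.

    With retries=0 this behaves like the default (fail-fast) path;
    a --discover-style flag would pass a longer timeout and retries > 0.
    """
    url = base_url.rstrip("/") + "/api/tags"

    def fetch():
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return [m["name"] for m in json.load(resp).get("models", [])]

    return with_retries(fetch, retries=retries)
```

The split between `with_retries` and `discover_models` is just to keep the backoff logic independent of the HTTP call; the point is that failures are raised rather than swallowed once retries are exhausted.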
More critically, this implementation cleanly handles a partial definition of an Ollama provider in `~/.openclaw/openclaw.json`: users can define the apiHost and still retain robust model autodiscovery. It **also** supports the `OLLAMA_API_BASE_URL` environment variable, which, in conjunction with the `OLLAMA_API_KEY` environment variable (which continues to operate as intended), makes the base URL configurable without any explicit Ollama provider configuration in the main config file. This directly addresses Issue https://github.com/openclaw/openclaw/issues/8663
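For illustration, a partial provider definition in `~/.openclaw/openclaw.json` might look like the following. Note this is a sketch: the `providers` wrapper key and the example host are assumptions; only the `apiHost` field, the config path, and the environment variable names come from this PR.

```json
{
  "providers": {
    "ollama": {
      "apiHost": "http://192.168.1.50:11434"
    }
  }
}
```

Equivalently, with no `ollama` entry in the config file at all, setting `OLLAMA_API_BASE_URL` (and, if needed, `OLLAMA_API_KEY`) in the environment should yield the same autodiscovery behavior against the remote host.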
Most Similar PRs
#4782: fix: Auto-discover Ollama models without requiring explicit API key
by spiceoogway · 2026-01-30
70.7%
#19612: feat(onboarding): add Ollama to onboarding provider list
by ParthSareen · 2026-02-18
65.3%
#9660: fix: auto-default baseUrl for Ollama provider (#9652)
by divol89 · 2026-02-05
64.1%
#18587: fix(ollama): improve timeout handling and cooldown logic for local ...
by manthis · 2026-02-16
64.0%
#11877: feat(ollama): auto-detect vision capability via /api/show
by Nina-VanKhan · 2026-02-08
63.0%
#7278: feat(ollama): optimize local LLM support with auto-discovery and ti...
by alltomatos · 2026-02-02
62.9%
#21199: Models: suppress repeated vLLM/Ollama discovery warnings (#21037)
by itsishant · 2026-02-19
62.2%
#11875: fix(ollama): accept /model directive for configured providers
by Nina-VanKhan · 2026-02-08
62.2%
#23776: feat(ollama): auto-register Ollama provider with placeholder key an...
by jayy-77 · 2026-02-22
61.3%
#21096: [codex] Add host-gateway + Ollama API env defaults to compose
by BoarderOnATrip · 2026-02-19
60.2%