#19612: feat(onboarding): add Ollama to onboarding provider list
Ollama has local and cloud model support but was missing from the onboarding wizard, forcing users to manually configure `OLLAMA_API_KEY` and `models.yaml`.
This PR adds Ollama as a selectable provider during onboarding with endpoint reachability checks and model auto-discovery to reduce friction for new users.
Closes #8239
## Summary
- **Problem:** Ollama is not listed in the onboarding provider picker, so users cannot select it during `openclaw configure`. They must manually set `OLLAMA_API_KEY` and edit `models.yaml`.
- **Why it matters:** Ollama supports both local and cloud models (Qwen-3.5, GLM-5, MiniMax-M2.5, Kimi-K2.5). First-time users reaching the onboarding wizard have no guided path to set it up and must edit config files by hand.
- **What changed:** Added Ollama to the `AuthChoice`/`AuthChoiceGroupId` types, the grouped provider list, the preferred-provider mapping, a credential helper (`setOllamaApiKey`), and a dedicated `applyAuthChoiceOllama` handler that prompts for endpoint, checks reachability via `/api/tags`, auto-discovers models, and stores a placeholder API key.
- **What did NOT change (scope boundary):** The existing Ollama runtime provider (`buildOllamaProvider`, `ollama-stream.ts`, model discovery) is untouched. No changes to how Ollama works after onboarding. Only the wizard entry path is new.
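The reachability-check and discovery step described above can be sketched roughly as follows. This is a minimal illustration, not the actual handler: the function and type names (`parseOllamaTags`, `discoverOllamaModels`, `OllamaTagsResponse`) are hypothetical, and the real code lives in `auth-choice.apply.ollama.ts`.

```typescript
// Hypothetical sketch of the endpoint check + model discovery; names
// are illustrative and may differ from the actual implementation.
type OllamaTagsResponse = { models?: { name: string }[] };

// Pure helper: extract model names from an /api/tags payload.
function parseOllamaTags(body: OllamaTagsResponse): string[] {
  return (body.models ?? []).map((m) => m.name);
}

// Reachability check and model discovery in one call: GET /api/tags
// against the (possibly custom) endpoint, aborted after 8 seconds.
async function discoverOllamaModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`, {
    signal: AbortSignal.timeout(8000),
  });
  if (!res.ok) throw new Error(`Ollama responded with HTTP ${res.status}`);
  return parseOllamaTags((await res.json()) as OllamaTagsResponse);
}
```

A single `/api/tags` call doubles as both the liveness probe and the model listing, which is why the handler needs no separate health-check endpoint.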
## Change Type (select all)
- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Docs
- [ ] Security hardening
- [ ] Chore/infra
## Scope (select all touched areas)
- [ ] Gateway / orchestration
- [ ] Skills / tool execution
- [x] Auth / tokens
- [ ] Memory / storage
- [ ] Integrations
- [ ] API / contracts
- [x] UI / DX
- [ ] CI/CD / infra
## Linked Issue/PR
- Closes #8239
- Related #9029, #17493
## User-visible / Behavior Changes
- `openclaw configure` and `openclaw onboard` now show "Ollama" in the provider picker.
- Selecting Ollama prompts for optional custom endpoint, checks reachability, lists discovered models, and stores credentials automatically.
- If Ollama is not running, the wizard displays install/start instructions and exits gracefully.
## Security Impact (required)
- New permissions/capabilities? `No`
- Secrets/tokens handling changed? `No` — uses the same `upsertAuthProfile` pattern as all other providers. Ollama's "API key" is a dummy placeholder value (`"ollama"`), not a real secret.
- New/changed network calls? `Yes` — the onboarding handler makes a single `GET /api/tags` call to the user-specified Ollama endpoint (default `127.0.0.1:11434`) with an 8-second timeout.
- Command/tool execution surface changed? `No`
- Data access scope changed? `No`
- If any `Yes`, explain risk + mitigation: The network call is localhost-only by default, initiated only when the user explicitly selects Ollama, and uses a short timeout. Custom endpoints are validated as HTTP/HTTPS URLs.
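The HTTP/HTTPS validation mentioned above could look something like this sketch; the function name `isValidOllamaEndpoint` is an assumption, not the actual code.

```typescript
// Hypothetical validator for user-supplied custom endpoints: accepts
// only http:// and https:// URLs, rejecting all other schemes.
function isValidOllamaEndpoint(input: string): boolean {
  try {
    const url = new URL(input);
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false; // not parseable as a URL at all
  }
}
```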
## Repro + Verification
### Environment
- OS: macOS (Darwin 25.2.0)
- Runtime/container: Node 22 + pnpm
- Model/provider: Ollama
- Integration/channel (if any): CLI configure flow
- Relevant config (redacted): default local setup
### Steps
1. Run `pnpm openclaw configure` or `pnpm openclaw onboard`
2. Reach provider selection step
3. Select "Ollama"
### Expected
- Ollama appears in the list with hint "Local + cloud models (GLM-5, MiniMax-M2.5, Kimi-K2.5, Qwen-3.5)"
- Prompts for custom endpoint, checks reachability, discovers models
### Actual
- Before this change, Ollama models appeared only in the fallback model picker; with this PR, the Expected behavior above is observed.
## Evidence
- [ ] Failing test/log before + passing after
- [ ] Trace/log snippets
- [x] Screenshot/recording
- [ ] Perf numbers (if relevant)
<img width="616" height="904" alt="CleanShot 2026-02-17 at 16 58 03@2x" src="https://github.com/user-attachments/assets/3a3d0335-bed3-4d53-be68-f19cfa9a48d5" />
`pnpm build && pnpm check && pnpm test` — 659 suites, 5405 tests all passing. Lint and format clean on all changed files.
## Human Verification (required)
- Verified scenarios:
- TypeScript compiles with no new errors (`pnpm tsgo`)
- Lint passes on all changed files (`pnpm lint`)
- Format passes on all changed files (`pnpm oxfmt --check`)
- Full test suite passes (5405 tests)
- Edge cases checked:
- Ollama not running: wizard shows install instructions, exits gracefully
- Custom endpoint with invalid URL: validation rejects it
- No models pulled: wizard shows "pull a model first" guidance
- Full interactive CLI walkthrough with a live Ollama instance
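The two failure edge cases above (Ollama not running, no models pulled) can be modeled as a small decision function. This is an illustrative sketch with hypothetical names and hint strings, not the wizard's actual messages.

```typescript
// Illustrative classification of the discovery result; type names and
// hint wording are assumptions, not the wizard's actual strings.
type DiscoveryOutcome =
  | { kind: "ok"; models: string[] }
  | { kind: "unreachable"; hint: string }
  | { kind: "no-models"; hint: string };

function classifyDiscovery(models: string[] | null): DiscoveryOutcome {
  if (models === null) {
    // fetch failed or timed out: Ollama is not running or not reachable
    return { kind: "unreachable", hint: "Install Ollama and run `ollama serve`." };
  }
  if (models.length === 0) {
    // endpoint reachable, but no models pulled yet
    return { kind: "no-models", hint: "Pull a model first, e.g. `ollama pull <model>`." };
  }
  return { kind: "ok", models };
}
```

Keeping the classification pure makes both failure paths trivially unit-testable without a live Ollama instance.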
## Compatibility / Migration
- Backward compatible? `Yes`
- Config/env changes? `No`
- Migration needed? `No`
## Failure Recovery (if this breaks)
- How to disable/revert this change quickly: Revert this PR
- Files/config to restore: Restore the 6 modified files and delete the new `src/commands/auth-choice.apply.ollama.ts`
- Known bad symptoms reviewers should watch for: Ollama not appearing in provider list (type mismatch), or onboarding crashing when Ollama is selected (handler error)
## Risks and Mitigations
- Risk: Ollama reachability check could hang if the endpoint is slow to respond.
- Mitigation: 8-second `AbortSignal.timeout` on the fetch call, matching the pattern in the existing `dist/` handler.
## Changes
From onboard:
<img width="1110" height="2024" alt="CleanShot 2026-02-17 at 17 07 56@2x" src="https://github.com/user-attachments/assets/f0a34cf2-ff71-47a9-8778-1da6d04946f6" />
From configure:
<img width="1086" height="1542" alt="CleanShot 2026-02-17 at 17 08 34@2x" src="https://github.com/user-attachments/assets/e2f69b0e-fd79-4c8e-82ee-dbccf62df875" />
When Ollama is not reachable:
<img width="704" height="776" alt="CleanShot 2026-02-17 at 17 01 24@2x" src="https://github.com/user-attachments/assets/2b9a715a-8d4d-4dfc-bbd2-5b949561cbdd" />
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Added Ollama as a selectable provider in the onboarding wizard (`openclaw configure` / `openclaw onboard`). The implementation follows existing patterns for provider handlers, particularly the vLLM handler for local/self-hosted providers.
**Key changes:**
- Added `ollama` to type definitions (`AuthChoice`, `AuthChoiceGroupId`) and provider selection UI
- Created dedicated handler `auth-choice.apply.ollama.ts` that prompts for optional custom endpoint, checks reachability via `/api/tags`, discovers models, and stores credentials
- Integrated handler into the apply chain in `auth-choice.apply.ts`
- Added `setOllamaApiKey` credential helper and `OLLAMA_DEFAULT_MODEL_REF` constant
- Implemented intelligent model selection with preferred cloud models (`glm-5:cloud`, `kimi-k2.5:cloud`, `minimax-m2.5:cloud`, `glm-4.7:flash`)
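The preferred-model selection listed above might be sketched as follows; the constant and function names are illustrative assumptions, as is the fall-back-to-first-discovered behavior.

```typescript
// Hypothetical sketch: prefer a known cloud model when one has been
// pulled, otherwise fall back to the first discovered model.
const PREFERRED_OLLAMA_MODELS = [
  "glm-5:cloud",
  "kimi-k2.5:cloud",
  "minimax-m2.5:cloud",
  "glm-4.7:flash",
];

function pickDefaultModel(discovered: string[]): string | undefined {
  return (
    PREFERRED_OLLAMA_MODELS.find((m) => discovered.includes(m)) ?? discovered[0]
  );
}
```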
**Implementation quality:**
- Follows established patterns from vLLM and other provider handlers
- Uses 8-second timeout for reachability check (consistent with existing code)
- Properly validates HTTP/HTTPS URLs for custom endpoints
- Correctly appends `/v1` suffix to baseUrl for OpenAI-compatible API (per existing Ollama provider implementation)
- Graceful error handling when Ollama is not reachable with helpful install/start instructions
- Type-safe throughout with no `any` usage
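The `/v1` suffix handling noted above can be illustrated with a small normalizer; the helper name `toOpenAiBaseUrl` is hypothetical.

```typescript
// Sketch: normalize a user-supplied endpoint into the OpenAI-compatible
// base URL by stripping trailing slashes and appending /v1 exactly once.
function toOpenAiBaseUrl(endpoint: string): string {
  const trimmed = endpoint.replace(/\/+$/, "");
  return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}
```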
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge with minimal risk
- The implementation closely follows established patterns from similar provider handlers (vLLM, MiniMax, etc). All changes are additive with no modifications to existing provider logic. The handler includes proper error handling, URL validation, timeout protection, and graceful failure modes. Type safety is maintained throughout with appropriate exports and integrations. The PR author has provided comprehensive testing evidence (5405 tests passing, lint/format clean).
- No files require special attention
<sub>Last reviewed commit: 03684ae</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
## Most Similar PRs
- #18587: fix(ollama): improve timeout handling and cooldown logic for local ... — manthis · 2026-02-16 (81.7%)
- #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... — alltomatos · 2026-02-02 (80.9%)
- #4782: fix: Auto-discover Ollama models without requiring explicit API key — spiceoogway · 2026-01-30 (80.6%)
- #7432: Comprehensive Ollama Support PR — charlieduzstuf · 2026-02-02 (77.2%)
- #21977: Preserve provider API for discovered Ollama models — graysurf · 2026-02-20 (77.1%)
- #23776: feat(ollama): auto-register Ollama provider with placeholder key an... — jayy-77 · 2026-02-22 (76.4%)
- #11877: feat(ollama): auto-detect vision capability via /api/show — Nina-VanKhan · 2026-02-08 (76.3%)
- #22569: [Feature] add provider wizard — antonidasyang · 2026-02-21 (74.3%)
- #16098: fix: omit tools param for models without tool support, surface erro... — claw-sylphx · 2026-02-14 (73.6%)
- #11875: fix(ollama): accept /model directive for configured providers — Nina-VanKhan · 2026-02-08 (73.3%)