#22107: docs: add local providers troubleshooting guide
docs
size: XS
Cluster: Model Provider Integrations
Users running Ollama, vLLM, or other local models hit the same issues repeatedly (#22055, #4544, #1695): confusing API key errors, model discovery failures, and session corruption from malformed responses.
There's no single page covering these common pitfalls across local providers. The individual provider docs have minimal troubleshooting sections.
**Adds** `docs/providers/local-troubleshooting.md` covering:
- "No API key found" (dummy key requirement)
- "Unknown model" (discovery prerequisites)
- No tool calling (model capability check)
- Connection refused (server not running)
- Slow responses/timeouts (hardware limits)
- Session corruption after model errors
- Mixed local + cloud fallback setup
Also adds a link from the providers index page.
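The dummy-key pitfall in the first bullet can be shown without any client library: Ollama's and vLLM's OpenAI-compatible endpoints ignore the key's value, but OpenAI-style clients reject a request with no key at all, so any non-empty placeholder satisfies the check. A minimal stdlib-only sketch; the base URL, endpoint path, and model name are illustrative assumptions (Ollama's defaults), not OpenClaw configuration:

```python
import json
import urllib.request


def build_local_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            # Dummy key: local servers ignore the value, but OpenAI-style
            # clients require a non-empty key, so any placeholder works.
            "Authorization": "Bearer local-dummy-key",
            "Content-Type": "application/json",
        },
    )


# Hypothetical defaults: Ollama serves an OpenAI-compatible API at /v1.
req = build_local_chat_request("http://localhost:11434/v1", "llama3.2", "hi")
```

Sending `req` with `urllib.request.urlopen` would succeed only with a server running; the point here is that the placeholder key is all the "authentication" a local provider needs.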
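For the "connection refused" bullet, a rough stdlib probe can separate "nothing is listening" from "server reachable but slow". This is a diagnostic sketch, not part of OpenClaw; the host and port are whatever the local server is expected to listen on (e.g. Ollama's default 11434):

```python
import socket


def diagnose_local_endpoint(host: str, port: int, timeout: float = 2.0) -> str:
    """Coarsely classify why a local model server endpoint might be failing."""
    try:
        # A plain TCP connect is enough to tell "not running" from "slow".
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except ConnectionRefusedError:
        return "connection refused: nothing is listening (is the server running?)"
    except socket.timeout:
        return "timeout: host reachable but not answering (busy or firewalled?)"
    except OSError as exc:
        return f"network error: {exc}"
```

A "connection refused" result usually means the serve process was never started or is bound to a different port, which is the first thing the guide tells users to check.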
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Adds a new `docs/providers/local-troubleshooting.md` page consolidating common issues when running OpenClaw with local LLM providers (Ollama, vLLM). Covers API key requirements, model discovery, tool calling, connection issues, performance, session corruption, and mixed local/cloud setups. Also adds a link from the providers index page.
- Content is accurate and consistent with the existing Ollama and vLLM provider docs
- All internal links and anchors resolve correctly (verified against `ollama.md`, `vllm.md`, and `gateway/troubleshooting.md`)
- Frontmatter follows the established `summary`/`read_when`/`title` pattern
- The page is not added to `docs.json` sidebar navigation, which follows the existing pattern (several providers like Ollama, vLLM, Venice, NVIDIA are also only linked from the index page)
- Minor style note: the troubleshooting link is inserted mid-list in the provider index between vLLM and Qianfan, which slightly disrupts the provider catalog flow
<h3>Confidence Score: 4/5</h3>
- This PR is safe to merge: it only adds documentation, with no code changes.
- Documentation-only PR adding a well-structured troubleshooting guide. Content is accurate and links are valid. Minor placement concern in the index page is non-blocking.
- No files require special attention. The link placement in `docs/providers/index.md` is a minor style suggestion.
<sub>Last reviewed commit: cf18a52</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
Most Similar PRs
- #11525: docs: Add LiteLLM provider documentation (shin-bot-litellm, 2026-02-07, 79.8%)
- #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... (alltomatos, 2026-02-02, 79.2%)
- #6484: docs: add LiteLLM + Nebius integration guide (demianarc, 2026-02-01, 79.1%)
- #4782: fix: Auto-discover Ollama models without requiring explicit API key (spiceoogway, 2026-01-30, 79.0%)
- #18587: fix(ollama): improve timeout handling and cooldown logic for local ... (manthis, 2026-02-16, 77.3%)
- #22069: fix(agents): add provider-specific hints for local model auth errors (pierreeurope, 2026-02-20, 76.8%)
- #5469: docs: add Portkey AI gateway integration (vrushankportkey, 2026-01-31, 76.0%)
- #5195: update expose ollama to network settings in docs (MorganMarshall, 2026-01-31, 75.3%)
- #12707: docs: Add BlockRun provider (smart LLM routing + pay-per-request) (1bcMax, 2026-02-09, 75.0%)
- #19307: docs: add Google (Gemini) provider documentation (manueltarouca, 2026-02-17, 73.8%)