
#21199: Models: suppress repeated vLLM/Ollama discovery warnings (#21037)

by itsishant · open · 2026-02-19 19:32
Labels: agents · size: S
## Problem

When users enter `openclaw config` → models → vLLM and exit with Ctrl+C, a partial configuration gets saved (an auth profile is created). This causes "Failed to discover vLLM models: TypeError: fetch failed" warnings to spam the console on every openclaw operation.

## Solution

- **Cache failed discovery attempts** per session to avoid repeated warnings
- **Improved error messages** with:
  - the specific URL that failed
  - an actionable removal command: `openclaw config unset models.providers.vllm`
- **Auto-recovery**: the cache clears automatically on successful discovery
- **Applied to both Ollama and vLLM** providers for consistency

## Changes

- Added `vllmDiscoveryFailureCache` and `ollamaDiscoveryFailureCache` Sets
- Modified `discoverVllmModels()` and `discoverOllamaModels()` to check the cache before warning
- Enhanced error messages with failing URLs and removal commands
- Cleared the cache on successful discovery

## Testing

- Warnings now appear only once per session
- Users can remove the stale config with: `openclaw config unset models.providers.vllm`
- Pre-existing functionality unchanged

## References

Fixes #21037

<h3>Greptile Summary</h3>

Added per-session caching for vLLM and Ollama discovery failures to suppress repeated console warnings. Major changes:

- Introduced `ollamaDiscoveryFailureCache` and `vllmDiscoveryFailureCache` Sets to track failed discovery attempts
- Enhanced error messages with specific URLs and actionable removal commands (`openclaw config unset models.providers.{vllm,ollama}`)
- Implemented auto-recovery by clearing the cache on successful discovery
- Applied a consistent caching pattern across both Ollama and vLLM providers

The implementation correctly addresses the issue where a partial config (created by an early exit from `openclaw config`) caused repeated warnings on every operation. Cache keys are appropriately scoped (URL for Ollama, URL + apiKey for vLLM) and the solution maintains backward compatibility.
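The caching pattern described above can be sketched as follows. Only the identifiers `vllmDiscoveryFailureCache` and `discoverVllmModels` come from the PR; the `fetchModels` helper, the `warnings` array, and the exact message wording are illustrative assumptions, not the actual openclaw implementation.

```typescript
// Per-session failure cache, as described in the PR.
const vllmDiscoveryFailureCache = new Set<string>();

// Stand-in for console.warn so the behavior is observable in this sketch.
const warnings: string[] = [];

// Hypothetical discovery call; a real implementation would hit the vLLM endpoint.
async function fetchModels(baseUrl: string): Promise<string[]> {
  throw new TypeError("fetch failed"); // simulate an unreachable server
}

async function discoverVllmModels(baseUrl: string, apiKey = ""): Promise<string[]> {
  const cacheKey = `${baseUrl}:${apiKey}`; // URL + apiKey scoping, per the PR
  try {
    const models = await fetchModels(baseUrl);
    vllmDiscoveryFailureCache.delete(cacheKey); // auto-recovery on success
    return models;
  } catch (err) {
    // Warn only on the first failure for this key in the current session.
    if (!vllmDiscoveryFailureCache.has(cacheKey)) {
      vllmDiscoveryFailureCache.add(cacheKey);
      warnings.push(
        `Failed to discover vLLM models at ${baseUrl}: ${err}. ` +
          `Remove the provider with: openclaw config unset models.providers.vllm`,
      );
    }
    return []; // degrade gracefully instead of repeating the warning
  }
}
```

Calling `discoverVllmModels` twice with the same URL would emit a single warning; a later successful discovery would evict the key so a fresh failure warns again.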
<h3>Confidence Score: 5/5</h3>

- This PR is safe to merge with minimal risk
- The implementation is well designed, with consistent patterns across both providers, proper cache key scoping, and auto-recovery on success. The changes are isolated to the discovery functions, with no breaking changes to APIs or existing behavior. The caching mechanism is simple and effective for the stated goal.
- No files require special attention

<sub>Last reviewed commit: 9cc3978</sub>
