
#16098: fix: omit tools param for models without tool support, surface errors (#15702)

by claw-sylphx · open · 2026-02-14 08:28
## Problem

When using local Ollama models that don't support tool/function calling, OpenClaw fails silently with `(no output)`. A direct curl to Ollama works fine. The issue is that OpenClaw sends tool definitions to models that can't handle them, and the resulting errors are swallowed.

Closes #15702

## Changes

### 1. Add `compat.supportsTools` config flag (`src/config/`)

- Added `supportsTools?: boolean` to the `ModelCompatConfig` type and Zod schema
- When set to `false`, tools are omitted from API requests to the model
- Defaults to `true` (existing behavior preserved)

### 2. Skip tools when the model lacks support (`src/agents/pi-embedded-runner/run/attempt.ts`)

- Check `model.compat.supportsTools` before creating tool definitions
- When `supportsTools === false`, tools are omitted (same as `disableTools`)
- Logs an info message when tools are skipped, for debugging

### 3. Surface Ollama errors (`src/agents/ollama-stream.ts`)

- Detect and surface the Ollama-level `error` field in streamed responses (e.g. the model doesn't support tools)
- Detect empty responses when tools were sent, and surface a helpful error message suggesting `compat.supportsTools=false`
- Added an `error?` field to the `OllamaChatResponse` interface

### 4. Tests (`src/agents/ollama-stream.test.ts`)

- Test: the Ollama `error` field is surfaced from a streamed response
- Test: an empty response with tools triggers a helpful error message
- Test: an empty response without tools does NOT trigger an error (normal behavior)

## Usage

For Ollama models that don't support tools, add `supportsTools: false` to the model's compat config:

```json
{
  "models": {
    "providers": {
      "my-ollama": {
        "baseUrl": "http://127.0.0.1:11434",
        "api": "ollama",
        "models": [{
          "id": "tinyllama",
          "name": "TinyLlama",
          "compat": { "supportsTools": false }
        }]
      }
    }
  }
}
```

### Greptile Summary

This PR addresses issue #15702, where Ollama models without tool/function calling support fail silently with `(no output)`.
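The tool-omission decision described under Changes can be sketched roughly as follows. This is a minimal illustration with simplified types; the helper name `shouldOmitTools` and the exact shapes are assumptions, not the PR's actual code.

```typescript
// Simplified stand-ins for the config types described in the PR.
interface ModelCompatConfig {
  supportsTools?: boolean; // unset defaults to true (existing behavior)
}

interface ModelConfig {
  id: string;
  compat?: ModelCompatConfig;
}

// Returns true when tool definitions should be left out of the request:
// either tools are explicitly disabled, or the model's compat config
// declares it cannot handle them. Only an explicit `false` opts out.
function shouldOmitTools(model: ModelConfig, disableTools = false): boolean {
  if (disableTools) return true;
  return model.compat?.supportsTools === false;
}
```

With the Usage config above, `shouldOmitTools({ id: "tinyllama", compat: { supportsTools: false } })` would return `true`, while a model with no `compat` block keeps tools enabled.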
It makes two main changes:

- Adds a `compat.supportsTools` config flag that, when set to `false`, omits tools from API requests to the model. This is checked in `attempt.ts` alongside the existing `disableTools` flag, and the type and Zod schema are updated accordingly.
- Surfaces Ollama-level errors that were previously swallowed: (1) detects the `error` field in streamed NDJSON responses, and (2) detects empty responses when tools were sent, providing a helpful error message pointing users to the `compat.supportsTools=false` config.

Additionally, this PR changes `sanitizeAntigravityThinkingBlocks` to convert unsigned thinking blocks to text blocks instead of dropping them entirely, preserving reasoning content. This is a separate behavioral fix bundled into the same commit (from #15681). Test coverage is thorough across all changes, with both unit tests and e2e integration tests.

### Confidence Score: 4/5

- This PR is safe to merge with minimal risk: the changes are well-scoped, defensive, and backward-compatible.
- The config flag addition is backward-compatible (defaults to `true`). The Ollama error surfacing is a clear improvement over silent failures. The antigravity thinking-block change preserves content that was previously lost. All changes have thorough test coverage. The `Record<string, unknown>` cast in `attempt.ts` is pragmatic given the external SDK type. No critical issues found.
- No files require special attention. The `src/agents/ollama-stream.ts` empty-response heuristic has a minor edge case with whitespace-only responses (already discussed in previous review threads), but this is unlikely to matter in practice.

<sub>Last reviewed commit: 3975256</sub>
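The error-surfacing behavior reviewed above can be sketched as a per-chunk check over the streamed NDJSON responses. The `error` field mirrors Ollama's chat API; the function name `checkChunk` and the surrounding shapes are hypothetical, assumed for illustration only.

```typescript
// Minimal stand-in for the streamed response shape, including the
// `error?` field this PR adds to the OllamaChatResponse interface.
interface OllamaChatResponse {
  message?: { content?: string };
  error?: string;
  done?: boolean;
}

// Throws when a streamed chunk carries an Ollama-level error, or when the
// stream finished with no content even though tools were sent — the two
// failure modes the PR surfaces instead of returning `(no output)`.
function checkChunk(
  chunk: OllamaChatResponse,
  toolsWereSent: boolean,
  sawContent: boolean,
): void {
  if (chunk.error) {
    throw new Error(`Ollama error: ${chunk.error}`);
  }
  if (chunk.done && !sawContent && toolsWereSent) {
    throw new Error(
      "Model returned an empty response with tools enabled; " +
        "if it does not support tool calling, set compat.supportsTools=false",
    );
  }
}
```

Note the heuristic in the second branch: an empty stream without tools is treated as normal, which matches the third test case listed in the PR.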
