#6559: Fix LiteLLM reasoning-tag handling + fallback to <think> content
Cluster: Model Provider Integrations
Summary
Treat litellm as a reasoning-tag provider.
If a model emits only <think>...</think> with no <final>, fall back to the <think> content.
Why
LiteLLM-proxied models (e.g., Qwen3 via Ollama) often wrap the entire response in <think> without ever emitting <final>. OpenClaw strips the <think> content and returns an empty reply. This change restores visible replies while preserving the existing <final> behavior.
Changes
provider-utils: add litellm to the reasoning-tag providers.
reasoning-tags: when no <final> exists and stripping would leave empty output, return the content inside <think> (also handles an unclosed <think>).
tests: add coverage for think-only and unclosed-think output.
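A rough sketch of the fallback described above (the names `stripReasoningTags`, `isReasoningTagProvider`, and the provider list are illustrative; the actual code lives in `src/shared/text/reasoning-tags.ts` and `src/utils/provider-utils.ts` and may differ in names and signatures):

```typescript
// Illustrative sketch only, not the repo's actual implementation.
const REASONING_TAG_PROVIDERS = new Set(["ollama", "litellm"]); // "litellm" is the new entry

function isReasoningTagProvider(provider: string): boolean {
  return REASONING_TAG_PROVIDERS.has(provider);
}

function stripReasoningTags(raw: string): string {
  // Existing behavior: if the model emitted a <final> block, return its content.
  const finalMatch = raw.match(/<final>([\s\S]*?)<\/final>/);
  if (finalMatch) return finalMatch[1].trim();

  // Strip closed <think> blocks and any unclosed trailing <think>.
  const stripped = raw
    .replace(/<think>[\s\S]*?<\/think>/g, "")
    .replace(/<think>[\s\S]*$/, "")
    .trim();
  if (stripped.length > 0) return stripped;

  // New fallback: everything was inside <think>, so surface that content
  // instead of returning an empty reply.
  const closed = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (closed) return closed[1].trim();
  const unclosed = raw.match(/<think>([\s\S]*)$/);
  if (unclosed) return unclosed[1].trim();
  return stripped;
}
```

With this shape, `<think>r</think>visible` still strips to `visible`, while a think-only response falls back to the reasoning text rather than an empty string.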
Testing
Added unit tests for think-only and unclosed-think inputs.
Verified in a local deployment with LiteLLM + Qwen3 (payloads no longer empty).
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR updates reasoning-tag handling so LiteLLM is treated as a “reasoning tag provider” and adjusts `stripReasoningTagsFromText` to fall back to returning `<think>` content when there is no `<final>` and stripping would otherwise yield an empty output. It also adds unit coverage for “think-only” and “unclosed think” responses.
The change fits into the existing text sanitation pipeline by ensuring provider-specific tag-wrapped outputs don’t result in empty user-visible responses, while keeping the current behavior of stripping tags when `<final>` is present.
<h3>Confidence Score: 2/5</h3>
- This PR is risky to merge as-is because it can expose `<think>` (hidden reasoning) content to end users in common inputs.
- The new fallback logic returns content from closed `<think>...</think>` blocks whenever there is no `<final>` and the stripped output is empty, which can leak reasoning text rather than only rescuing malformed/unclosed tag cases. The LiteLLM provider detection is also likely too strict if provider strings include prefixes/suffixes.
- src/shared/text/reasoning-tags.ts, src/utils/provider-utils.ts
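The strict provider-matching concern raised above could be addressed by normalizing the provider string before the lookup; a hypothetical sketch (names and provider list are illustrative, not the repo's actual code):

```typescript
// Hypothetical: accept provider strings such as "litellm/openai" or
// "LiteLLM-proxy", not only the exact token "litellm".
const REASONING_TAG_PROVIDER_TOKENS = ["litellm", "ollama"];

function isReasoningTagProvider(provider: string): boolean {
  const normalized = provider.toLowerCase();
  return REASONING_TAG_PROVIDER_TOKENS.some((token) => normalized.includes(token));
}
```

Whether substring matching is appropriate depends on how provider strings are actually formatted in this codebase; an exact match on a normalized token may be safer if providers are already canonicalized.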
<!-- greptile_other_comments_section -->
<sub>(2/5) Greptile learns from your feedback when you react with thumbs up/down!</sub>
**Context used:**
- Context from `dashboard` - CLAUDE.md ([source](https://app.greptile.com/review/custom-context?memory=fd949e91-5c3a-4ab5-90a1-cbe184fd6ce8))
- Context from `dashboard` - AGENTS.md ([source](https://app.greptile.com/review/custom-context?memory=0d0c8278-ef8e-4d6c-ab21-f5527e322f13))
<!-- /greptile_comment -->
Most Similar PRs
#7987: feat: Support iflow/GLM-4.6 reasoning_content and tokens
by EisonMe · 2026-02-03
79.9%
#15606: LLM Task: add explicit thinking level wiring
by xadenryan · 2026-02-13
78.9%
#17304: feat(gemini): robust handling for non-XML reasoning headers (`Think...
by YoshiaKefasu · 2026-02-15
77.8%
#11876: fix(ollama): don't auto-enable enforceFinalTag for Ollama models
by Nina-VanKhan · 2026-02-08
77.5%
#22797: Feat/auto thinking mode
by jrthib · 2026-02-21
77.4%
#17455: fix: strip content before orphan closing think tags
by jwt625 · 2026-02-15
77.0%
#21182: feat(litellm): enhance LiteLLM provider with model discovery and pr...
by hiboute · 2026-02-19
76.4%
#10430: fix: remove Minimax from isReasoningTagProvider
by echoedinvoker · 2026-02-06
75.4%
#6685: fix: suppress thinking leak for Synthetic reasoning models
by AkiLetschne · 2026-02-01
75.1%
#10097: fix: add empty thinking blocks to tool call messages when thinking is…
by cyxer000 · 2026-02-06
75.0%