#11876: fix(ollama): don't auto-enable enforceFinalTag for Ollama models
Cluster: Ollama Model Enhancements
## Summary
`isReasoningTagProvider("ollama")` returns true for ALL Ollama models, which auto-enabled `enforceFinalTag` and required LLM output to be wrapped in `<final>` tags.
However, Ollama models (including reasoning models like deepseek-r1) use `<think>` tags for reasoning but emit the final response as **plain text**; they do NOT wrap it in `<final>` tags. With `enforceFinalTag` enabled, `stripBlockTags` found no `<final>` tags and returned an empty string, resulting in silent empty responses.
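A minimal sketch of the failure mode described above (function name and tag-stripping logic are assumptions for illustration, not the repo's actual implementation): with enforcement on, everything outside `<final>` tags is discarded, so Ollama's plain-text answers vanish.

```typescript
// Hypothetical sketch of <final>-tag enforcement (names assumed).
// Reasoning content in <think>…</think> is always stripped; with
// enforceFinalTag on, only text inside <final>…</final> survives.
function stripBlockTags(raw: string, enforceFinalTag: boolean): string {
  const withoutThink = raw.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
  if (!enforceFinalTag) return withoutThink;
  // With enforcement, anything outside <final> tags is discarded.
  const match = withoutThink.match(/<final>([\s\S]*?)<\/final>/);
  return match ? match[1].trim() : "";
}

// Typical Ollama reasoning output: a <think> block followed by plain
// text, with no <final> wrapper.
const ollamaOutput = "<think>reasoning…</think>\nHello, here is the answer.";

console.log(stripBlockTags(ollamaOutput, true));  // "" — silent empty response
console.log(stripBlockTags(ollamaOutput, false)); // "Hello, here is the answer."
```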
## Changes
- Removed automatic `enforceFinalTag: true` based on provider in `get-reply-run.ts`
- Changed `resolveEnforceFinalTag` in `agent-runner-utils.ts` to only honor explicit `run.enforceFinalTag` configuration
- The `<think>` tag stripping still works correctly without `enforceFinalTag`
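The before/after resolution logic can be sketched roughly as follows (types, names, and the exact signature in `agent-runner-utils.ts` are assumptions based on this description, not the actual code):

```typescript
// Hedged sketch of the change described above; shapes are illustrative.
type Run = { enforceFinalTag?: boolean; provider?: string };

const REASONING_TAG_PROVIDERS = new Set(["ollama" /* … */]);
const isReasoningTagProvider = (p?: string): boolean =>
  !!p && REASONING_TAG_PROVIDERS.has(p);

// Before: any reasoning-tag provider (i.e. all Ollama models) implicitly
// forced enforcement on, even when the run never asked for it.
function resolveEnforceFinalTagOld(run: Run): boolean {
  return run.enforceFinalTag ?? isReasoningTagProvider(run.provider);
}

// After: only an explicit opt-in enables <final>-tag enforcement.
function resolveEnforceFinalTagNew(run: Run): boolean {
  return run.enforceFinalTag === true;
}

console.log(resolveEnforceFinalTagOld({ provider: "ollama" })); // true — the bug
console.log(resolveEnforceFinalTagNew({ provider: "ollama" })); // false — fixed
```

Callers that genuinely need `<final>`-tag parsing (per the reviewer's concern below) would now have to set `run.enforceFinalTag: true` explicitly.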
## Test Plan
1. Configure an Ollama model (e.g., `gemma3`)
2. Send a message via the dashboard chat
3. Verify the response is displayed correctly (not empty)
🤖 Generated with [Claude Code](https://claude.ai/code)
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This change removes provider-based auto-enabling of `enforceFinalTag` for embedded Pi runs.
- In `src/auto-reply/reply/get-reply-run.ts`, the code no longer injects `enforceFinalTag: true` when `isReasoningTagProvider(provider)` is true.
- In `src/auto-reply/reply/agent-runner-utils.ts`, `resolveEnforceFinalTag` now only returns `true` when `run.enforceFinalTag` is explicitly set, rather than inferring it from the provider.
This is intended to prevent Ollama outputs from being stripped to an empty string when models emit `<think>` but do not wrap final output in `<final>` tags.
<h3>Confidence Score: 2/5</h3>
- This PR is not safe to merge as-is due to a behavior change that likely breaks existing reasoning-tag provider expectations and associated tests.
- The change removes all provider-based enforcement of `enforceFinalTag`, but the repository includes tests asserting that fallback reasoning-tag providers must run with `enforceFinalTag: true`. Unless call sites are updated to set `run.enforceFinalTag` for those providers, this alters behavior and will likely cause test failures or incorrect output parsing for providers that rely on `<final>` tags.
- src/auto-reply/reply/agent-runner-utils.ts, src/auto-reply/reply/get-reply-run.ts, and any tests/logic relying on isReasoningTagProvider-based enforcement (e.g. src/auto-reply/reply/agent-runner.reasoning-tags.test.ts)
<!-- greptile_other_comments_section -->
**Context used:**
- Context from `dashboard` - CLAUDE.md ([source](https://app.greptile.com/review/custom-context?memory=fd949e91-5c3a-4ab5-90a1-cbe184fd6ce8))
- Context from `dashboard` - AGENTS.md ([source](https://app.greptile.com/review/custom-context?memory=0d0c8278-ef8e-4d6c-ab21-f5527e322f13))
<!-- /greptile_comment -->
Most Similar PRs
- #6559: Fix LiteLLM reasoning-tag handling + fallback to `<think>` content — by Najia-afk · 2026-02-01 (77.5%)
- #4782: fix: Auto-discover Ollama models without requiring explicit API key — by spiceoogway · 2026-01-30 (76.2%)
- #5115: fix: guard against undefined model.name in Ollama discovery (#5062) — by TheWildHustle · 2026-01-31 (74.5%)
- #10430: fix: remove Minimax from isReasoningTagProvider — by echoedinvoker · 2026-02-06 (74.5%)
- #16098: fix: omit tools param for models without tool support, surface erro... — by claw-sylphx · 2026-02-14 (74.0%)
- #11875: fix(ollama): accept /model directive for configured providers — by Nina-VanKhan · 2026-02-08 (72.8%)
- #13006: fix(provider): disable reasoning tags for gemini-3-pro variants to ... — by whyuds · 2026-02-10 (72.7%)
- #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... — by alltomatos · 2026-02-02 (72.5%)
- #17304: feat(gemini): robust handling for non-XML reasoning headers (`Think...`) — by YoshiaKefasu · 2026-02-15 (71.7%)
- #14323: feat: add forcePrependThinkTag option for reasoning models — by LisaMacintosh · 2026-02-11 (71.4%)