#3678: TUI: fix /reasoning by supporting stream + normalization (marked as AI-assisted; lightly tested)
Cluster: UI Enhancements and Fixes
This change fixes a mismatch between the TUI /reasoning command and the project’s canonical reasoning enum.
What was wrong before
The core enum is `ReasoningLevel = "off" | "on" | "stream"` (thinking.ts), and the Gateway session patch logic validates the same set (`on|off|stream`).
The TUI, however:
- Only advertised `/reasoning <on|off>` in its help text.
- Only offered on/off in autocomplete.
- Passed the raw argument through to `patchSession` without normalization or validation.
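For context, the mismatch can be sketched in TypeScript. The `ReasoningLevel` union is quoted from the PR description; the guard function is a hypothetical illustration of the Gateway-side check, not the actual Gateway code:

```typescript
// Canonical reasoning enum from thinking.ts (per the PR description).
type ReasoningLevel = "off" | "on" | "stream";

const REASONING_LEVELS: readonly ReasoningLevel[] = ["off", "on", "stream"];

// Hypothetical sketch of the Gateway-side validation: only exact
// members of the enum pass, so "streaming" is rejected.
function isReasoningLevel(value: string): value is ReasoningLevel {
  return (REASONING_LEVELS as readonly string[]).includes(value);
}
```

Under this sketch, `isReasoningLevel("stream")` passes while `isReasoningLevel("streaming")` does not, which is exactly the gap the TUI left unguarded.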
What this fixes
- Adds `stream` to TUI autocomplete for `/reasoning`.
- Updates `/reasoning` help text to include `stream`.
- Normalizes user input via `normalizeReasoningLevel()` so common inputs like `streaming` become `stream` before calling `patchSession`.
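A minimal sketch of what `normalizeReasoningLevel()` does: the helper exists in the repo, but this alias table is illustrative, not the actual implementation:

```typescript
type ReasoningLevel = "off" | "on" | "stream";

// Illustrative stand-in for the repo's normalizeReasoningLevel();
// the real helper may accept more aliases than shown here.
function normalizeReasoningLevel(input: string): ReasoningLevel | undefined {
  const aliases: Record<string, ReasoningLevel> = {
    off: "off",
    on: "on",
    stream: "stream",
    streaming: "stream", // the common near-miss this PR calls out
  };
  return aliases[input.trim().toLowerCase()];
}
```

The key behavior is that `"streaming"` maps to the canonical `"stream"`, and unrecognized input yields `undefined` so the TUI can reject it locally instead of sending it to the Gateway.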
If you didn’t make this change, when would it fail, and how?
If a user tries to enable streaming reasoning in the TUI (e.g. `/reasoning stream` or `/reasoning streaming`):
- The TUI would not guide them to the correct value (no autocomplete entry, outdated usage text).
- For `/reasoning streaming` specifically, the TUI would send `reasoningLevel: "streaming"` to the Gateway, which rejects it as invalid (the Gateway expects exactly `on|off|stream`).
- Result: the command fails (you’d see a “reasoning failed: …” message in the TUI) and the session never enters `reasoning: stream`, so the user cannot reliably enable streaming reasoning from the TUI.
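The fixed control flow can be sketched as follows. The `SessionClient` interface and `handleReasoningCommand` are hypothetical shapes for illustration (the real handler lives in `src/tui/tui-command-handlers.ts`, and the real client call is presumably async; this sketch is synchronous for brevity):

```typescript
type ReasoningLevel = "off" | "on" | "stream";

// Hypothetical minimal shape of the Gateway client the TUI handler uses.
interface SessionClient {
  patchSession(patch: { reasoningLevel: ReasoningLevel }): void;
}

// Illustrative fixed handler: normalize first, reject invalid input
// locally, and send only canonical values to the Gateway.
function handleReasoningCommand(
  client: SessionClient,
  rawArg: string,
  normalize: (s: string) => ReasoningLevel | undefined,
): string {
  const level = normalize(rawArg);
  if (level === undefined) {
    return `reasoning failed: expected on|off|stream, got "${rawArg}"`;
  }
  client.patchSession({ reasoningLevel: level });
  return `reasoning set to ${level}`;
}
```

With this shape, `/reasoning streaming` reaches the Gateway as `"stream"` rather than being rejected server-side.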
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR updates the TUI `/reasoning` command to match the canonical `ReasoningLevel` enum (`off|on|stream`). It adds `stream` to `/reasoning` autocomplete, updates the command description, and normalizes user input via `normalizeReasoningLevel()` before calling `client.patchSession`, with tests covering autocomplete and normalization behavior.
The main integration point is `src/tui/tui-command-handlers.ts`, where slash command arguments are parsed and translated into gateway `patchSession` calls; the TUI’s slash command registry in `src/tui/commands.ts` drives help/autocomplete UI.
<h3>Confidence Score: 4/5</h3>
- This PR is low risk and mostly safe to merge; the main remaining issue is a minor documentation mismatch in `/help` output.
- Changes are localized to TUI command wiring and add straightforward normalization using an existing helper (`normalizeReasoningLevel`). New tests cover autocomplete and normalization. I couldn’t run the test/lint/typecheck commands in this environment (npm unavailable), so confidence is slightly reduced.
- src/tui/commands.ts (helpText line for /reasoning)
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
Most Similar PRs
- #12079: TUI: improve thinking UX, tool readability, and live running status... (by rubenfb23 · 2026-02-08 · 79.2%)
- #18187: fix: tool summaries silently dropped when reasoningLevel is stream (by ayanesakura · 2026-02-16 · 76.3%)
- #4495: Fix: emit final assistant event when reply tags hide stream (by ukeate · 2026-01-30 · 75.9%)
- #11109: fix(tui): prefer config contextTokens over persisted session value (by marezgui · 2026-02-07 · 75.5%)
- #9220: Fix: TUI drops API responses silently when runID already finalized (by vishaltandale00 · 2026-02-05 · 75.4%)
- #20516: fix(tui): preserve streamed text on finalize for pure text responses (by MisterGuy420 · 2026-02-19 · 75.3%)
- #16733: fix(ui): avoid injected newlines when tool output is hidden (by jp117 · 2026-02-15 · 75.2%)
- #13235: feat: stream reasoning_content via /v1/chat/completions SSE (by mode80 · 2026-02-10 · 74.8%)
- #10998: fix(agents): pass session thinking/reasoning levels to session_stat... (by wony2 · 2026-02-07 · 74.0%)
- #8083: fix(tui): update model status immediately after /model command (by rohanjangala · 2026-02-03 · 73.9%)