#7987: feat: Support iflow/GLM-4.6 reasoning_content and tokens
Labels: agents, stale
Cluster: Model Provider Integrations
This PR adds support for the non-standard `reasoning_content` field used by the iflow (Alibaba) API for GLM-4.6 models.
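The `reasoning_content` field is not part of the standard OpenAI chat-completions schema, so responses carrying it need a small mapping step. A minimal sketch of what such a helper could look like (the interface and function names here are hypothetical illustrations, not the PR's actual exports):

```typescript
// Hypothetical shape of an OpenAI-compatible message/delta that may carry
// iflow/GLM-4.6's non-standard `reasoning_content` alongside `content`.
interface ChatDelta {
  content?: string | null;
  reasoning_content?: string | null; // non-standard field emitted by iflow
}

// Split a delta into displayable text and reasoning text. When the native
// field is present, there is no need to parse inline <think> tags.
function extractReasoning(delta: ChatDelta): { text: string; reasoning: string } {
  return {
    text: delta.content ?? "",
    reasoning: delta.reasoning_content ?? "",
  };
}
```

A caller streaming SSE chunks would route `reasoning` into the thinking channel and `text` into the visible output.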
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR adds helper types/functions to handle non-standard OpenAI-compatible responses that include iflow/GLM-4.6’s `reasoning_content` field, and extends usage normalization to capture reasoning token counts. It also updates provider classification to treat iflow as a “reasoning tag provider”.
The main things to double-check are: (1) whether iflow should be forced into `<think>/<final>` tag mode now that native `reasoning_content` support is being introduced (these approaches can conflict), and (2) whether the new `reasoning` usage field is actually retained everywhere it should be (notably `hasNonzeroUsage()` currently ignores it).
<h3>Confidence Score: 3/5</h3>
- This PR is likely safe to merge, but a couple of logic edge cases could lead to incorrect iflow reasoning handling and missing usage reporting.
- The changes are small and localized, but `isReasoningTagProvider()` classifying iflow as tag-based may conflict with native `reasoning_content` support, and `hasNonzeroUsage()` not considering `reasoning` can suppress recording of reasoning-only usage. Additionally, the new mapping helpers are exported but currently unused in-repo, so the feature may be incomplete unless wired up elsewhere.
- `src/utils/provider-utils.ts`
- `src/agents/usage.ts`
- `src/agents/pi-embedded-helpers/openai.ts`
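The second concern above is easy to illustrate: a usage guard that checks only input/output counts would return false for a reasoning-only record and silently drop it. A hedged sketch of the issue and its fix, assuming a simple usage shape (the field names are assumptions, not the repo's actual types):

```typescript
// Assumed usage record; `reasoning` is the field this PR introduces.
interface Usage {
  input: number;
  output: number;
  reasoning?: number;
}

// If this guard ignored `reasoning`, a reasoning-only Usage would be
// treated as empty and never recorded. Including the field fixes that.
function hasNonzeroUsage(u: Usage): boolean {
  return u.input > 0 || u.output > 0 || (u.reasoning ?? 0) > 0;
}
```

With this version, `hasNonzeroUsage({ input: 0, output: 0, reasoning: 5 })` is true, so reasoning-only usage is retained.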
<!-- greptile_other_comments_section -->
**Context used:**
- Context from `dashboard` - CLAUDE.md ([source](https://app.greptile.com/review/custom-context?memory=fd949e91-5c3a-4ab5-90a1-cbe184fd6ce8))
- Context from `dashboard` - AGENTS.md ([source](https://app.greptile.com/review/custom-context?memory=0d0c8278-ef8e-4d6c-ab21-f5527e322f13))
<!-- /greptile_comment -->
Most Similar PRs:
- #6559: Fix LiteLLM reasoning-tag handling + fallback to `<think>` content — by Najia-afk · 2026-02-01 (79.9%)
- #13235: feat: stream reasoning_content via /v1/chat/completions SSE — by mode80 · 2026-02-10 (74.1%)
- #7051: Add io-intelligence model inference provider — by rajagurunath · 2026-02-02 (73.4%)
- #10097: fix: add empty thinking blocks to tool call messages when thinking is… — by cyxer000 · 2026-02-06 (72.8%)
- #8135: Adding Groq as a model provider — by FameDied · 2026-02-03 (72.6%)
- #13295: feat: add Eternal AI provider integration — by peterparkernho · 2026-02-10 (72.4%)
- #7568: feat(agents): add LM Studio auto-discovery and provider support — by sjseo298 · 2026-02-03 (72.1%)
- #20965: feat: Add comprehensive model configuration and discovery for various… — by rodeok · 2026-02-19 (71.7%)
- #13006: fix(provider): disable reasoning tags for gemini-3-pro variants to ... — by whyuds · 2026-02-10 (71.5%)
- #22797: Feat/auto thinking mode — by jrthib · 2026-02-21 (71.4%)