#21735: fix(hooks): pass configured model to slug generator (#21650)
size: S
experienced-contributor
Cluster: Model Configuration Enhancements
## Summary
- **Bug**: `generateSlugViaLLM` ignores the user-configured model and always falls back to the hardcoded `anthropic/claude-opus-4-6`
- **Root cause**: `runEmbeddedPiAgent()` called without `provider`/`model` params in `src/hooks/llm-slug-generator.ts`
- **Fix**: resolve configured model via `resolveDefaultModelForAgent()` and pass it through
Fixes #21650
## Problem
`generateSlugViaLLM()` calls `runEmbeddedPiAgent()` without passing `provider` or `model`. The embedded runner defaults to `DEFAULT_PROVIDER="anthropic"` / `DEFAULT_MODEL="claude-opus-4-6"` (from `src/agents/defaults.ts`), ignoring whatever the user configured in `agents.defaults.model.primary`.
**Before fix:**
```
Config: agents.defaults.model.primary = "openai-codex/gpt-5.3-codex"
Actual: runEmbeddedPiAgent called with provider=undefined, model=undefined
Result: Falls back to anthropic/claude-opus-4-6
```
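The fallback path can be sketched as follows. This is a hypothetical reconstruction: `DEFAULT_PROVIDER`/`DEFAULT_MODEL` come from the PR text, and the runner's real signature in `src/agents/defaults.ts` may differ.

```typescript
// Assumed defaults, per the PR description (src/agents/defaults.ts).
const DEFAULT_PROVIDER = "anthropic";
const DEFAULT_MODEL = "claude-opus-4-6";

interface EmbeddedAgentOptions {
  prompt: string;
  provider?: string;
  model?: string;
}

function resolveRunnerModel(opts: EmbeddedAgentOptions): { provider: string; model: string } {
  // When the caller omits provider/model, the runner silently falls back
  // to the hardcoded defaults -- this is the bug path.
  return {
    provider: opts.provider ?? DEFAULT_PROVIDER,
    model: opts.model ?? DEFAULT_MODEL,
  };
}

// Before the fix, generateSlugViaLLM called the runner with no model info:
const buggy = resolveRunnerModel({ prompt: "Generate a slug" });
// → { provider: "anthropic", model: "claude-opus-4-6" }, regardless of config
```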
## Changes
- `src/hooks/llm-slug-generator.ts` — import `resolveDefaultModelForAgent`, call it to resolve the configured model, and pass `provider`/`model` to `runEmbeddedPiAgent()`
- `src/hooks/llm-slug-generator.model-selection.test.ts` — 3 regression tests covering configured non-anthropic model, configured anthropic model, and empty-config fallback
- `CHANGELOG.md` — add fix entry under 2026.2.20
**After fix:**
```
Config: agents.defaults.model.primary = "openai-codex/gpt-5.3-codex"
Actual: runEmbeddedPiAgent called with provider="openai-codex", model="gpt-5.3-codex"
```
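A minimal sketch of the resolution step, assuming `resolveDefaultModelForAgent` splits the configured `provider/model` reference (the helper name is from the PR description; its real signature and return shape may differ):

```typescript
// Shape of the relevant config slice (agents.defaults.model.primary).
interface AgentConfig {
  agents?: { defaults?: { model?: { primary?: string } } };
}

// Hypothetical stand-in for the real helper: resolve the configured primary
// model reference into separate provider/model fields.
function resolveDefaultModelForAgent(cfg: AgentConfig): { provider?: string; model?: string } {
  const primary = cfg.agents?.defaults?.model?.primary;
  if (!primary) return {}; // empty config: let the runner apply its own defaults
  const slash = primary.indexOf("/");
  if (slash === -1) return { model: primary }; // bare model id, no provider prefix
  // "openai-codex/gpt-5.3-codex" → provider "openai-codex", model "gpt-5.3-codex"
  return { provider: primary.slice(0, slash), model: primary.slice(slash + 1) };
}

const { provider, model } = resolveDefaultModelForAgent({
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
});
// provider === "openai-codex", model === "gpt-5.3-codex" -- now passed through
// to runEmbeddedPiAgent instead of undefined.
```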
## Test plan
- [x] New test: configured openai-codex model passes through to runEmbeddedPiAgent
- [x] New test: configured anthropic model passes through correctly
- [x] New test: empty config falls back to anthropic/claude-opus-4-6 defaults (expected behavior)
- [x] All 3 tests pass
- [x] Lint passes (0 errors)
- [x] Format check passes
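The regression-test idea can be illustrated standalone: stub the runner, capture the options it receives, and assert the configured model reaches it. The real tests use the project's test framework and the actual `generateSlugViaLLM`; the wiring below is a hypothetical stand-in.

```typescript
// Capture what the (fake) embedded runner is called with.
type RunnerOpts = { provider?: string; model?: string };
const calls: RunnerOpts[] = [];
const fakeRunEmbeddedPiAgent = (opts: RunnerOpts): void => {
  calls.push(opts);
};

// Hypothetical stand-in for the fixed generateSlugViaLLM wiring: resolve the
// configured primary model, then pass provider/model through to the runner.
function generateSlug(primary: string | undefined): void {
  if (primary) {
    const i = primary.indexOf("/");
    fakeRunEmbeddedPiAgent({ provider: primary.slice(0, i), model: primary.slice(i + 1) });
  } else {
    fakeRunEmbeddedPiAgent({}); // empty config: runner applies its own defaults
  }
}

generateSlug("openai-codex/gpt-5.3-codex");
generateSlug(undefined);
// calls[0] → { provider: "openai-codex", model: "gpt-5.3-codex" }
// calls[1] → {} (falls back to anthropic defaults inside the runner)
```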
## E2E verification (DashScope/Qwen)
Verified the full resolution chain with a real DashScope API call using `qwen3.5-plus`:
```
── Step 1: Model resolution ──
provider: dashscope
model: qwen3.5-plus
✅ PASS — resolves to dashscope/qwen3.5-plus (not anthropic defaults)
── Step 2: Direct DashScope API call ──
model used: qwen3.5-plus
raw slug: rate-limit
cleaned: rate-limit
✅ PASS — Qwen returned a valid slug
```
Config used:
```json
{
"models": {
"providers": {
"dashscope": {
"baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"api": "openai-completions",
"models": [{ "id": "qwen3.5-plus", "contextWindow": 131072 }]
}
}
},
"agents": {
"defaults": {
"model": { "primary": "dashscope/qwen3.5-plus" }
}
}
}
```
This confirms:
1. `resolveDefaultModelForAgent` correctly resolves `dashscope/qwen3.5-plus` from config
2. DashScope API responds to the slug prompt with a valid slug (`rate-limit`)
3. Unit tests verify these resolved params reach `runEmbeddedPiAgent`
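The direct API step above can be sketched as request construction against the OpenAI-compatible endpoint from the config. The URL path follows the standard OpenAI chat-completions convention; the prompt text here is illustrative, not the one the hook actually sends.

```typescript
// Base URL taken from the config above (DashScope compatible mode).
const baseUrl = "https://dashscope.aliyuncs.com/compatible-mode/v1";

// Build an OpenAI-compatible chat-completions request for a slug prompt.
function buildSlugRequest(model: string, topic: string) {
  return {
    url: `${baseUrl}/chat/completions`,
    body: {
      model,
      messages: [
        { role: "user", content: `Suggest a short kebab-case slug for: ${topic}` },
      ],
    },
  };
}

const req = buildSlugRequest("qwen3.5-plus", "rate limiting");
// req.url ends with "/chat/completions"; req.body.model === "qwen3.5-plus"
```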
## Bug reproduction on main
```
── main branch ──
provider passed: undefined
model passed: undefined
❌ BUG CONFIRMED — provider/model are undefined, will default to anthropic/claude-opus-4-6
```
## Effect on User Experience
**Before:** Slug generation always uses anthropic/claude-opus-4-6 regardless of user config, causing failures or unexpected billing for users on non-Anthropic providers.
**After:** Slug generation respects the user's configured model/provider, consistent with all other agent operations.
## Most Similar PRs
- #14464 (85.6%): fix(hooks): use configured primary model for slug generation (#14272) · lailoo · 2026-02-12
- #23286 (82.4%): fix: use configured model in llm-slug-generator instead of hardcoded … · wsman · 2026-02-22
- #15574 (82.2%): fix(hooks): use configured model for llm slug generation (#15510) · TsekaLuk · 2026-02-13
- #9080 (82.0%): Fix: Use configured model for memory file slug generation · vishaltandale00 · 2026-02-04
- #18867 (81.1%): fix: route slug generator LLM call through configured provider · Celegormhenry · 2026-02-17
- #5945 (80.8%): fix: use configured model for slug generator (AI-assisted) · HEDELKA · 2026-02-01
- #4793 (78.4%): hooks: use configured model for slug generator · yoyooyooo · 2026-01-30
- #13401 (76.8%): fix: slug generator should use agent's primary model instead of har... · pahud · 2026-02-10
- #16838 (72.7%): fix: include configured fallbacks in model allowlist · taw0002 · 2026-02-15
- #6673 (71.9%): fix: preserve allowAny flag in createModelSelectionState for custom... · tenor0 · 2026-02-01