
#21735: fix(hooks): pass configured model to slug generator (#21650)

by lailoo · open · 2026-02-20 09:21
size: S experienced-contributor
## Summary

- **Bug**: `generateSlugViaLLM` ignores the user-configured model and always falls back to the hardcoded `anthropic/claude-opus-4-6`
- **Root cause**: `runEmbeddedPiAgent()` is called without `provider`/`model` params in `src/hooks/llm-slug-generator.ts`
- **Fix**: resolve the configured model via `resolveDefaultModelForAgent()` and pass it through

Fixes #21650

## Problem

`generateSlugViaLLM()` calls `runEmbeddedPiAgent()` without passing `provider` or `model`. The embedded runner defaults to `DEFAULT_PROVIDER="anthropic"` / `DEFAULT_MODEL="claude-opus-4-6"` (from `src/agents/defaults.ts`), ignoring whatever the user configured in `agents.defaults.model.primary`.

**Before fix:**

```
Config: agents.defaults.model.primary = "openai-codex/gpt-5.3-codex"
Actual: runEmbeddedPiAgent called with provider=undefined, model=undefined
Result: Falls back to anthropic/claude-opus-4-6
```

## Changes

- `src/hooks/llm-slug-generator.ts` — import `resolveDefaultModelForAgent`, call it to resolve the configured model, and pass `provider`/`model` to `runEmbeddedPiAgent()`
- `src/hooks/llm-slug-generator.model-selection.test.ts` — 3 regression tests covering a configured non-Anthropic model, a configured Anthropic model, and the empty-config fallback
- `CHANGELOG.md` — add fix entry under 2026.2.20

**After fix:**

```
Config: agents.defaults.model.primary = "openai-codex/gpt-5.3-codex"
Actual: runEmbeddedPiAgent called with provider="openai-codex", model="gpt-5.3-codex"
```

## Test plan

- [x] New test: configured openai-codex model passes through to `runEmbeddedPiAgent`
- [x] New test: configured anthropic model passes through correctly
- [x] New test: empty config falls back to anthropic/claude-opus-4-6 defaults (expected behavior)
- [x] All 3 tests pass
- [x] Lint passes (0 errors)
- [x] Format check passes

## E2E verification (DashScope/Qwen)

Verified the full resolution chain with a real DashScope API call using `qwen3.5-plus`:

```
── Step 1: Model resolution ──
provider: dashscope
model: qwen3.5-plus
✅ PASS — resolves to dashscope/qwen3.5-plus (not anthropic defaults)

── Step 2: Direct DashScope API call ──
model used: qwen3.5-plus
raw slug: rate-limit
cleaned: rate-limit
✅ PASS — Qwen returned a valid slug
```

Config used:

```json
{
  "models": {
    "providers": {
      "dashscope": {
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "api": "openai-completions",
        "models": [{ "id": "qwen3.5-plus", "contextWindow": 131072 }]
      }
    }
  },
  "agents": {
    "defaults": { "model": { "primary": "dashscope/qwen3.5-plus" } }
  }
}
```

This confirms:

1. `resolveDefaultModelForAgent` correctly resolves `dashscope/qwen3.5-plus` from config
2. The DashScope API responds to the slug prompt with a valid slug (`rate-limit`)
3. Unit tests verify these resolved params reach `runEmbeddedPiAgent`

## Bug reproduction on main

```
── main branch ──
provider passed: undefined
model passed: undefined
❌ BUG CONFIRMED — provider/model are undefined, will default to anthropic/claude-opus-4-6
```

## Effect on User Experience

**Before:** Slug generation always uses anthropic/claude-opus-4-6 regardless of user config, causing failures or unexpected billing for users on non-Anthropic providers.

**After:** Slug generation respects the user's configured model/provider, consistent with all other agent operations.
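The shape of the fix can be sketched as follows. This is a minimal, self-contained illustration, not the repository's actual code: `resolveDefaultModelForAgent` and `runEmbeddedPiAgent` are stubbed here with assumed behavior inferred from the PR description, and the config shape follows the `agents.defaults.model.primary` key shown above.

```typescript
interface ResolvedModel {
  provider?: string;
  model?: string;
}

interface AgentConfig {
  agents?: { defaults?: { model?: { primary?: string } } };
}

// Stand-in for resolveDefaultModelForAgent(): splits a "provider/model"
// ref such as "openai-codex/gpt-5.3-codex" from config (assumed shape).
function resolveDefaultModelForAgent(config: AgentConfig): ResolvedModel {
  const primary = config.agents?.defaults?.model?.primary;
  if (!primary) return {}; // empty config: caller falls back to defaults
  const slash = primary.indexOf("/");
  if (slash === -1) return { model: primary };
  return { provider: primary.slice(0, slash), model: primary.slice(slash + 1) };
}

// Stand-in for the embedded runner; defaults mirror src/agents/defaults.ts.
const DEFAULT_PROVIDER = "anthropic";
const DEFAULT_MODEL = "claude-opus-4-6";

function runEmbeddedPiAgent(opts: { provider?: string; model?: string; prompt: string }) {
  return {
    provider: opts.provider ?? DEFAULT_PROVIDER,
    model: opts.model ?? DEFAULT_MODEL,
  };
}

// After the fix: resolve the configured model first, then pass it through
// instead of calling runEmbeddedPiAgent with provider/model undefined.
function generateSlugViaLLM(config: AgentConfig, prompt: string) {
  const { provider, model } = resolveDefaultModelForAgent(config);
  return runEmbeddedPiAgent({ provider, model, prompt });
}

const used = generateSlugViaLLM(
  { agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } } },
  "Summarize this session as a slug",
);
console.log(used.provider, used.model); // openai-codex gpt-5.3-codex
```

With an empty config, the stub returns `{}` and the runner falls back to `anthropic/claude-opus-4-6`, matching the fallback behavior the third regression test covers.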
