#20426: feat: make llm_input/llm_output modifying hooks for middleware patterns
Labels: agents · size: S
Cluster: Plugin and Hook Enhancements
## Summary
Converts `llm_input` and `llm_output` from void (observational) hooks to **modifying hooks** that can transform data flowing to/from LLM providers. Fully backward compatible — existing plugins returning void continue to work unchanged.
## Motivation
The current plugin system has modifying hooks for messages (`message_sending`), prompts (`before_prompt_build`), and tools (`before_tool_call`), but the LLM input/output hooks are void-only. This blocks an entire class of middleware plugins.
Concrete use case: [pii-redactor](https://github.com/chandika/pii-redactor) — a client-side PII anonymization plugin that needs to:
1. Redact personal info in the prompt before it reaches the provider
2. Rehydrate tokens in the response before it reaches the user
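The two steps above could look like the following hypothetical middleware plugin. This is an illustrative sketch only, not pii-redactor's actual implementation: the event shapes mirror the hook results described in this PR, while the regex, token format, and `piiRedactor` object are assumptions.

```typescript
// Hypothetical PII-redacting middleware built on modifying llm_input/llm_output hooks.
type LlmInputEvent = { prompt: string };
type LlmOutputEvent = { assistantTexts: string[] };

const tokenMap = new Map<string, string>();
let counter = 0;

const piiRedactor = {
  llm_input(event: LlmInputEvent): { prompt?: string } {
    // Step 1: replace email addresses with opaque tokens before the provider sees them.
    const prompt = event.prompt.replace(/\S+@\S+\.\S+/g, (match) => {
      const token = `<PII_${counter++}>`;
      tokenMap.set(token, match);
      return token;
    });
    return { prompt };
  },
  llm_output(event: LlmOutputEvent): { assistantTexts?: string[] } {
    // Step 2: rehydrate tokens in the response before it reaches the user.
    const assistantTexts = event.assistantTexts.map((text) =>
      [...tokenMap.entries()].reduce(
        (acc, [token, original]) => acc.split(token).join(original),
        text,
      ),
    );
    return { assistantTexts };
  },
};
```

With void-only hooks this plugin could observe the traffic but never change it; returning a result from each hook is what makes the redact/rehydrate round trip possible.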
See #20416 for the full feature request and discussion.
## Changes
### `src/plugins/types.ts`
- Add `PluginHookLlmInputResult` type: `{ prompt?, systemPrompt? }`
- Add `PluginHookLlmOutputResult` type: `{ assistantTexts? }`
- Update `PluginHookHandlerMap` so `llm_input`/`llm_output` handlers can return results
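A sketch of those additions, with field types inferred from the summaries above; the event parameter shapes and the exact form of the existing `PluginHookHandlerMap` are assumptions.

```typescript
// Proposed result types: every field optional, so a handler can modify
// only what it cares about (or return void for the old observational behavior).
export type PluginHookLlmInputResult = {
  prompt?: string;
  systemPrompt?: string;
};

export type PluginHookLlmOutputResult = {
  assistantTexts?: string[];
};

// Handler map update: llm_input/llm_output may now return a result,
// a Promise of one, or void (backward compatible).
export type PluginHookHandlerMap = {
  llm_input: (event: {
    prompt: string;
    systemPrompt?: string;
  }) => PluginHookLlmInputResult | void | Promise<PluginHookLlmInputResult | void>;
  llm_output: (event: {
    assistantTexts: string[];
  }) => PluginHookLlmOutputResult | void | Promise<PluginHookLlmOutputResult | void>;
  // ...other hook entries unchanged
};
```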
### `src/plugins/hooks.ts`
- Import new result types
- Switch `runLlmInput` from `runVoidHook` → `runModifyingHook` with merge function
- Switch `runLlmOutput` from `runVoidHook` → `runModifyingHook` with merge function
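A minimal sketch of how a modifying-hook runner with a merge function can fold results from multiple plugins, including the no-op handling of void returns described under backward compatibility. The actual `runModifyingHook` signature in `hooks.ts` may differ; this just demonstrates the mechanism.

```typescript
type HookResult<T> = T | void | Promise<T | void>;

// Run each handler in order, merging any non-void results into an accumulator.
// If every handler returns void, the overall result is undefined (no-op).
async function runModifyingHook<E, R>(
  handlers: Array<(event: E) => HookResult<R>>,
  event: E,
  merge: (acc: R, next: R) => R,
): Promise<R | undefined> {
  let acc: R | undefined;
  for (const handler of handlers) {
    const result = (await handler(event)) as R | undefined; // void -> undefined at runtime
    if (result !== undefined) {
      acc = acc === undefined ? result : merge(acc, result);
    }
  }
  return acc;
}

// Example merge for llm_input: later plugins win field-by-field.
const mergeLlmInput = (
  acc: { prompt?: string; systemPrompt?: string },
  next: { prompt?: string; systemPrompt?: string },
) => ({ ...acc, ...next });
```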
### `src/agents/pi-embedded-runner/run/attempt.ts`
- `llm_input`: Await the hook result, apply `prompt` modification if returned
- `llm_output`: Await the hook result, apply `assistantTexts` modification if returned
- Change `assistantTexts` binding from `const` to `let` to allow mutation
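The call-site wiring could be sketched as follows. The function shape, parameter names, and the `hooks` object are illustrative assumptions, not the actual `attempt.ts` code.

```typescript
async function runAttempt(
  prompt: string,
  callLlm: (prompt: string) => Promise<string[]>,
  hooks: {
    runLlmInput: (e: { prompt: string }) => Promise<{ prompt?: string } | undefined>;
    runLlmOutput: (e: {
      assistantTexts: string[];
    }) => Promise<{ assistantTexts?: string[] } | undefined>;
  },
): Promise<string[]> {
  // Previously fire-and-forget; now awaited so a returned prompt
  // modification lands before anything is sent to the provider.
  const inputResult = await hooks.runLlmInput({ prompt });
  if (inputResult?.prompt !== undefined) prompt = inputResult.prompt;

  // `let`, not `const`, so the hook result can replace it.
  let assistantTexts = await callLlm(prompt);
  const outputResult = await hooks.runLlmOutput({ assistantTexts });
  if (outputResult?.assistantTexts !== undefined) {
    assistantTexts = outputResult.assistantTexts;
  }
  return assistantTexts;
}
```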
### `src/plugins/wired-hooks-llm.test.ts`
- Add tests for modifying behavior (prompt rewrite, assistantTexts rewrite)
- Add backward compatibility tests (void returns → undefined result)
## Backward compatibility
Existing `llm_input`/`llm_output` hooks that return `void` continue to work. The `runModifyingHook` function already handles void returns gracefully (no-op merge, returns `undefined`).
**Behavioral change: `llm_input` is now `await`ed instead of fire-and-forget.** This is intentional: a modifying hook must complete before the prompt is sent, so slow async work in an `llm_input` handler now adds latency to the request path. The same applies to `llm_output`.
## Use cases this unlocks
- **PII redaction** — strip personal info before any provider sees it
- **Content filtering** — enforce topic boundaries, block harmful content
- **Guardrails** — provider-agnostic safety (vs provider-specific like Bedrock #9748)
- **Translation** — rewrite prompts/responses for multilingual support
- **Token optimization** — compress/summarize before sending
- **Audit** — capture exact sent vs received payloads
Closes #20416
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Converts `llm_input` and `llm_output` from void (observational) hooks to modifying hooks, allowing plugins to transform data flowing to/from LLM providers. The hook runner changes in `hooks.ts` and type additions in `types.ts` are clean and follow existing patterns. Backward compatibility is maintained — void-returning handlers continue to work.
- **Issue found**: The `systemPrompt` field in `PluginHookLlmInputResult` is declared as modifiable and properly merged in the hook runner, but never applied in `attempt.ts`. A plugin returning `{ systemPrompt: "..." }` will have its modification silently ignored.
- The `prompt` modification from `llm_input` and `assistantTexts` modification from `llm_output` are correctly wired up.
- Behavioral change: both hooks are now `await`ed instead of fire-and-forget, which is intentional but adds latency to the critical path. This is documented in the PR description.
- Test coverage is good for the hook runner layer, with both modifying and backward-compatible void cases tested.
<h3>Confidence Score: 3/5</h3>
- Mostly safe but contains a dead code path that creates a misleading plugin API
- The core mechanism works correctly for prompt and assistantTexts modification, and backward compatibility is preserved. However, the systemPrompt field in PluginHookLlmInputResult is advertised as functional but never applied in the call site, which would mislead plugin developers. The hook runner and type layers are clean. Score of 3 reflects the functional gap between the declared API and actual behavior.
- src/agents/pi-embedded-runner/run/attempt.ts — the systemPrompt result from llm_input hook is never applied
<sub>Last reviewed commit: 6bfd447</sub>
<!-- /greptile_comment -->
## Most Similar PRs
- #20802: feat(hooks): upgrade llm_input, llm_output, and after_tool_call to ... (by eilon-onyx · 2026-02-19, 87.4%)
- #11124: feat(plugins): add before_llm_request hook for custom LLM headers (by johnlanni · 2026-02-07, 81.7%)
- #14602: fix(plugins): hook systemPrompt gets collected then thrown away (#1... (by yinghaosang · 2026-02-12, 79.5%)
- #22624: feat(plugins): add before_context_send hook and model routing via b... (by davidrudduck · 2026-02-21, 78.7%)
- #6017: feat(hooks): add systemPrompt and tools to before_agent_start event (by yajatns · 2026-02-01, 78.1%)
- #23559: feat(plugins): add before_context_send hook and model routing via b... (by davidrudduck · 2026-02-22, 77.7%)
- #20067: feat(plugins): add before_agent_reply hook for message interception (by JoshuaLelon · 2026-02-18, 77.3%)
- #11921: feat(hooks): support systemPrompt injection in before_agent_start hook (by jungdaesuh · 2026-02-08, 77.1%)
- #6405: feat(security): Add HTTP API security hooks for plugin scanning (by masterfung · 2026-02-01, 75.8%)
- #11732: feat(plugins): add injectMessages to before_agent_start hook (by antra-tess · 2026-02-08, 75.4%)