#21699: feat(agents): add opt-in OpenAI payload logging for embedded runs
Labels: agents · Size: M · Cluster: AI Provider Enhancements
## Summary
- Problem: Embedded runs lacked a dedicated persisted log for OpenAI request payloads and usage.
- Why it matters: Debugging payload shape/cost behavior is hard without reproducible traces.
- What changed:
- Added opt-in OpenAI payload logger: `src/agents/openai-payload-log.ts`
- Wired it into embedded runs: `src/agents/pi-embedded-runner/run/attempt.ts`
- Added tests: `src/agents/openai-payload-log.test.ts`
- What did NOT change (scope boundary):
- No default behavior change (feature is off by default)
- No new network calls or tool execution permissions
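The logger itself is not shown here, but its described behavior (env-gated, JSONL, local-file only) can be sketched roughly as follows. All names, event shapes, and the default file path below are assumptions for illustration, not the actual API of `src/agents/openai-payload-log.ts`:

```typescript
import { appendFileSync } from "fs";
import { createHash } from "crypto";

// Gate on the documented env var; logging is off unless explicitly enabled.
const LOG_ENABLED = process.env.OPENCLAW_OPENAI_PAYLOAD_LOG === "true";
// Path is overridable via the documented env var; default name is hypothetical.
const LOG_FILE =
  process.env.OPENCLAW_OPENAI_PAYLOAD_LOG_FILE ?? "openai-payload.jsonl";

// Hypothetical event union: a request event (with payload digest) or a usage event.
type PayloadEvent =
  | { type: "request"; digest: string; payload: unknown }
  | { type: "usage"; usage: { prompt_tokens: number; completion_tokens: number } };

// SHA-256 digest of the serialized payload, for correlating entries without rereading them.
function digestPayload(payload: unknown): string {
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

// Append one JSON line per event; a no-op when the feature flag is unset.
function logEvent(event: PayloadEvent): void {
  if (!LOG_ENABLED) return;
  appendFileSync(LOG_FILE, JSON.stringify({ ts: Date.now(), ...event }) + "\n");
}
```

The real module reportedly uses a queued async file writer rather than synchronous appends; the synchronous form above is only for brevity.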
- AI-assisted: Yes (human-reviewed).
- Testing level: Lightly tested.
## Change Type (select all)
- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Docs
- [ ] Security hardening
- [ ] Chore/infra
## Scope (select all touched areas)
- [x] Gateway / orchestration
- [ ] Skills / tool execution
- [ ] Auth / tokens
- [ ] Memory / storage
- [x] Integrations
- [ ] API / contracts
- [ ] UI / DX
- [ ] CI/CD / infra
## Linked Issue/PR
- Closes #
- Related #
## User-visible / Behavior Changes
None by default.
When `OPENCLAW_OPENAI_PAYLOAD_LOG=true`, OpenAI request/usage events are written to a JSONL file (path overridable via `OPENCLAW_OPENAI_PAYLOAD_LOG_FILE`).
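A plausible shape for the logged JSONL lines, one JSON object per event (field names here are illustrative assumptions, not confirmed output):

```json
{"ts": 1760000000000, "type": "request", "digest": "3f8a…", "model": "gpt-4o-mini"}
{"ts": 1760000000123, "type": "usage", "prompt_tokens": 812, "completion_tokens": 64}
```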
## Security Impact (required)
- New permissions/capabilities? (`No`)
- Secrets/tokens handling changed? (`No`)
- New/changed network calls? (`No`)
- Command/tool execution surface changed? (`No`)
- Data access scope changed? (`Yes`)
- If any `Yes`, explain risk + mitigation:
  - Risk: prompt payload content can now be persisted to local disk when logging is enabled.
  - Mitigation: the feature is explicitly opt-in (off by default), OpenAI-scoped, writes to a local file only, and the path is configurable.
## Repro + Verification
### Environment
- OS: Amazon Linux 2023 (x86_64)
- Runtime/container: Node.js v22
- Model/provider: OpenAI (`openai*` API)
- Integration/channel (if any): N/A
- Relevant config (redacted): `OPENCLAW_OPENAI_PAYLOAD_LOG=true`
### Steps
1. Enable OpenAI payload logging env var.
2. Run an embedded OpenAI attempt.
3. Confirm JSONL includes `request` and `usage` events.
4. Run a non-OpenAI attempt and confirm no OpenAI payload log entries are written.
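The steps above can be sketched as shell commands. The entrypoint for running an embedded attempt is project-specific and omitted; only the env toggles and the log check are shown:

```shell
# Step 1: enable the logger; the file path override is optional.
export OPENCLAW_OPENAI_PAYLOAD_LOG=true
export OPENCLAW_OPENAI_PAYLOAD_LOG_FILE=/tmp/openai-payload.jsonl

# Step 2: run an embedded OpenAI attempt via your usual entrypoint (command omitted).

# Step 3: confirm request and usage events landed in the JSONL log.
grep -c '"type"' /tmp/openai-payload.jsonl
```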
### Expected
- OpenAI runs log request + usage.
- Non-OpenAI runs do not log.
- Disabled env means no logging.
### Actual
- Matches expected behavior.
## Evidence
Attach at least one:
- [ ] Failing test/log before + passing after
- [x] Trace/log snippets
- [ ] Screenshot/recording
- [ ] Perf numbers (if relevant)
## Human Verification (required)
What you personally verified (not just CI), and how:
- Verified scenarios:
- Reviewed logger behavior and embedded-run wiring end-to-end.
- Confirmed OpenAI-only gating logic and env toggle behavior.
- Edge cases checked:
- Non-OpenAI no-op path.
- Disabled-by-default path.
- What you did **not** verify:
- Full local test execution in this sandbox clone (dependencies not installed in this environment).
## Compatibility / Migration
- Backward compatible? (`Yes`)
- Config/env changes? (`No` required changes; two optional new env vars only)
- Migration needed? (`No`)
- If yes, exact upgrade steps:
## Failure Recovery (if this breaks)
- How to disable/revert this change quickly:
- Unset/disable `OPENCLAW_OPENAI_PAYLOAD_LOG`
- Revert commit `bebe46a34050dcd6c30b1c5901e38d9a46882634`
- Files/config to restore:
- `src/agents/openai-payload-log.ts`
- `src/agents/openai-payload-log.test.ts`
- `src/agents/pi-embedded-runner/run/attempt.ts`
- Known bad symptoms reviewers should watch for:
- Unexpected growth of payload log file
- Sensitive prompt content persisted when flag is enabled
## Risks and Mitigations
- Risk: Sensitive prompt content may be written to disk when enabled.
- Mitigation: opt-in only, disabled by default, local path control.
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Added opt-in OpenAI payload logging for embedded agent runs, mirroring the existing Anthropic payload logger pattern. The implementation writes JSONL logs containing request payloads (with a SHA-256 digest) and usage statistics. Logging is disabled by default and activated via the `OPENCLAW_OPENAI_PAYLOAD_LOG` environment variable.
**Key changes:**
- New `openai-payload-log.ts` module with logger factory following established patterns
- Integration in `attempt.ts` using `wrapStreamFn` to intercept OpenAI API calls
- Test coverage for both OpenAI and non-OpenAI scenarios
- Uses queued file writer for async write handling
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge with minimal risk
- Implementation follows the established pattern from `anthropic-payload-log.ts` exactly, is off by default and strictly opt-in, includes test coverage for both the OpenAI and non-OpenAI paths, and has no impact when disabled. Code is clean, well-structured, and matches existing conventions.
- No files require special attention
<sub>Last reviewed commit: bebe46a</sub>
<!-- /greptile_comment -->