#11226: Fix system property assignment in cache-trace.ts
Labels: agents, stale, size: XS
There is a discrepancy between the `session:loaded` and `stream:context` stages that makes it appear as though no context is sent to the LLM:
```json
{
"stage": "session:loaded",
"ts": "2026-02-07T13:08:09.286Z",
"provider": "gamecuboid",
"modelApi": "openai-responses",
"modelId": "gpt-oss:esther",
"systemType": "string",
"systemLen": 36436,
"hasProject": true,
"hasSoul": true,
"systemHead": "You are a personal assistant running inside OpenClaw.\n## Tooling\nTool availability (filtered by policy):\nTool names are case-sensitive. Call tools exactly as li"
}
{
"stage": "stream:context",
"ts": "2026-02-07T13:08:09.333Z",
"provider": "gamecuboid",
"modelApi": "openai-responses",
"modelId": "gpt-oss:esther",
"systemType": "null",
"systemLen": 0,
"hasProject": false,
"hasSoul": false,
"systemHead": ""
}
```
This held true with both the responses and completions APIs.
After changing the trace to read `context.systemPrompt`, the output matches at both stages:
```json
{
"stage": "session:loaded",
"ts": "2026-02-07T15:02:10.242Z",
"provider": "gamecuboid",
"modelApi": "openai-responses",
"modelId": "gpt-oss:esther",
"systemType": "string",
"systemLen": 36436,
"hasProject": true,
"hasSoul": true,
"systemHead": "You are a personal assistant running inside OpenClaw.\n## Tooling\nTool availability (filtered by policy):\nTool names are case-sensitive. Call tools exactly as li"
}
{
"stage": "stream:context",
"ts": "2026-02-07T15:02:10.251Z",
"provider": "gamecuboid",
"modelApi": "openai-responses",
"modelId": "gpt-oss:esther",
"systemType": "string",
"systemLen": 36436,
"hasProject": true,
"hasSoul": true,
"systemHead": "You are a personal assistant running inside OpenClaw.\n## Tooling\nTool availability (filtered by policy):\nTool names are case-sensitive. Call tools exactly as li"
}
```
This should make debugging with the cache-trace more accurate.
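The shape of the fix can be sketched as follows. This is a hypothetical reconstruction based only on the trace fields shown above (`systemType`, `systemLen`, `systemHead`) and the description of falling back from `context.systemPrompt` to `context.system`; the actual field names and internals of `cache-trace.ts` are not shown in this PR.

```typescript
// Hypothetical context shape; real cache-trace.ts types may differ.
type StreamContext = {
  systemPrompt?: string | null; // populated when the session is loaded
  system?: string | null;       // may already be null by stream:context time
};

// Derive the system-prompt trace fields. Before the fix, the
// stream:context stage read context.system (null by then); preferring
// systemPrompt keeps the trace consistent with session:loaded.
function traceSystemFields(context: StreamContext) {
  const system = context.systemPrompt ?? context.system ?? null;
  return {
    systemType: system === null ? "null" : typeof system,
    systemLen: system?.length ?? 0,
    systemHead: system?.slice(0, 160) ?? "",
  };
}
```

With a loaded `systemPrompt` and a null `system`, this now reports `systemType: "string"` and a nonzero `systemLen` at both stages, matching the second trace above.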
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This change updates cache trace logging for the `stream:context` stage to read the system prompt from `context.systemPrompt` (falling back to `context.system`) so the recorded `system` value matches what was present when the session was loaded, improving trace accuracy when debugging LLM context issues.
<h3>Confidence Score: 2/5</h3>
- This PR should not be merged as-is because it introduces a syntax error that will break builds.
- The core functional intent is small and localized, but `cache-trace.ts` now contains an extra comma in an object literal, which is a guaranteed compile/runtime failure until fixed.
- src/agents/cache-trace.ts
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
Most Similar PRs
- #22387: fix: session_status context tracking undercount for cached providers (by 1ucian · 2026-02-21, 76.0%)
- #18997: fix: improve context overflow error messages and docs (by realhoratiobot · 2026-02-17, 74.9%)
- #22220: feat(bootstrap): cache session's bootstrap files so we don't invali... (by anisoptera · 2026-02-20, 74.0%)
- #15896: fix(memory-lancedb): capture even with injected recall context (by aelaguiz · 2026-02-14, 74.0%)
- #4999: fix(memory-flush): use contextTokens instead of totalTokens for thr... (by Farfadium · 2026-01-30, 73.9%)
- #12974: fix: intermittent (no output) reported by users (by vincentkoc · 2026-02-10, 73.7%)
- #23736: fix(system-prompt): improve prompt cache locality with unique agent ID (by mrx-arafat · 2026-02-22, 73.4%)
- #8919: Pr/memory flush improvements (by shortbus · 2026-02-04, 73.3%)
- #19412: fix(status): prefer configured contextTokens over session entry (by rafaelipuente · 2026-02-17, 73.1%)
- #23720: Feat/cli backend runtime tuning (by wanmorebot · 2026-02-22, 73.0%)