
#19298: feat(tools): add Brave LLM Context API mode for web_search

by RoccoFortuna · open · 2026-02-17 17:01
Labels: `agents`, `size: L`
> **Note:** Replaces #16312, which was closed due to a branch push issue (OAuth scope for workflow files). Same changes, clean branch.

## Summary

- **Problem:** Brave offers an LLM Context API (`/res/v1/llm/context`) that returns pre-extracted, relevance-scored web content optimized for LLM consumption, but OpenClaw only supports the standard web search endpoint.
- **Why it matters:** The LLM Context API returns full text snippets, tables, and code blocks instead of just titles/descriptions, which is significantly better for agent grounding.
- **What changed:** Added `brave.mode` and `brave.llmContext` config, a new `runBraveLlmContextSearch()` function, and llm-context branches in `runWebSearch()`/`createWebSearchTool()`.
- **What did NOT change:** Standard Brave web search, Perplexity, and Grok providers are untouched. No `brave` config block = existing behavior, zero breaking changes.

AI-assisted (Claude Code). Fully tested locally (unit, integration, and live API). I understand what the code does. This is my first PR on the repo; any feedback on code style, structure, or approach is very welcome. Happy to iterate!

## Change Type (select all)

- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Docs
- [ ] Security hardening
- [ ] Chore/infra

## Scope (select all touched areas)

- [ ] Gateway / orchestration
- [x] Skills / tool execution
- [ ] Auth / tokens
- [ ] Memory / storage
- [ ] Integrations
- [ ] API / contracts
- [ ] UI / DX
- [ ] CI/CD / infra

## Linked Issue/PR

- Closes #14992

## User-visible / Behavior Changes

- New config keys: `tools.web.search.brave.mode` (`"web"` | `"llm-context"`) and `tools.web.search.brave.llmContext.*` (`maxTokens`, `maxUrls`, `thresholdMode`, `maxSnippets`, `maxTokensPerUrl`, `maxSnippetsPerUrl`).
- The `freshness` parameter returns an error when used in llm-context mode (unsupported by the endpoint).
- The result format in llm-context mode includes `content` (joined snippets) instead of `description`, plus `mode` and `sourceCount` fields.
- Switching modes requires a gateway restart (consistent with how all provider configs are resolved).

## Security Impact (required)

- New permissions/capabilities? No
- Secrets/tokens handling changed? No; reuses existing `BRAVE_API_KEY` / `tools.web.search.apiKey`
- New/changed network calls? Yes; new GET requests to `https://api.search.brave.com/res/v1/llm/context`
- Command/tool execution surface changed? No; same `web_search` tool, same parameters
- Data access scope changed? No
- **Risk + mitigation:** The new endpoint is on the same Brave API domain, uses the same auth header (`X-Subscription-Token`), and all response content is wrapped with `wrapWebContent()` (matching the existing security pattern for titles and snippet content).

## Repro + Verification

### Environment

- OS: macOS
- Runtime: Node 20 (pnpm)
- Config: `tools.web.search.provider: "brave"` with `brave.mode: "llm-context"`

### Steps

1. Set `tools.web.search.brave.mode: "llm-context"` in config
2. Invoke the `web_search` tool with a query
3. Observe that the response contains `mode: "llm-context"` and `content` fields with joined snippets

### Expected

- LLM Context API is called; results contain full text snippets

### Actual

- Matches expected

## Evidence

- [x] Failing test/log before + passing after
- [x] Trace/log snippets
- [x] Live API verification

```
pnpm vitest run --config vitest.e2e.config.ts src/agents/tools/web-search.e2e.test.ts
  ✓ src/agents/tools/web-search.e2e.test.ts (37 tests) 7ms
  Test Files  1 passed (1)
       Tests  37 passed (37)

pnpm vitest run --config vitest.e2e.config.ts src/agents/tools/web-tools.enabled-defaults.e2e.test.ts
  ✓ src/agents/tools/web-tools.enabled-defaults.e2e.test.ts (27 tests) 13ms
  Test Files  1 passed (1)
       Tests  27 passed (27)

pnpm vitest run src/config/config-misc.test.ts
  ✓ src/config/config-misc.test.ts (31 tests) 22ms
  Test Files  1 passed (1)
       Tests  31 passed (31)

pnpm check (format + tsgo + lint): all pass
(the tsgo error in discord/monitor/gateway-plugin.ts is pre-existing)
```

Live API tested with a Brave Search subscription key:

- Web mode: standard snippets, freshness accepted
- LLM-context mode: full extracted content, freshness correctly rejected
- `llmContext.maxTokens` tuning: visibly shorter output with lower values

## Human Verification (required)

- **Verified scenarios:** Config parsing with valid/invalid brave config, resolver functions with undefined/empty/full config, cache key differentiation between web and llm-context modes, live API calls in both modes.
- **Edge cases checked:** Missing brave config block (defaults to web), freshness rejection in llm-context mode, strict-mode Zod validation rejecting unknown keys, `maxTokens` range validation (below min / above max), mode switching via config + restart.
- **What I did NOT verify:** All permutations of `llmContext` sub-params against the live API (only tested `maxTokens`).

## Compatibility / Migration

- Backward compatible? Yes
- Config/env changes? Yes; new optional config keys under `tools.web.search.brave.*`
- Migration needed? No; no config = existing behavior unchanged

## Failure Recovery (if this breaks)

- **How to disable/revert:** Remove the `tools.web.search.brave` config block, or set `brave.mode: "web"` to revert to standard Brave web search.
- **Files/config to restore:** Only user config needs changing.
- **Bad symptoms:** If the LLM Context API returns unexpected response shapes, the results array will be empty (graceful degradation via optional chaining and fallback defaults).

## Risks and Mitigations

- **Risk:** The Brave LLM Context API response format changes in the future.
- **Mitigation:** Response parsing uses defensive optional chaining and defaults; empty/missing fields produce empty results rather than errors.

## Configuration

```yaml
tools:
  web:
    search:
      provider: brave
      brave:
        mode: llm-context        # or "web" (the default when omitted)
        llmContext:
          maxTokens: 16384       # 1024-32768, default 8192
          maxUrls: 10            # 1-50, default 20
          thresholdMode: strict  # strict | balanced | lenient | disabled
          maxSnippets: 50        # 1-100
          maxTokensPerUrl: 4096  # 512-8192
          maxSnippetsPerUrl: 5   # 1-100
```

### Greptile Summary

Adds Brave LLM Context API mode to the `web_search` tool, providing pre-extracted, LLM-optimized content alongside the existing standard web search. The implementation follows established patterns for provider-specific config (similar to Perplexity/Grok), properly validates config with Zod schemas, and includes comprehensive test coverage (unit, integration, and config validation tests).
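The Failure Recovery notes above describe the intended graceful degradation: if the LLM Context API returns an unexpected response shape, optional chaining and fallback defaults should yield an empty results array rather than an error. The PR description does not show the actual implementation or the API's schema, so the following is only a sketch of that defensive-parsing style; every type and field name here is an assumption for illustration.

```typescript
// Hypothetical shape of an LLM Context API response. These field names
// are illustrative assumptions, not the documented Brave schema.
interface LlmContextSource {
  url?: string;
  title?: string;
  snippets?: string[];
}

interface LlmContextResponse {
  sources?: LlmContextSource[];
}

interface LlmContextResult {
  url: string;
  title: string;
  content: string; // joined snippets, as described in the PR
}

// Defensive parsing: missing or malformed fields degrade to an empty
// results array instead of throwing, mirroring the behavior the PR
// claims for unexpected response shapes.
function parseLlmContextResponse(body: unknown): LlmContextResult[] {
  const sources = (body as LlmContextResponse)?.sources ?? [];
  return sources
    .filter((s) => Array.isArray(s.snippets) && s.snippets.length > 0)
    .map((s) => ({
      url: s.url ?? "",
      title: s.title ?? "",
      content: (s.snippets ?? []).join("\n\n"),
    }));
}
```

In the real code, a function like this would sit inside `runBraveLlmContextSearch()` after the GET request, with each `title`/`content` value additionally passed through `wrapWebContent()` before it reaches the model.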
**Key changes:**

- New `brave.mode` config (`"web"` | `"llm-context"`) with backward compatibility (defaults to `"web"`)
- `runBraveLlmContextSearch()` mirrors the standard Brave search structure, with proper error handling and content wrapping
- Cache keys differentiate between web and llm-context modes to prevent cross-mode pollution
- The `freshness` parameter is correctly rejected in llm-context mode (the API doesn't support it)
- All external content is wrapped with `wrapWebContent()` for security consistency
- Full test coverage, including edge cases (missing config, strict schema validation, API response parsing)

### Confidence Score: 5/5

- This PR is safe to merge with minimal risk: it is a backward-compatible feature addition with comprehensive test coverage that follows all existing patterns.
- The score reflects a thorough implementation: it follows existing provider config patterns (Perplexity/Grok), includes unit/integration/config tests covering edge cases, uses proper error handling with `readResponseText`, validates inputs with Zod schemas (strict mode), wraps all external content for security, maintains backward compatibility (no config = existing behavior), and differentiates cache keys between modes to prevent pollution.
- No files require special attention; all changes follow established patterns and include appropriate test coverage.

<sub>Last reviewed commit: 0cf4da6</sub>
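Both the PR author and the review call out that cache keys distinguish web mode from llm-context mode so results from one mode are never served for the other. The actual key format in the repo is not shown anywhere in this PR description, so the function below is purely a hypothetical sketch of the idea; the name `buildSearchCacheKey` and the key layout are assumptions.

```typescript
type BraveMode = "web" | "llm-context";

// Hypothetical cache-key builder. Including the mode in the key is the
// point: identical queries in "web" and "llm-context" modes must map to
// different cache entries, or one mode's results would pollute the other.
function buildSearchCacheKey(
  provider: string,
  mode: BraveMode,
  query: string,
  count: number,
): string {
  // Normalize the query so trivially different spellings of the same
  // search do not create duplicate cache entries.
  const normalized = query.trim().toLowerCase();
  return `${provider}:${mode}:${count}:${normalized}`;
}
```

With a scheme like this, switching `brave.mode` in config (and restarting the gateway, as the PR requires) naturally starts from a cold cache for the new mode instead of replaying stale results from the old one.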
