#19865: memory: add Ollama embedding provider
agents · size: S · Cluster: Memory Database Enhancements
## Summary
Adds an Ollama embeddings provider for memory search.
## Changes
- New provider: `ollama` for `agents.defaults.memorySearch.provider`
- Calls `POST /api/embeddings` on Ollama (default `http://127.0.0.1:11434`)
- Default model: `nomic-embed-text`
- Config schema updated (types + zod)
- Unit test added for request + normalization
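
For reference, a minimal sketch of the request-plus-normalization flow described above. The function and option names (`embedWithOllama`, `OllamaOptions`, `l2Normalize`) are illustrative, not the PR's actual implementation; only the endpoint, defaults, and response shape come from Ollama's public API and this PR:

```typescript
// Sketch only: names here are illustrative, not the real provider code.
interface OllamaOptions {
  baseUrl?: string; // PR default: http://127.0.0.1:11434
  model?: string;   // PR default: nomic-embed-text
}

// L2-normalize a vector so cosine similarity reduces to a dot product.
function l2Normalize(vec: number[]): number[] {
  const norm = Math.sqrt(vec.reduce((s, x) => s + x * x, 0));
  return norm === 0 ? vec : vec.map((x) => x / norm);
}

async function embedWithOllama(
  text: string,
  { baseUrl = "http://127.0.0.1:11434", model = "nomic-embed-text" }: OllamaOptions = {},
): Promise<number[]> {
  // Legacy endpoint: one prompt per call, returns { "embedding": number[] }.
  const res = await fetch(`${baseUrl}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: text }),
  });
  if (!res.ok) {
    throw new Error(`Ollama embeddings request failed: ${res.status} ${res.statusText}`);
  }
  const data = (await res.json()) as { embedding: number[] };
  return l2Normalize(data.embedding); // nomic-embed-text yields 768 dimensions
}
```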
## Verification
- Local run with `provider=ollama`: memory indexing succeeds
- Embedding dimension observed: 768 (as expected for `nomic-embed-text`)
## Notes
This enables running embeddings fully locally when Ollama is available.
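
Enabling the provider might look like the following. Only the `agents.defaults.memorySearch.provider` key is confirmed by this PR; the `model` and `baseUrl` key names are assumptions for illustration (check the updated config schema):

```jsonc
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "ollama",
        // Key names below are illustrative, not confirmed by this PR.
        "model": "nomic-embed-text",
        "baseUrl": "http://127.0.0.1:11434"
      }
    }
  }
}
```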
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Adds Ollama as a new embedding provider for memory search, enabling local-first embedding via Ollama's HTTP API (default: `nomic-embed-text` on `http://127.0.0.1:11434`). The implementation follows the existing provider pattern (provider + client, normalization, error handling) and updates all config surfaces (types, zod schema, help text).
- **Auto-selection issue**: Adding Ollama to `REMOTE_EMBEDDING_PROVIDER_IDS` causes it to be silently selected in `auto` mode when no other API keys are configured, since its creation never fails. This will produce confusing network errors if Ollama isn't running, instead of the previous clear "no API key" messages.
- Uses the older `/api/embeddings` endpoint instead of the newer `/api/embed` which supports native batching.
- Test mock cleanup doesn't follow established patterns from the existing Voyage test.
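
For context on the endpoint note above, the two Ollama APIs differ in request and response shape. This sketch reflects Ollama's public HTTP API, not this PR's code:

```typescript
// Legacy endpoint used by this PR: one prompt per call.
// POST /api/embeddings → { "embedding": [ ... ] }
const legacyRequest = { model: "nomic-embed-text", prompt: "hello" };

// Newer endpoint with native batching: input is a string or string[].
// POST /api/embed → { "embeddings": [[ ... ], [ ... ]] }
const batchRequest = { model: "nomic-embed-text", input: ["hello", "world"] };

// Embedding N texts via /api/embeddings takes N HTTP round-trips;
// /api/embed handles them in a single call.
```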
<h3>Confidence Score: 2/5</h3>
- The core Ollama provider implementation is sound, but adding it to `REMOTE_EMBEDDING_PROVIDER_IDS` silently changes auto-selection: users without Ollama running will hit confusing network errors in the default `auto` mode instead of the previous clear "no API key" messages, a silent behavioral regression.
- Pay close attention to `src/memory/embeddings.ts` (auto-selection logic) and `src/memory/embeddings-ollama.ts` (API endpoint choice).
<sub>Last reviewed commit: 260977c</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
## Most Similar PRs
- #17030: feat(memory-lancedb): support Ollama and OpenAI-compatible embeddin... · by nightfullstar · 2026-02-15 · 82.8%
- #19006: feat(memory-lancedb): OpenAI-compatible baseUrl + Ollama provider +... · by martinsen-assistant · 2026-02-17 · 78.1%
- #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... · by alltomatos · 2026-02-02 · 77.3%
- #19945: memory: gracefully disable hybrid keyword search when fts5 unavailable · by nico-hoff · 2026-02-18 · 76.6%
- #20771: feat(memory-lancedb): support custom OpenAI-compatible embedding pr... · by marcodelpin · 2026-02-19 · 75.7%
- #4782: fix: Auto-discover Ollama models without requiring explicit API key · by spiceoogway · 2026-01-30 · 74.1%
- #10550: feat(memory-lancedb): local embeddings via node-llama-cpp · by namick · 2026-02-06 · 72.8%
- #7432: Comprehensive Ollama Support PR · by charlieduzstuf · 2026-02-02 · 71.2%
- #20191: feat(memory): add Amazon Bedrock embedding provider (Nova 2) · by gabrielkoo · 2026-02-18 · 71.0%
- #21620: Add DeepSeek embeddings provider for memory search · by YoungjuneKwon · 2026-02-20 · 70.7%