#17030: feat(memory-lancedb): support Ollama and OpenAI-compatible embedding endpoints
extensions: memory-lancedb
stale
size: S
Cluster: Memory Database Enhancements
## Summary
The memory-lancedb plugin previously required an OpenAI API key and could only call OpenAI’s embeddings API. This PR adds support for **Ollama** and any other service that exposes the OpenAI-compatible `/v1/embeddings` API (e.g. local or self-hosted endpoints), so long-term memory can run without an OpenAI key.
## Scope
- **Config** (`config.ts`): `embedding.apiKey` is now optional when `embedding.baseUrl` is set (e.g. Ollama). Added `embedding.baseUrl` (default `https://api.openai.com/v1`) and `embedding.dimensions` (for models not in the built-in list). Built-in dimensions extended with `nomic-embed-text` (768) and `mxbai-embed-large` (1024).
- **Embeddings** (`index.ts`): Replaced the hardcoded `openai` client with a small `fetch`-based client that uses `baseUrl` and only sends `Authorization` when `apiKey` is set.
- **Tests** (`index.test.ts`): Updated “missing apiKey” assertion to the new error message; added test that Ollama-style config (baseUrl + model, no apiKey) parses correctly.
- **Docs**: Added `OLLAMA-SUPPORT.md` describing the change and how to use the plugin with Ollama.
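The `fetch`-based client described in the Scope above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code: the names `EmbeddingConfig`, `buildEmbeddingsRequest`, and `embedText` are assumptions, and the real implementation in `index.ts` may differ in detail.

```typescript
// Minimal sketch of a fetch-based embeddings client that works with any
// OpenAI-compatible /v1/embeddings endpoint. Names here are illustrative.

interface EmbeddingConfig {
  baseUrl?: string; // e.g. "http://localhost:11434/v1" for Ollama
  apiKey?: string;  // optional when baseUrl points at a local endpoint
  model: string;
}

const DEFAULT_BASE_URL = "https://api.openai.com/v1";

// Build the request separately so the auth logic can be unit-tested
// without a live server.
function buildEmbeddingsRequest(cfg: EmbeddingConfig, input: string) {
  const base = (cfg.baseUrl ?? DEFAULT_BASE_URL).replace(/\/$/, "");
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  // Only send Authorization when an API key is configured.
  if (cfg.apiKey) headers["Authorization"] = `Bearer ${cfg.apiKey}`;
  return {
    url: `${base}/embeddings`,
    headers,
    body: JSON.stringify({ model: cfg.model, input }),
  };
}

async function embedText(cfg: EmbeddingConfig, input: string): Promise<number[]> {
  const req = buildEmbeddingsRequest(cfg, input);
  const res = await fetch(req.url, { method: "POST", headers: req.headers, body: req.body });
  if (!res.ok) throw new Error(`embeddings request failed: ${res.status}`);
  const json = (await res.json()) as { data: { embedding: number[] }[] };
  return json.data[0].embedding;
}
```

Splitting request construction from the network call keeps the "omit `Authorization` when no key is set" behavior trivially testable.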
## User-facing changes
- **New config options**: `plugins.entries.memory-lancedb.config.embedding.baseUrl` (e.g. `http://localhost:11434/v1` for Ollama), `embedding.dimensions` (required for unknown models).
- **Optional API key**: When `baseUrl` is set to a local/OpenAI-compatible endpoint (e.g. Ollama), `embedding.apiKey` can be omitted.
- **Existing OpenAI usage**: Unchanged. Omit `baseUrl` (or leave default) and set `embedding.apiKey` as before.
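The validation rules above can be summarized in a small sketch. This is a hypothetical illustration of the behavior described in this PR, not the actual `config.ts` code; the function and type names are assumptions, and only the dimensions for `nomic-embed-text` (768) and `mxbai-embed-large` (1024) come from the PR itself.

```typescript
// Illustrative config validation matching the rules described above:
// apiKey is optional when baseUrl is set; dimensions are required for
// models outside the built-in table.

const KNOWN_DIMENSIONS: Record<string, number> = {
  "nomic-embed-text": 768,    // added in this PR
  "mxbai-embed-large": 1024,  // added in this PR
};

interface RawEmbeddingConfig {
  model: string;
  apiKey?: string;
  baseUrl?: string;
  dimensions?: number;
}

function validateEmbeddingConfig(cfg: RawEmbeddingConfig) {
  // apiKey is only mandatory when talking to the default OpenAI endpoint.
  if (!cfg.apiKey && !cfg.baseUrl) {
    throw new Error("embedding.apiKey is required unless embedding.baseUrl is set");
  }
  // Models outside the built-in list must declare dimensions explicitly.
  const dimensions = cfg.dimensions ?? KNOWN_DIMENSIONS[cfg.model];
  if (dimensions === undefined) {
    throw new Error(`unknown model ${cfg.model}: set embedding.dimensions explicitly`);
  }
  return { ...cfg, dimensions };
}
```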
## Testing
- `pnpm test -- extensions/memory-lancedb/index.test.ts` — all tests pass (including new Ollama config test).
- Lint/format: `pnpm check` (no new issues).
## Example (Ollama)
```json
{
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "config": {
          "embedding": {
            "baseUrl": "http://localhost:11434/v1",
            "model": "nomic-embed-text"
          },
          "dbPath": "~/.openclaw/memory/lancedb",
          "autoCapture": true,
          "autoRecall": true
        }
      }
    },
    "slots": { "memory": "memory-lancedb" }
  }
}
```
No `apiKey` required; run `ollama pull nomic-embed-text` and ensure Ollama is serving.
<h3>Greptile Summary</h3>
Adds support for Ollama and OpenAI-compatible embedding endpoints to the memory-lancedb plugin, allowing users to run long-term memory without an OpenAI API key. The implementation replaces the hardcoded OpenAI client with a generic `fetch`-based client that conditionally sends authorization headers when an API key is provided.
**Changes made:**
- `config.ts`: Made `embedding.apiKey` optional when `embedding.baseUrl` is set, added `embedding.baseUrl` and `embedding.dimensions` config options, extended built-in dimension mappings for Ollama models (`nomic-embed-text`, `mxbai-embed-large`)
- `index.ts`: Replaced OpenAI SDK with lightweight `fetch` implementation that works with any OpenAI-compatible `/v1/embeddings` endpoint
- `index.test.ts`: Updated test assertions to match new error messages and added validation for Ollama configuration
**Issues found:**
- The `openai` package dependency is still present in `package.json` but is no longer imported or used in the code
<h3>Confidence Score: 4/5</h3>
- Safe to merge with minor cleanup recommended
- The implementation is solid with proper validation, security considerations (UUID validation, prompt injection filtering), and comprehensive test coverage. The fetch-based approach correctly handles OpenAI-compatible endpoints. One non-critical improvement: removing the unused `openai` dependency from package.json.
- Check `extensions/memory-lancedb/package.json` to remove unused dependency
<sub>Last reviewed commit: 8baa707</sub>
## Most Similar PRs
- #19006: feat(memory-lancedb): OpenAI-compatible baseUrl + Ollama provider +... · by martinsen-assistant · 2026-02-17 · 89.2%
- #20771: feat(memory-lancedb): support custom OpenAI-compatible embedding pr... · by marcodelpin · 2026-02-19 · 87.4%
- #19865: memory: add Ollama embedding provider · by nico-hoff · 2026-02-18 · 82.8%
- #17874: feat(memory-lancedb): Custom OpenAI BaseURL & Dimensions Support · by rish2jain · 2026-02-16 · 82.8%
- #17566: memory-lancedb: support local OpenAI-compatible embeddings · by lumenradley · 2026-02-15 · 82.4%
- #10550: feat(memory-lancedb): local embeddings via node-llama-cpp · by namick · 2026-02-06 · 81.4%
- #7278: feat(ollama): optimize local LLM support with auto-discovery and ti... · by alltomatos · 2026-02-02 · 75.1%
- #11877: feat(ollama): auto-detect vision capability via /api/show · by Nina-VanKhan · 2026-02-08 · 74.0%
- #17701: fix(memory-lancedb): add gemini-embedding-001 and baseUrl support · by Phineas1500 · 2026-02-16 · 72.7%
- #16098: fix: omit tools param for models without tool support, surface erro... · by claw-sylphx · 2026-02-14 · 72.4%