
#17030: feat(memory-lancedb): support Ollama and OpenAI-compatible embedding endpoints

by nightfullstar · open · 2026-02-15 10:12
Labels: extensions: memory-lancedb · stale · size: S
## Summary

The memory-lancedb plugin previously required an OpenAI API key and only talked to OpenAI's embeddings API. This PR adds support for **Ollama** and any other service that exposes the OpenAI-compatible `/v1/embeddings` API (e.g. local or self-hosted endpoints), so you can run long-term memory without an OpenAI key.

## Scope

- **Config** (`config.ts`): `embedding.apiKey` is now optional when `embedding.baseUrl` is set (e.g. Ollama). Added `embedding.baseUrl` (default `https://api.openai.com/v1`) and `embedding.dimensions` (for models not in the built-in list). Built-in dimensions extended with `nomic-embed-text` (768) and `mxbai-embed-large` (1024).
- **Embeddings** (`index.ts`): Replaced the hardcoded `openai` client with a small `fetch`-based client that uses `baseUrl` and only sends `Authorization` when `apiKey` is set.
- **Tests** (`index.test.ts`): Updated the "missing apiKey" assertion to the new error message; added a test that an Ollama-style config (baseUrl + model, no apiKey) parses correctly.
- **Docs**: Added `OLLAMA-SUPPORT.md` describing the change and how to use the plugin with Ollama.

## User-facing changes

- **New config options**: `plugins.entries.memory-lancedb.config.embedding.baseUrl` (e.g. `http://localhost:11434/v1` for Ollama) and `embedding.dimensions` (required for unknown models).
- **Optional API key**: When `baseUrl` points to a local or OpenAI-compatible endpoint (e.g. Ollama), `embedding.apiKey` can be omitted.
- **Existing OpenAI usage**: Unchanged. Omit `baseUrl` (or leave the default) and set `embedding.apiKey` as before.

## Testing

- `pnpm test -- extensions/memory-lancedb/index.test.ts` — all tests pass (including the new Ollama config test).
- Lint/format: `pnpm check` (no new issues).
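The conditional-auth client described under Scope could look roughly like the following sketch. This is a hypothetical illustration, not the plugin's actual code: the names `EmbeddingOptions`, `buildHeaders`, and `embed` are assumptions, and the real `index.ts` may structure this differently.

```typescript
// Hypothetical sketch of a fetch-based embeddings client.
// Names and shapes are illustrative; the real index.ts may differ.
interface EmbeddingOptions {
  baseUrl: string;   // e.g. "http://localhost:11434/v1" for Ollama
  model: string;     // e.g. "nomic-embed-text"
  apiKey?: string;   // omitted for local endpoints
}

// Build request headers, sending Authorization only when apiKey is set.
function buildHeaders(opts: EmbeddingOptions): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (opts.apiKey) headers["Authorization"] = `Bearer ${opts.apiKey}`;
  return headers;
}

// POST to the OpenAI-compatible /v1/embeddings endpoint and return vectors.
async function embed(opts: EmbeddingOptions, input: string[]): Promise<number[][]> {
  const res = await fetch(`${opts.baseUrl.replace(/\/$/, "")}/embeddings`, {
    method: "POST",
    headers: buildHeaders(opts),
    body: JSON.stringify({ model: opts.model, input }),
  });
  if (!res.ok) throw new Error(`embeddings request failed: ${res.status}`);
  const json = await res.json();
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}
```

The key design point is that the `Authorization` header is added only when `apiKey` is present, so local endpoints such as Ollama (which ignore or reject auth) work without one, while hosted OpenAI-compatible services still receive the bearer token.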
## Example (Ollama)

```json
{
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "config": {
          "embedding": {
            "baseUrl": "http://localhost:11434/v1",
            "model": "nomic-embed-text"
          },
          "dbPath": "~/.openclaw/memory/lancedb",
          "autoCapture": true,
          "autoRecall": true
        }
      }
    },
    "slots": {
      "memory": "memory-lancedb"
    }
  }
}
```

No `apiKey` required; run `ollama pull nomic-embed-text` and ensure Ollama is serving.

### Greptile Summary

Adds support for Ollama and OpenAI-compatible embedding endpoints to the memory-lancedb plugin, allowing users to run long-term memory without an OpenAI API key. The implementation replaces the hardcoded OpenAI client with a generic `fetch`-based client that conditionally sends authorization headers when an API key is provided.

**Changes made:**

- `config.ts`: Made `embedding.apiKey` optional when `embedding.baseUrl` is set, added `embedding.baseUrl` and `embedding.dimensions` config options, extended built-in dimension mappings for Ollama models (`nomic-embed-text`, `mxbai-embed-large`)
- `index.ts`: Replaced the OpenAI SDK with a lightweight `fetch` implementation that works with any OpenAI-compatible `/v1/embeddings` endpoint
- `index.test.ts`: Updated test assertions to match new error messages and added validation for the Ollama configuration

**Issues found:**

- The `openai` package dependency is still present in `package.json` but is no longer imported or used in the code

### Confidence Score: 4/5

- Safe to merge with minor cleanup recommended
- The implementation is solid with proper validation, security considerations (UUID validation, prompt injection filtering), and comprehensive test coverage. The fetch-based approach correctly handles OpenAI-compatible endpoints. One non-critical improvement: removing the unused `openai` dependency from `package.json`.
- Check `extensions/memory-lancedb/package.json` to remove the unused dependency

<sub>Last reviewed commit: 8baa707</sub>
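The config behavior described above — `apiKey` required only for the default OpenAI endpoint, with `dimensions` resolved from a built-in map or supplied explicitly — can be sketched as follows. This is a hypothetical illustration; `validateEmbeddingConfig` and `BUILTIN_DIMENSIONS` are assumed names, not the plugin's actual identifiers in `config.ts`.

```typescript
// Hypothetical sketch of the validation logic in config.ts; the
// dimension values for the Ollama models come from the PR description.
const BUILTIN_DIMENSIONS: Record<string, number> = {
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "nomic-embed-text": 768,     // added in this PR
  "mxbai-embed-large": 1024,   // added in this PR
};

const DEFAULT_BASE_URL = "https://api.openai.com/v1";

interface EmbeddingConfig {
  model: string;
  baseUrl?: string;
  apiKey?: string;
  dimensions?: number;
}

// apiKey is only required when targeting the default (OpenAI) endpoint;
// dimensions must come from the built-in map or be set explicitly.
function validateEmbeddingConfig(cfg: EmbeddingConfig): { baseUrl: string; dimensions: number } {
  const baseUrl = cfg.baseUrl ?? DEFAULT_BASE_URL;
  if (baseUrl === DEFAULT_BASE_URL && !cfg.apiKey) {
    throw new Error("embedding.apiKey is required when no custom embedding.baseUrl is set");
  }
  const dimensions = cfg.dimensions ?? BUILTIN_DIMENSIONS[cfg.model];
  if (dimensions === undefined) {
    throw new Error(`unknown model "${cfg.model}": set embedding.dimensions explicitly`);
  }
  return { baseUrl, dimensions };
}
```

Under this scheme the Ollama example above (baseUrl + model, no apiKey) passes validation because `nomic-embed-text` has a built-in dimension, while an unknown model without `embedding.dimensions` fails with an explicit error.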
