#20771: feat(memory-lancedb): support custom OpenAI-compatible embedding providers
extensions: memory-lancedb
size: S
Cluster: Memory Database Enhancements
## Summary
Adds `baseUrl` and `dimensions` configuration to the `memory-lancedb` plugin, enabling use of **any OpenAI-compatible embedding endpoint** instead of being locked to OpenAI's API.
This allows users to run memory embeddings with:
- **Ollama** (`http://localhost:11434/v1`)
- **LM Studio** (local endpoint)
- **vLLM** / **text-generation-inference**
- **Custom ONNX Runtime servers**
- Any other OpenAI-compatible embedding API
### Changes
**`config.ts`**
- Added `baseUrl?: string` and `dimensions?: number` to `MemoryConfig.embedding` type
- `vectorDimsForModel()` now accepts an optional `configDimensions` fallback, so unknown models work when `dimensions` is specified in config
- `parse()` passes through `baseUrl` and `dimensions` to the returned config (previously these were silently dropped)
- Updated `assertAllowedKeys` to accept the new fields
- Added `uiHints` for `baseUrl` and `dimensions` (marked as `advanced`)
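The fallback logic can be sketched like this (a simplified illustration, not the plugin's actual source; the known-model table is trimmed to the two OpenAI models named in this PR):

```typescript
// Built-in dimensions for models the plugin already knows about.
const KNOWN_DIMS: Record<string, number> = {
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
};

// Known models keep their built-in size, unknown models use the dimensions
// supplied in config, and an unknown model with no configured dimensions
// fails fast with an actionable error.
function vectorDimsForModel(model: string, configDimensions?: number): number {
  const known = KNOWN_DIMS[model];
  if (known !== undefined) return known;
  if (configDimensions !== undefined) return configDimensions;
  throw new Error(
    `Unknown embedding model "${model}": set "dimensions" in the embedding config`,
  );
}
```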
**`index.ts`**
- `Embeddings` constructor accepts optional `baseUrl` and passes it to `new OpenAI({ apiKey, baseURL })`
- `register()` passes `cfg.embedding.baseUrl` and `cfg.embedding.dimensions` through to the relevant constructors
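A minimal sketch of the constructor pass-through, with a stub standing in for the `openai` client (the real client constructor does accept `{ apiKey, baseURL }`; everything else here is illustrative, not the plugin's exact API):

```typescript
// Stub with the same option shape the openai client constructor takes; the
// plugin passes these options to `new OpenAI(...)`.
class StubOpenAI {
  constructor(public readonly opts: { apiKey: string; baseURL?: string }) {}
}

class Embeddings {
  readonly client: StubOpenAI;

  constructor(apiKey: string, baseUrl?: string) {
    // When baseUrl is undefined, the client falls back to the official
    // OpenAI endpoint, which keeps existing configs working unchanged.
    this.client = new StubOpenAI({ apiKey, baseURL: baseUrl });
  }
}
```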
**`openclaw.plugin.json`**
- Removed hardcoded `enum` restriction on embedding model — any string is now accepted
- Added `baseUrl` and `dimensions` properties to the JSON schema with descriptions
- Updated `uiHints` labels/help text
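The schema additions would look roughly like this (a hypothetical excerpt; property descriptions are paraphrased, not copied from the PR):

```json5
// Illustrative fragment of the embedding properties in openclaw.plugin.json
{
  "baseUrl": {
    "type": "string",
    "description": "OpenAI-compatible embedding endpoint; defaults to OpenAI's API when omitted"
  },
  "dimensions": {
    "type": "number",
    "description": "Embedding vector size; required for models the plugin does not know"
  }
}
```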
**`index.test.ts`**
- Added 3 new tests:
- Custom `baseUrl` + `dimensions` config is accepted
- Known model works without `dimensions`
- Unknown model without `dimensions` throws a helpful error
### Example Configuration
```json5
// Local Ollama embeddings
{
  "embedding": {
    "apiKey": "ollama",
    "model": "nomic-embed-text",
    "baseUrl": "http://localhost:11434/v1",
    "dimensions": 768
  }
}

// Local ONNX Runtime server
{
  "embedding": {
    "apiKey": "local",
    "model": "multilingual-e5-base",
    "baseUrl": "http://localhost:11434/v1",
    "dimensions": 768
  }
}

// OpenAI (unchanged, backwards compatible)
{
  "embedding": {
    "apiKey": "sk-proj-...",
    "model": "text-embedding-3-small"
  }
}
```
### Backwards Compatibility
- Fully backwards compatible: existing configs without `baseUrl`/`dimensions` behave exactly as before
- Known models (`text-embedding-3-small`, `text-embedding-3-large`) don't need `dimensions`
- `baseUrl` defaults to OpenAI's endpoint when not specified
## Test plan
- [x] All 9 existing tests pass
- [x] 3 new tests added for custom config
- [x] Tested end-to-end with local ONNX Runtime embedding server on CUDA GPU (multilingual-e5-base, 768 dims)
- [ ] Verify with Ollama embedding endpoint
- [ ] Verify with LM Studio
Closes #8118
Closes #17564
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Testing: fully tested (automated + manual verification)
I understand and can explain all changes in this PR.