#13079: feat: Add OpenAI-compatible API option to CLI for self-hosted models
Labels: cli, commands, stale
Cluster: Model Authentication Enhancements
This PR adds a new OpenAI-compatible API option to the `openclaw onboard` and `openclaw configure` commands, allowing users to easily configure self-hosted models like LM Studio, Open WebUI, and Ollama.
Key features:
- New "OpenAI-compatible API" option in the auth provider selection
- Interactive prompts for configuring self-hosted model parameters:
  * Service base URL (e.g., `http://localhost:1234/v1`)
  * API key (if required)
  * Model name
  * Provider name
  * Reasoning capability (true/false)
  * Context window size
  * Max tokens
- Prevents model name conflicts by disallowing '/' in model names
- Maintains alphabetical ordering of auth choice groups
- Updates both onboard and configure flows to support the new option
This enhancement makes it much easier for users running self-hosted models to integrate them with OpenClaw without manual JSON configuration.
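For illustration, here is a hedged sketch of the kind of config the interactive flow might produce. The `models.providers` and `agents.defaults.models` keys come from the review summary in this PR; the provider id `lmstudio`, the model name, and field names such as `baseUrl` and `contextWindow` are illustrative assumptions, not the actual schema:

```json
{
  "models": {
    "providers": {
      "lmstudio": {
        "baseUrl": "http://localhost:1234/v1",
        "apiKey": "optional-key",
        "api": "openai-completions",
        "models": [
          {
            "name": "qwen2.5-7b-instruct",
            "reasoning": false,
            "contextWindow": 32768,
            "maxTokens": 4096
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "models": ["lmstudio/qwen2.5-7b-instruct"]
    }
  }
}
```

Each field maps to one of the interactive prompts listed above, so users no longer have to hand-edit JSON to wire up a local endpoint.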
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
This PR introduces a new auth choice (`self-hosted-openai-api`) and wiring to the onboarding/configure flows so users can configure OpenAI-compatible self-hosted endpoints (base URL, optional API key, provider/model naming, and token limits). It adds the choice to the CLI flag help text, auth choice groups/options, a new interactive apply handler, and a config helper that registers the provider + model in `models.providers` and `agents.defaults.models`.
The main things to double-check before merging are the correctness of the generated provider/model identifiers and whether the chosen provider API type matches how OpenClaw talks to OpenAI-compatible services (the repo already distinguishes `openai-completions` vs `openai-responses` for local providers).
<h3>Confidence Score: 3/5</h3>
- This PR is close to mergeable but has a couple of configuration correctness issues that can break self-hosted setups.
- Most changes are additive and follow existing onboarding patterns, but (1) provider names are not validated to keep model refs well-formed and (2) the self-hosted provider config hard-codes an API type that appears inconsistent with existing local provider defaults in this repo, which can lead to misconfigured endpoints at runtime.
- Files flagged: src/commands/auth-choice.apply.self-hosted-openai.ts, src/commands/onboard-auth.config-core.ts
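The first concern above, keeping model refs well-formed, could be addressed with a small validation helper. This is a hypothetical sketch (the function name and error wording are my own, not from the PR), showing the idea of rejecting `/` in both the provider and model names so the combined `<provider>/<model>` ref stays unambiguous:

```typescript
// Hypothetical sketch: validate user-supplied provider and model names so the
// combined "<provider>/<model>" ref parses unambiguously.
function validateName(kind: "provider" | "model", name: string): string {
  const trimmed = name.trim();
  if (trimmed.length === 0) {
    throw new Error(`${kind} name must not be empty`);
  }
  if (trimmed.includes("/")) {
    // A '/' inside either part would make the ref ambiguous when split.
    throw new Error(`${kind} name must not contain '/'`);
  }
  return trimmed;
}

// Build a model ref from validated parts.
const modelRef = `${validateName("provider", "lmstudio")}/${validateName("model", "qwen2.5-7b")}`;
console.log(modelRef); // → lmstudio/qwen2.5-7b
```

The PR already disallows `/` in model names; applying the same check to provider names would close the remaining gap the review points out.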
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
Most Similar PRs
- #3591: CLI: add OpenAI-compatible endpoint auth choice (by surak · 2026-01-28 · 82.1%)
- #12059: feat(agents): Add Azure AI Foundry credential support (by lisanyambere · 2026-02-08 · 81.0%)
- #16388: Fix: Show model selector during onboarding for all auth choices (by saurav470 · 2026-02-14 · 79.7%)
- #7272: Models: add SiliconFlow provider (by qychen2001 · 2026-02-02 · 79.2%)
- #10424: feat: Add OVHcloud AI Endpoints as a provider (by eliasto · 2026-02-06 · 79.0%)
- #14475: feat: Add native Azure OpenAI support (by dzianisv · 2026-02-12 · 79.0%)
- #16033: fix: add model configuration step to onboard command (by MisterGuy420 · 2026-02-14 · 78.6%)
- #16099: feat: add opencode-cli as CLI backend provider (by imwxc · 2026-02-14 · 78.4%)
- #21884: feat(models): auth improvements — status command, heuristics, multi... (by kckylechen1 · 2026-02-20 · 78.2%)
- #7051: Add io-intelligence model inference provider (by rajagurunath · 2026-02-02 · 77.0%)