#20476: Providers: add DigitalOcean Gradient AI inference endpoint
Labels: `commands`, `agents`, `size: M` · Cluster: Provider Support Enhancements
#### Summary
Adds the DigitalOcean Gradient AI inference endpoint as a default provider option in the onboarding flow. Users can now select DigitalOcean Gradient when configuring OpenClaw for the first time, with support for multiple models including Llama 3.3 70B Instruct, GPT OSS 120B, and DeepSeek R1 Distill Llama 70B.
Author: lobster-biscuit
#### Behavior Changes
- New provider group "DigitalOcean Gradient" appears in onboarding auth choice prompts
- New `--digitalocean-api-key` flag available for non-interactive onboarding
- `DIGITALOCEAN_API_KEY` environment variable is now recognized
- Default model: `digitalocean/llama3.3-70b-instruct`
- API endpoint: `https://inference.do-ai.run/v1` (OpenAI-compatible)
- Three model options supported:
  - Llama 3.3 70B Instruct (default)
  - GPT OSS 120B
  - DeepSeek R1 Distill Llama 70B (reasoning model)
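Providers like this are typically registered as a static catalog entry. The sketch below shows one plausible shape for such an entry; the interface names, model IDs other than the default, and field names are illustrative assumptions, not OpenClaw's actual types.

```typescript
// Hypothetical shape of an OpenAI-compatible provider entry.
// Only the base URL, env var, and default model ID come from this PR;
// everything else (interfaces, non-default model IDs) is illustrative.
interface ModelDef {
  id: string;
  name: string;
  reasoning: boolean; // reasoning-capability flag mentioned in the sign-off notes
}

interface ProviderDef {
  id: string;
  name: string;
  baseURL: string;
  apiKeyEnv: string;
  defaultModel: string;
  models: ModelDef[];
}

const digitalOceanGradient: ProviderDef = {
  id: "digitalocean",
  name: "DigitalOcean Gradient",
  baseURL: "https://inference.do-ai.run/v1", // OpenAI-compatible endpoint
  apiKeyEnv: "DIGITALOCEAN_API_KEY",
  defaultModel: "digitalocean/llama3.3-70b-instruct",
  models: [
    { id: "llama3.3-70b-instruct", name: "Llama 3.3 70B Instruct", reasoning: false },
    { id: "gpt-oss-120b", name: "GPT OSS 120B", reasoning: false },
    { id: "deepseek-r1-distill-llama-70b", name: "DeepSeek R1 Distill Llama 70B", reasoning: true },
  ],
};
```

Keeping the reasoning flag on each model lets downstream code (e.g. the gateway) branch on capabilities without special-casing provider names.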
#### Codebase and GitHub Search
- [x] I searched the codebase for existing functionality. Searches performed:
  - Searched for similar provider integration patterns (xAI, Moonshot, Venice, Qwen)
  - Verified no existing DigitalOcean Gradient integration
  - Checked the onboarding flow structure and auth choice patterns
#### Tests
- Manual testing performed (see below)
- No new automated tests added (follows existing pattern for provider additions)
- Integration follows established patterns used by other API-key-based providers
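Although no automated tests were added, the credential-resolution precedence implied by the behavior changes (CLI flag first, then environment variable) is easy to sketch. The helper name below is hypothetical, not an OpenClaw internal:

```typescript
// Hypothetical sketch: the --digitalocean-api-key flag should win over
// the DIGITALOCEAN_API_KEY environment variable; the interactive prompt
// would only run when both are absent.
function resolveDigitalOceanKey(
  flagValue: string | undefined,
  env: Record<string, string | undefined>,
): string | undefined {
  return flagValue ?? env["DIGITALOCEAN_API_KEY"];
}
```

A unit test over this precedence would be cheap insurance if the project later adds automated coverage for provider onboarding.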
#### Manual Testing
##### Prerequisites
- DigitalOcean Gradient API key (from DigitalOcean console)
- OpenClaw development environment
##### Steps
1. Run `pnpm openclaw onboard` and select "DigitalOcean Gradient" from provider list
2. Enter API key when prompted
3. Verify model configuration in `~/.openclaw/openclaw.json`
4. Test inference with `openclaw chat` or through the gateway
5. Verify non-interactive onboarding: `pnpm openclaw onboard --digitalocean-api-key=<key>`
6. Test environment variable: `DIGITALOCEAN_API_KEY=<key> pnpm openclaw onboard`
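After step 3, the provider portion of `~/.openclaw/openclaw.json` might look roughly like this. The keys are illustrative; the exact schema belongs to OpenClaw and is not specified in this PR:

```json
{
  "provider": "digitalocean",
  "model": "digitalocean/llama3.3-70b-instruct",
  "baseURL": "https://inference.do-ai.run/v1"
}
```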
#### Sign-Off
- Models used: Claude Sonnet 4.5 (via GitHub Copilot)
- Submitter effort (self-reported): ~3 hours (research, implementation, testing, refinement)
- Agent notes: Implementation follows established patterns for OpenAI-compatible providers; added a comprehensive model catalog with reasoning-capability flags; removed the `anthropic-claude` model entry that was routing incorrectly.
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
Adds DigitalOcean Gradient AI as a provider option, following established patterns for OpenAI-compatible providers. The implementation includes:
- Three model options (Llama 3.3 70B, GPT OSS 120B, DeepSeek R1 Distill Llama 70B)
- API key authentication via flag, environment variable, or interactive prompt
- Integration with both interactive and non-interactive onboarding flows
- Proper configuration for OpenAI-compatible inference endpoint (`https://inference.do-ai.run/v1`)
The changes are consistent with similar providers (xAI, Venice, Moonshot) and integrate cleanly across all necessary touchpoints: auth choices, credential storage, model definitions, and CLI flags.
<h3>Confidence Score: 5/5</h3>
- Safe to merge - implementation follows established provider patterns with no logical errors
- The implementation closely follows existing patterns for OpenAI-compatible providers (Venice, xAI), includes all necessary integration points (auth options, credentials, models, flags, types), and has been manually tested by the author. The code is well-structured and consistent with the codebase conventions.
- No files require special attention
<sub>Last reviewed commit: cecd27a</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
#### Most Similar PRs
- #15742: feat: add Edgee AI Gateway as provider — manthis, 2026-02-13 (79.4%)
- #10424: feat: Add OVHcloud AI Endpoints as a provider — eliasto, 2026-02-06 (78.1%)
- #7051: Add io-intelligence model inference provider — rajagurunath, 2026-02-02 (77.2%)
- #7113: feat(providers): add CommonStack provider support — flhoildy, 2026-02-02 (76.9%)
- #7272: Models: add SiliconFlow provider — qychen2001, 2026-02-02 (76.6%)
- #15991: feat: add Novita AI provider support with dynamic model discovery — Alex-wuhu, 2026-02-14 (76.3%)
- #2429: added cerebras as a model provider — kkkamur07, 2026-01-26 (76.1%)
- #9821: add DeepSeek API provider support — avibrahms, 2026-02-05 (75.9%)
- #7418: feat (amazon-nova): add Amazon Nova 1P provider — 5herlocked, 2026-02-02 (75.3%)
- #13079: feat: Add OpenAI-compatible API option to CLI for self-hosted models — MikeWang0316tw, 2026-02-10 (75.0%)