#10629: feat(sherpa-onnx-tts): enable Sherpa support for Rockchip NPUs using `.rknn` models
Cluster: Model Management Enhancements
# Summary
Sherpa-onnx has native support for running on Rockchip NPUs when configured accordingly (docs [here](https://k2-fsa.github.io/sherpa/onnx/rknn/index.html?highlight=rockchip)).
Official pre-trained models for Rockchip NPUs can be loaded as long as the `--provider` arg is set to `rknn` and the skill logic can load files with the `.rknn` extension (docs [here](https://k2-fsa.github.io/sherpa/onnx/rknn/models.html#sherpa-onnx-rk3588-streaming-zipformer-small-bilingual-zh-en-2023-02-16)). This PR enables the skill to run such models.
## Changes
- Scan directories for models with the `.rknn` extension.
- Pass `--provider=rknn` to the CLI when an `.rknn` model is found.
<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<h3>Greptile Summary</h3>
- Extend `sherpa-onnx-tts` model auto-detection to include `.rknn` files alongside `.onnx`.
- When an `.rknn` model is selected, attempt to pass a `--provider` argument to `sherpa-onnx-offline-tts`.
- Refactor CLI invocation to build an argument array (`cmd`) before calling `spawnSync`.
- The net effect is intended to enable Sherpa ONNX TTS on Rockchip/RKNN runtimes, but the provider-argument construction currently prevents the flag from being applied correctly.
<h3>Confidence Score: 3/5</h3>
- This PR is close to safe to merge, but it currently fails to pass the intended `--provider` flag, so RKNN support won't actually work as described.
- Change scope is small and localized to one wrapper script, but there is a definite runtime logic bug in argument construction that breaks the feature’s primary purpose. Once the provider argument is fixed, risk should be low.
- skills/sherpa-onnx-tts/bin/sherpa-onnx-tts
<!-- greptile_other_comments_section -->
<sub>(2/5) Greptile learns from your feedback when you react with thumbs up/down!</sub>
<!-- /greptile_comment -->
Most Similar PRs
#3792: add ShengSuanYun (胜算云) as a model provider
by shengsuan · 2026-01-29
66.1%
#7570: fix: allow models from providers with auth profiles configured
by DonSqualo · 2026-02-03
66.0%
#20965: feat: Add comprehensive model configuration and discovery for various…
by rodeok · 2026-02-19
65.5%
#7113: feat(providers): add CommonStack provider support
by flhoildy · 2026-02-02
64.8%
#9123: Feat/smart router backport and custom model provider
by JuliusYang3311 · 2026-02-04
64.6%
#14508: fix(models): allow forward-compat models in allowlist check
by jonisjongithub · 2026-02-12
64.5%
#15991: feat: add Novita AI provider support with dynamic model discovery
by Alex-wuhu · 2026-02-14
64.3%
#5500: Fix #5290 Bedrock Auto Discovery fails to retrieve or support Inferen…
by heqiqi · 2026-01-31
64.3%
#8963: fix(bedrock): fix amazon bedrock model problem of dealing with profile
by 67ailab · 2026-02-04
63.8%
#9822: fix: allow local/custom model providers for sub-agent inference
by stammtobias91 · 2026-02-05
63.5%