Description
Problem (one or two sentences)
Unable to use specialized models such as moonshotai/kimi-k2-thinking, DeepSeek-V3.2, and any other model served through the NVIDIA API. I have already tried switching from the OpenAI Compatible provider to every other provider option, but none of them work when pointed at the NVIDIA API URL with my key.
Context (who is affected and when)
Anyone who wants to use NVIDIA as a model provider.
Desired behavior (conceptual, not technical)
Include NVIDIA as a supported provider, or add something that allows users to select any model format in the advanced settings.
Constraints / preferences (optional)
No response
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear context and impact
Roo Code Task Links (optional)
No response
Acceptance criteria (optional)
No response
Proposed approach (optional)
Include NVIDIA as a supported provider, or add a way to handle any model format via the advanced settings. For example, I currently use the OpenAI Compatible provider; at the bottom of its provider settings, add an option to manage (or auto-detect) the special format used by the current model. For instance, moonshotai/kimi-k2-thinking obviously uses the kimi-k2-thinking schema, but let the user override the selection in case the auto-detection picks the wrong one.
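For context, a sketch of what a request to NVIDIA's hosted API looks like when treated as an OpenAI-compatible endpoint. This is an illustration, not Roo Code's implementation: the base URL is NVIDIA's publicly documented `integrate.api.nvidia.com` endpoint, the model id is the one from this issue, and `build_chat_request` is a hypothetical helper that only assembles the request so nothing is sent over the network.

```python
import json

# NVIDIA's hosted inference endpoint speaks the OpenAI Chat Completions
# schema, which is why the "OpenAI Compatible" provider is the natural
# place to point it. Base URL per NVIDIA's public API docs (assumption).
NVIDIA_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble URL, headers, and JSON body for one chat completion call.

    Hypothetical helper for illustration; it builds the request but does
    not send it, so it carries no network or API-key dependency.
    """
    return {
        "url": f"{NVIDIA_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # NVIDIA keys look like "nvapi-..."
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,  # e.g. the model id from this issue
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("moonshotai/kimi-k2-thinking", "Hello", "nvapi-XXXX")
print(req["url"])
```

The request shape itself is standard; what this issue asks for is the layer on top of it, i.e. letting the client know (or letting the user specify) which response/"thinking" schema a given model id such as `moonshotai/kimi-k2-thinking` uses.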
Trade-offs / risks (optional)
No response