feat: Introduce OpenRouter adapter #123
base: main
Conversation
| Command | Status | Duration | Result |
|---|---|---|---|
| `nx affected --targets=test:sherif,test:knip,tes...` | ❌ Failed | 10s | View ↗ |
| `nx run-many --targets=build --exclude=examples/**` | ✅ Succeeded | 4s | View ↗ |
☁️ Nx Cloud last updated this comment at 2025-12-08 19:25:03 UTC
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-gemini
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
commit:
would like to see this adapter for python 😄

Once the TypeScript adapter is finalized and reviewed, I'll definitely work on adding a Python version as well.
Walkthrough

This PR introduces a complete OpenRouter adapter package for TanStack AI, featuring streaming chat completions, tool integration, multimodal content support, web search, image generation, and comprehensive type definitions with full test coverage and documentation.

Changes

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Poem
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Hold off on adding changes; we have implemented huge breaking changes for adapters internally.
Actionable comments posted: 7
🧹 Nitpick comments (9)
packages/typescript/ai-openrouter/live-tests/package.json (1)
1-20: Nice separation for “live” tests; consider making env requirements explicit.

These scripts likely require `OPENROUTER_API_KEY` (and possibly other vars). Consider adding a short README in `live-tests/` (or a `pretest` script) to fail fast with a clear error if env vars are missing, as sketched below.
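A minimal fail-fast check could look like the following (a sketch; the file name, the wiring via a `pretest` script, and the assumption that only `OPENROUTER_API_KEY` is required are hypothetical):

```typescript
// live-tests/check-env.ts — hypothetical pretest helper, not part of this PR.
const required = ['OPENROUTER_API_KEY']
const missing = required.filter((name) => !process.env[name])

if (missing.length > 0) {
  console.error(
    `Missing required env vars: ${missing.join(', ')}. ` +
      'Set them in live-tests/.env.local or the shell environment.',
  )
  process.exit(1)
}
```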
packages/typescript/ai-openrouter/src/tools/function-tool.ts (1)

13-26: Consider removing the redundant fallback.

The `|| {}` on line 23 is redundant since `jsonSchema` already defaults to `{}` on line 16. Apply this diff:

```diff
   return {
     type: 'function',
     function: {
       name: tool.name,
       description: tool.description,
-      parameters: jsonSchema || {},
+      parameters: jsonSchema,
     },
   }
```

packages/typescript/ai-openrouter/live-tests/tool-test.ts (1)
7-16: Consider logging environment file errors.

The empty catch block silently ignores all errors when reading `.env.local`. While this allows the file to be optional, it could hide permission errors or other unexpected issues. Consider logging a debug message:

```diff
   const envContent = readFileSync(join(__dirname, '.env.local'), 'utf-8')
   envContent.split('\n').forEach((line) => {
     const match = line.match(/^([^=]+)=(.*)$/)
     if (match) {
       process.env[match[1].trim()] = match[2].trim()
     }
   })
-} catch (e) {}
+} catch (e) {
+  // .env.local is optional
+}
```

packages/typescript/ai-openrouter/live-tests/web-search-test.ts (1)
6-15: Consider adding a comment for the empty catch block.

Similar to tool-test.ts, the empty catch block silently ignores errors. Consider adding a comment to clarify this is intentional.

```diff
     }
   })
-} catch {}
+} catch {
+  // .env.local is optional
+}
```

packages/typescript/ai-openrouter/live-tests/image-test.ts (1)
6-15: Consider adding a comment for the empty catch block.

Consistent with other live tests, consider clarifying the intentional empty catch block.

```diff
     }
   })
-} catch {}
+} catch {
+  // .env.local is optional
+}
```

packages/typescript/ai-openrouter/src/openrouter-adapter.ts (3)
162-167: Consider using `declare` for type-only properties instead of `@ts-ignore`.

The `@ts-ignore` comments suppress TypeScript errors for unassigned properties used only for type inference. Using the `declare` modifier would be more idiomatic:

```diff
-  // @ts-ignore - We never assign this at runtime and it's only used for types
-  _modelProviderOptionsByName: OpenRouterChatModelProviderOptionsByName
-  // @ts-ignore - We never assign this at runtime and it's only used for types
-  _modelInputModalitiesByName?: OpenRouterModelInputModalitiesByName
-  // @ts-ignore - We never assign this at runtime and it's only used for types
-  _messageMetadataByModality?: OpenRouterMessageMetadataByModality
+  declare _modelProviderOptionsByName: OpenRouterChatModelProviderOptionsByName
+  declare _modelInputModalitiesByName?: OpenRouterModelInputModalitiesByName
+  declare _messageMetadataByModality?: OpenRouterMessageMetadataByModality
```
256-287: Consider adding abort signal support for consistency with `chatStream`.

The `chatStream` method supports `options.request?.signal` for request cancellation, but `summarize` does not accept any abort mechanism. For long-running summarization requests, users may want the ability to cancel.
376-380: Silent JSON parse failure could hide malformed SSE data.

The `catch` block silently continues when JSON parsing fails. While this provides resilience against malformed data, consider logging a warning in debug mode to help diagnose issues.

packages/typescript/ai-openrouter/src/text/text-provider-options.ts (1)
309-321: Duplicate `tool_choice` definition.

The `tool_choice` property is defined identically in both `OpenRouterBaseOptions` (lines 245-254) and `InternalTextProviderOptions` (lines 312-321). Since `InternalTextProviderOptions` extends `ExternalTextProviderOptions` (which aliases `OpenRouterBaseOptions`), this redefinition is redundant. Remove the duplicate definition:

```diff
   tools?: Array<{
     type: 'function'
     function: {
       name: string
       description?: string
       parameters: Record<string, unknown>
     }
   }>
-  /**
-   * Controls which (if any) tool the model should use.
-   */
-  tool_choice?:
-    | 'none'
-    | 'auto'
-    | 'required'
-    | {
-        type: 'function'
-        function: {
-          name: string
-        }
-      }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (19)
- docs/adapters/openrouter.md (1 hunks)
- docs/config.json (1 hunks)
- packages/typescript/ai-openrouter/README.md (2 hunks)
- packages/typescript/ai-openrouter/live-tests/image-test.ts (1 hunks)
- packages/typescript/ai-openrouter/live-tests/package.json (1 hunks)
- packages/typescript/ai-openrouter/live-tests/tool-test.ts (1 hunks)
- packages/typescript/ai-openrouter/live-tests/web-search-test.ts (1 hunks)
- packages/typescript/ai-openrouter/package.json (1 hunks)
- packages/typescript/ai-openrouter/src/index.ts (1 hunks)
- packages/typescript/ai-openrouter/src/message-types.ts (1 hunks)
- packages/typescript/ai-openrouter/src/model-meta.ts (1 hunks)
- packages/typescript/ai-openrouter/src/openrouter-adapter.ts (1 hunks)
- packages/typescript/ai-openrouter/src/text/text-provider-options.ts (1 hunks)
- packages/typescript/ai-openrouter/src/tools/function-tool.ts (1 hunks)
- packages/typescript/ai-openrouter/src/tools/index.ts (1 hunks)
- packages/typescript/ai-openrouter/src/tools/tool-converter.ts (1 hunks)
- packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts (1 hunks)
- packages/typescript/ai-openrouter/tsconfig.json (1 hunks)
- packages/typescript/ai-openrouter/vite.config.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
packages/typescript/ai-openrouter/src/message-types.ts (1)
packages/typescript/ai-openrouter/src/index.ts (6)
OpenRouterTextMetadata (12-12), OpenRouterImageMetadata (13-13), OpenRouterAudioMetadata (14-14), OpenRouterVideoMetadata (15-15), OpenRouterDocumentMetadata (16-16), OpenRouterMessageMetadataByModality (17-17)
packages/typescript/ai-openrouter/src/text/text-provider-options.ts (1)
packages/typescript/ai-openrouter/src/index.ts (5)
WebPlugin (20-20), ProviderPreferences (21-21), ReasoningOptions (22-22), StreamOptions (23-23), ImageConfig (24-24)
packages/typescript/ai-openrouter/live-tests/image-test.ts (3)
packages/typescript/ai/src/stream/processor.ts (1)
process (390-413)

packages/typescript/ai-openrouter/src/index.ts (1)
createOpenRouter (3-3)

packages/typescript/ai-openrouter/src/openrouter-adapter.ts (1)
createOpenRouter (674-679)
packages/typescript/ai-openrouter/live-tests/tool-test.ts (3)
packages/typescript/ai/src/stream/processor.ts (1)
process (390-413)

packages/typescript/ai-openrouter/src/index.ts (1)
createOpenRouter (3-3)

packages/typescript/ai-openrouter/src/openrouter-adapter.ts (1)
createOpenRouter (674-679)
packages/typescript/ai-openrouter/src/openrouter-adapter.ts (4)
packages/typescript/ai-openrouter/src/text/text-provider-options.ts (2)
ExternalTextProviderOptions (271-271), InternalTextProviderOptions (273-322)

packages/typescript/ai-openrouter/src/tools/tool-converter.ts (2)
OpenRouterTool (5-5), convertToolsToProviderFormat (7-11)

packages/typescript/ai-openrouter/src/model-meta.ts (2)
OpenRouterChatModelProviderOptionsByName (5-7), OpenRouterModelInputModalitiesByName (9-11)

packages/typescript/ai-openrouter/src/message-types.ts (2)
OpenRouterMessageMetadataByModality (13-19), OpenRouterImageMetadata (3-5)
packages/typescript/ai-openrouter/src/model-meta.ts (2)
packages/typescript/ai-openrouter/src/index.ts (2)
OpenRouterChatModelProviderOptionsByName (8-8), OpenRouterModelInputModalitiesByName (9-9)

packages/typescript/ai-openrouter/src/text/text-provider-options.ts (1)
OpenRouterBaseOptions (110-269)
🪛 Biome (2.1.2)
packages/typescript/ai-openrouter/src/message-types.ts
[error] 1-1: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
[error] 7-7: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
[error] 9-9: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
[error] 11-11: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
🪛 markdownlint-cli2 (0.18.1)
packages/typescript/ai-openrouter/README.md
47-47: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
🔇 Additional comments (31)
docs/config.json (1)
97-120: Confirm docs route matches the OpenRouter doc page id/slug.
to: "adapters/openrouter"assumes the OpenRouter docs page resolves to that route, butdocs/adapters/openrouter.mdusesid: openrouter-adapter(which some doc systems treat as the slug). Please verify the link works in the built docs and align either thetoor the doc frontmatter id if needed.packages/typescript/ai-openrouter/README.md (1)
40-46: Copy update looks fine.

docs/adapters/openrouter.md (1)
1-57: Doc slug/id vs sidebar route + model examples need verification.
- Frontmatter `id: openrouter-adapter` may not match the sidebar route (`adapters/openrouter`). If your doc system uses `id` as the slug, consider changing it to `openrouter` (or update the sidebar).
- The “Available Models” examples look like illustrative model names; please verify they’re valid on OpenRouter (or label them clearly as examples).
packages/typescript/ai-openrouter/vite.config.ts (1)
1-36: Looks reasonable; please verify JSON import works in your Node/Vite setup.
`import packageJson from './package.json'` can be sensitive to Node ESM + TS + tooling config (JSON modules / import assertions). Since this is config-time code, please confirm `pnpm -C packages/typescript/ai-openrouter test:lib` works in CI and locally.

packages/typescript/ai-openrouter/tsconfig.json (1)

1-9: LGTM for a package-local tsconfig.

packages/typescript/ai-openrouter/src/tools/index.ts (1)

1-5: LGTM! The barrel exports are clean and follow standard TypeScript module patterns.
packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts (7)
1-14: LGTM! Test setup and imports are well-structured.
16-35: LGTM! The SSE mock helper correctly simulates the streaming response format.
42-141: LGTM! Comprehensive test coverage for API payload mapping. The test validates all critical fields including provider-specific options like plugins.
143-223: LGTM! Excellent test for streaming behavior, validating incremental content delivery and usage reporting.
225-311: LGTM! Well-designed test for streaming tool calls with proper validation of incremental argument accumulation.
313-366: LGTM! Multimodal test properly validates the transformation of text and image content parts.
368-397: LGTM! Error handling test correctly validates error chunk generation on HTTP errors.
packages/typescript/ai-openrouter/src/tools/tool-converter.ts (1)
1-11: LGTM! Clean converter implementation with good semantic naming via the type alias.
packages/typescript/ai-openrouter/src/tools/function-tool.ts (1)
1-11: LGTM! The FunctionTool interface properly models the OpenRouter function tool format.
packages/typescript/ai-openrouter/src/model-meta.ts (1)
1-11: LGTM! The intentionally empty model array aligns with the PR's approach to avoid maintaining a static model list due to OpenRouter's frequent model additions. The type definitions provide good flexibility for model-specific configuration.
packages/typescript/ai-openrouter/live-tests/tool-test.ts (3)
18-70: LGTM! API key validation and tool definition are well-structured. The extensive argument validation in the execute function is appropriate for a test focused on verifying tool argument passing.
72-142: LGTM! The streaming and tool execution logic is well-implemented with proper error handling for JSON parsing and tool execution.
144-183: LGTM! Comprehensive test validation with clear success/failure criteria and helpful error messages.
packages/typescript/ai-openrouter/live-tests/web-search-test.ts (2)
17-77: LGTM! Clean implementation of web search plugin testing with proper chunk handling for content, done, and error types.
79-100: LGTM! Clear test validation with appropriate success/failure conditions.
packages/typescript/ai-openrouter/live-tests/image-test.ts (4)
17-32: LGTM! API key validation and image URL extraction helper are well-implemented.
34-131: LGTM! Comprehensive test for Gemini image generation with excellent result formatting and validation.
133-228: LGTM! Good coverage for Flux image generation. While there's some duplication with the Gemini test, this is acceptable for test code and improves readability.
230-265: LGTM! Excellent test orchestration with clear result aggregation and reporting.
packages/typescript/ai-openrouter/src/index.ts (1)
1-26: LGTM! The barrel export file is well-organized, correctly using `type` for type-only exports and grouping exports logically by their source modules. The public API surface is clean and appropriately scoped.

packages/typescript/ai-openrouter/src/openrouter-adapter.ts (4)

201-203: Throwing inside async generator will be caught correctly.

The `throw` here will be caught by the surrounding `try-catch` block (line 247), converting it to an error chunk. This is the expected behavior.

289-299: LGTM! Clear error message explaining the limitation and directing users to appropriate alternatives.

706-717: Environment detection logic is well-structured.

The `getEnvironment` function properly handles both browser (with injected `window.env`) and Node.js environments. The fallback chain is appropriate.
444-458: Tool call buffering skip-if-no-id logic is overly defensive but unlikely to cause data loss in practice.

The OpenRouter API contract guarantees that `id` and `function.name` are typically present on the first chunk for each tool call; `function.arguments` are delivered incrementally across subsequent chunks. The code skips a tool call delta if no `id` is present and no existing buffer exists (line 451), which means the first chunk must include an `id` to be buffered. Given the API contract, this edge case (where the first chunk lacks an `id`) should rarely or never occur. However, if it does happen, the subsequent chunks with the `id` will fail to find an existing buffer entry, causing data loss. Consider either: (1) removing the `id` check and buffering by `tc.index` alone, since the API guarantees id presence on the first chunk anyway, or (2) adding a fallback to buffer incomplete tool calls and merge them when the id arrives. A sketch of index-keyed buffering follows below.
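A minimal sketch of option (1), keying the buffer by `tc.index` so deltas accumulate even if an early chunk lacks an `id` (the delta shape follows the OpenAI-style streaming format; names are illustrative, not the adapter's actual code):

```typescript
// Illustrative sketch only, not the adapter's implementation.
interface ToolCallDelta {
  index: number
  id?: string
  function?: { name?: string; arguments?: string }
}

interface BufferedToolCall {
  id?: string
  name: string
  arguments: string
}

function bufferToolCallDelta(
  buffers: Map<number, BufferedToolCall>,
  tc: ToolCallDelta,
): void {
  // Key by index so argument fragments merge even when id arrives late.
  const existing = buffers.get(tc.index) ?? { name: '', arguments: '' }
  if (tc.id) existing.id = tc.id
  if (tc.function?.name) existing.name = tc.function.name
  if (tc.function?.arguments) existing.arguments += tc.function.arguments
  buffers.set(tc.index, existing)
}
```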
packages/typescript/ai-openrouter/src/text/text-provider-options.ts (1)

1-108: Well-documented type definitions.

The interfaces are comprehensively documented with JSDoc comments, including default values, value ranges, and links to OpenRouter documentation. This significantly improves developer experience.
```typescript
import { createOpenRouter, type OpenRouterConfig } from "@tanstack/ai-openrouter";

const config: OpenRouterConfig = {
  apiKey: process.env.OPENROUTER_API_KEY!,
  baseURL: "https://openrouter.ai/api/v1", // Optional
  httpReferer: "https://your-app.com", // Optional, for rankings
  xTitle: "Your App Name", // Optional, for rankings
};

const adapter = createOpenRouter(config.apiKey, config);
```
🧩 Analysis chain

🏁 Scripts executed:

```bash
# Find the createOpenRouter function definition and exports
find packages/typescript/ai-openrouter -name "*.ts" -o -name "*.tsx" | head -20

# Search for createOpenRouter function definition
rg -n "export.*createOpenRouter|function createOpenRouter" packages/typescript/ai-openrouter -A 5

# Search for createOpenRouter usage in the codebase
rg -n "createOpenRouter\(" --type=ts -B 2 -A 2 | head -100
```
Fix the createOpenRouter call in the example—remove apiKey from the config object.
The function signature is createOpenRouter(apiKey: string, config?: Omit<OpenRouterConfig, 'apiKey'>). The second parameter must exclude apiKey, so either:
- Pass config without `apiKey`: `createOpenRouter(config.apiKey, { baseURL: "...", httpReferer: "...", xTitle: "..." })`
- Or simplify to: `createOpenRouter(process.env.OPENROUTER_API_KEY!, { baseURL: "...", httpReferer: "...", xTitle: "..." })`
The current example passes the full config (with apiKey), which violates the type signature.
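Following that signature, a corrected version of the docs snippet might look like this (a sketch; the option values are the ones from the existing example):

```typescript
import { createOpenRouter } from "@tanstack/ai-openrouter";

// API key as the first argument; the config object omits apiKey.
const adapter = createOpenRouter(process.env.OPENROUTER_API_KEY!, {
  baseURL: "https://openrouter.ai/api/v1", // Optional
  httpReferer: "https://your-app.com", // Optional, for rankings
  xTitle: "Your App Name", // Optional, for rankings
});
```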
🤖 Prompt for AI Agents
In docs/adapters/openrouter.md around lines 31 to 42, the example calls
createOpenRouter with a config object that still contains apiKey which violates
the function signature; update the call to pass the API key as the first
argument and pass a config object that omits apiKey (e.g. call
createOpenRouter(process.env.OPENROUTER_API_KEY!, { baseURL: "...", httpReferer:
"...", xTitle: "..." }) or extract apiKey then call
createOpenRouter(config.apiKey, { baseURL: "...", httpReferer: "...", xTitle:
"..." })).
```json
{
  "name": "@tanstack/ai-openrouter",
  "version": "0.0.1",
  "description": "OpenRouter adapter for TanStack AI",
  "author": "",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/TanStack/ai.git",
    "directory": "packages/typescript/ai-openrouter"
  },
  "type": "module",
  "module": "./dist/esm/index.js",
  "types": "./dist/esm/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/esm/index.d.ts",
      "import": "./dist/esm/index.js"
    }
  },
  "files": [
    "dist",
    "src"
  ],
  "scripts": {
    "build": "vite build",
    "clean": "premove ./build ./dist",
    "lint:fix": "eslint ./src --fix",
    "test:build": "publint --strict",
    "test:eslint": "eslint ./src",
    "test:lib": "vitest run",
    "test:lib:dev": "pnpm test:lib --watch",
    "test:types": "tsc"
  },
  "keywords": [
    "ai",
    "openrouter",
    "tanstack",
    "adapter"
  ],
  "dependencies": {
    "@tanstack/ai": "workspace:*"
  },
  "devDependencies": {
    "@vitest/coverage-v8": "4.0.14",
    "vite": "^7.2.4"
  },
  "peerDependencies": {
    "@tanstack/ai": "workspace:*"
  }
}
```
🧩 Analysis chain

🏁 Scripts executed (repository inspection; repeated runs and output-size lines omitted):

```bash
# Inspect repo layout, workspace config, changeset setup, and root scripts
ls -la
cat pnpm-workspace.yaml
cat .changeset/config.json
cat package.json | jq '.scripts'

# Check peerDependencies and workspace:* usage across workspace packages
find . -path "*/packages/*/package.json" -type f | while read f; do echo "=== $f ==="; jq '.peerDependencies' "$f"; done
rg "workspace:\*" | head -20

# Inspect the new package's files, scripts, and build config
find packages/typescript/ai-openrouter -type f | head -20
cat packages/typescript/ai-openrouter/package.json | jq '.scripts'
find . -maxdepth 2 -name "vite.config.*" -type f

# Check CI workflows
ls -la .github/workflows/
```

🌐 Web queries: vitest `run` vs `--watch` compatibility (including Vitest 4); pnpm handling of `workspace:*` (including in `peerDependencies`) when publishing.

💡 Results:

- Vitest: `vitest run` is the single-run, non-watch command and cannot be combined with `--watch`; use `vitest` / `vitest watch` (or `vitest --watch`) for watch mode, and `vitest run` (or `--run`) when the process should exit after tests, e.g. in CI or lint-staged. Sources: https://vitest.dev/guide/cli, https://vitest.dev/guide/features
- pnpm: when running `pnpm pack` / `pnpm publish`, `workspace:` ranges (including those in `peerDependencies`) are rewritten to real semver versions, so the published manifest will not contain `workspace:*`. Ensure each workspace package has the correct version before publishing, mind publish order, and use `publishConfig` or a pack dry-run to verify the final manifest. Sources: https://pnpm.io/workspaces, https://pnpm.io/package_json
Fix test:lib:dev watch mode.
The current command pnpm test:lib --watch expands to vitest run --watch, but vitest run is incompatible with the --watch flag. Use:
- "test:lib:dev": "pnpm test:lib --watch",
+ "test:lib:dev": "vitest",The peerDependencies using workspace:* is fine—pnpm (v10.17.0) automatically rewrites workspace protocols to real semver versions when publishing via changesets.
🤖 Prompt for AI Agents
In packages/typescript/ai-openrouter/package.json around lines 1 to 51, the
script "test:lib:dev": "pnpm test:lib --watch" expands to "vitest run --watch"
which is invalid; fix by either invoking vitest in watch mode directly (e.g. set
"test:lib:dev" to "vitest --watch") or forward the watch flag to the underlying
script by changing "test:lib:dev" to "pnpm test:lib -- --watch" so the --watch
reaches vitest correctly.
## Bonus: TanStack Start Integration

TanStack AI works with **any** framework (Next.js, Express, Remix, etc.).

**With TanStack Start**, you get a bonus: share implementations between AI tools and server functions with `createServerFnTool`:

```typescript
import { createServerFnTool } from '@tanstack/ai-react'

// Define once, get AI tool AND server function (TanStack Start only)
const getProducts = createServerFnTool({
  name: 'getProducts',
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ query }) => db.products.search(query),
})

// Use in AI chat
chat({ tools: [getProducts.server] })

// Call directly from components (no API endpoint needed!)
const products = await getProducts.serverFn({ query: 'laptop' })
```

No duplicate logic, full type safety, automatic validation. The `serverFn` feature requires TanStack Start. See [docs](https://tanstack.com/ai) for details.
Fix markdown heading level + code sample missing z import.
- Markdownlint is right: after `# TanStack AI`, the next heading level should be `##` (or drop the heading styling for the “Read the docs” line).
- The snippet uses `z.object(...)` but doesn’t import `z`.
````diff
-### <a href="https://tanstack.com/ai">Read the docs →</b></a>
+## <a href="https://tanstack.com/ai">Read the docs →</b></a>
@@
 ```typescript
 import { createServerFnTool } from '@tanstack/ai-react'
+import { z } from 'zod'
````

🤖 Prompt for AI Agents
In packages/typescript/ai-openrouter/README.md around lines 49 to 73, adjust the
markdown heading to ensure it is a second-level heading (##) immediately after
the top-level "# TanStack AI" header (or remove heading styling for the “Read
the docs” line) and update the code example to include the missing zod import by
adding an import for z (import { z } from 'zod') alongside the existing
createServerFnTool import so the z.object usage resolves.
```typescript
export interface OpenRouterTextMetadata {}

export interface OpenRouterImageMetadata {
  detail?: 'auto' | 'low' | 'high'
}

export interface OpenRouterAudioMetadata {}

export interface OpenRouterVideoMetadata {}

export interface OpenRouterDocumentMetadata {}

export interface OpenRouterMessageMetadataByModality {
  text: OpenRouterTextMetadata
  image: OpenRouterImageMetadata
  audio: OpenRouterAudioMetadata
  video: OpenRouterVideoMetadata
  document: OpenRouterDocumentMetadata
}
```
Biome “noEmptyInterface” errors: replace empty interfaces with type aliases.
These empty interfaces will fail Biome as configured. Suggested minimal change:
```diff
-export interface OpenRouterTextMetadata {}
+export type OpenRouterTextMetadata = Record<string, never>
@@
-export interface OpenRouterAudioMetadata {}
+export type OpenRouterAudioMetadata = Record<string, never>
@@
-export interface OpenRouterVideoMetadata {}
+export type OpenRouterVideoMetadata = Record<string, never>
@@
-export interface OpenRouterDocumentMetadata {}
+export type OpenRouterDocumentMetadata = Record<string, never>
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
export type OpenRouterTextMetadata = Record<string, never>

export interface OpenRouterImageMetadata {
  detail?: 'auto' | 'low' | 'high'
}

export type OpenRouterAudioMetadata = Record<string, never>

export type OpenRouterVideoMetadata = Record<string, never>

export type OpenRouterDocumentMetadata = Record<string, never>

export interface OpenRouterMessageMetadataByModality {
  text: OpenRouterTextMetadata
  image: OpenRouterImageMetadata
  audio: OpenRouterAudioMetadata
  video: OpenRouterVideoMetadata
  document: OpenRouterDocumentMetadata
}
```
🧰 Tools
🪛 Biome (2.1.2)
[error] 1-1: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
[error] 7-7: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
[error] 9-9: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
[error] 11-11: An empty interface is equivalent to {}.
Safe fix: Use a type alias instead.
(lint/suspicious/noEmptyInterface)
🤖 Prompt for AI Agents
In packages/typescript/ai-openrouter/src/message-types.ts lines 1-19, Biome's
noEmptyInterface linting fails because several interfaces are empty; replace
each empty interface with a type alias (e.g., `type OpenRouterTextMetadata =
Record<string, never>` and similarly for audio, video, document) and update
OpenRouterMessageMetadataByModality to reference those type aliases instead of
interfaces; keep OpenRouterImageMetadata as an interface (or convert to a type
if you prefer) and preserve the optional `detail` union.
```typescript
  private mapOptions(options: ChatOptions): OpenRouterRequest {
    const providerOptions = options.providerOptions as
      | Omit<InternalTextProviderOptions, 'model' | 'messages' | 'tools'>
      | undefined

    const request: OpenRouterRequest = {
      model: options.model,
      messages: this.convertMessages(options.messages),
      temperature: options.options?.temperature,
      max_tokens: options.options?.maxTokens,
      top_p: options.options?.topP,
      ...providerOptions,
      tools: options.tools
        ? convertToolsToProviderFormat(options.tools)
        : undefined,
    }
```
Spread operator may allow providerOptions to override critical fields.
The spread ...providerOptions (line 575) occurs after setting model, messages, temperature, etc., which means any matching keys in providerOptions would override those values. While InternalTextProviderOptions excludes model | messages | tools, the runtime type assertion doesn't enforce this.
Consider explicitly excluding known keys or validating the spread doesn't contain restricted properties:
```diff
+  const { model: _m, messages: _msg, tools: _t, ...safeProviderOptions } = providerOptions || {}
+
   const request: OpenRouterRequest = {
     model: options.model,
     messages: this.convertMessages(options.messages),
     temperature: options.options?.temperature,
     max_tokens: options.options?.maxTokens,
     top_p: options.options?.topP,
-    ...providerOptions,
+    ...safeProviderOptions,
```
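As an alternative to the inline destructuring, a small helper could strip the restricted keys at runtime. This is a sketch under the assumption that only `model`, `messages`, and `tools` need protecting; it is not the adapter's actual code:

```typescript
// Illustrative helper: drop keys that earlier request fields must win over.
const RESTRICTED_KEYS = ['model', 'messages', 'tools'] as const

function sanitizeProviderOptions<T extends Record<string, unknown>>(
  providerOptions: T | undefined,
): Omit<T, (typeof RESTRICTED_KEYS)[number]> {
  const copy: Record<string, unknown> = { ...(providerOptions ?? {}) }
  for (const key of RESTRICTED_KEYS) {
    delete copy[key]
  }
  return copy as Omit<T, (typeof RESTRICTED_KEYS)[number]>
}
```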
In packages/typescript/ai-openrouter/src/openrouter-adapter.ts around lines 564
to 579, the current spread of providerOptions into the request can accidentally
override critical fields (model, messages, tools, temperature, max_tokens,
top_p) at runtime; instead, filter or pick only allowed keys from
providerOptions before spreading (or explicitly delete/omit any of the
restricted keys if present) so that model/messages/tools and other request
fields set earlier cannot be overridden by providerOptions; implement a short
runtime whitelist/omit step (or validate and throw) and then merge the sanitized
providerOptions into the request.
```typescript
   * Force the model to respond in a specific format.
   */
  response_format?: { type: 'json_object' }
```
🧩 Analysis chain
🌐 Web query:
OpenRouter API response_format documentation json_object json_schema
💡 Result:
Short summary — how OpenRouter's response_format works (json_object / json_schema)
- Parameter: response_format — a map you pass in the request to force structured outputs. [1][2].
- json_object: set response_format = { "type": "json_object" } to require the model to return any valid JSON object (useful when you only need valid JSON, not a specific schema). Also pair with an explicit system/user instruction to produce JSON. [1][3].
- json_schema (aka structured outputs): set response_format = { "type": "json_schema", "json_schema": { ... } } where json_schema contains name, schema (a JSON Schema object), and optional flags like strict. The model’s output will be validated against that schema; streaming support and model compatibility vary by provider/model. [2][4].
- Behavior/opts: some docs mention a structured_outputs boolean/flag and provider/model compatibility; not all models support response_format or both response_format and tools simultaneously — check the model’s supported parameters. [1][2][4].
- Example shape (conceptual):
{
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "weather",
"strict": true,
"schema": { "type":"object", "properties": { "location": {"type":"string"}, "temperature":{"type":"number"} }, "required":["location","temperature"] }
}
}
}
(Model returns JSON matching that schema). [2]
Sources
- OpenRouter API parameters / response_format description. [1]
- OpenRouter structured outputs (json_schema) docs and example. [2]
- OpenRouter API reference / compatibility notes. [4]
References:
[1] https://openrouter-api.yestool.org/docs/api-reference/parameters
[2] https://openrouter.ai/docs/features/structured-outputs
[4] https://openrouter.ai/docs/api-reference/overview
🏁 Script executed:
```bash
# Check the actual type definition in the file
cat -n packages/typescript/ai-openrouter/src/text/text-provider-options.ts | sed -n '180,190p'
```
🌐 Web query:
OpenAI API response_format json_schema structured outputs types
💡 Result:
Summary — OpenAI Structured Outputs (response_format: json_schema)
-
What it is: a response_format option that lets you supply a JSON Schema (or SDK types like Pydantic/Zod) so the model returns JSON that adheres to that schema. Use when you want the assistant’s reply itself to be structured (not a function/tool call). [1][2]
-
How to enable (overview):
- In Responses/Chat APIs set text.format / response_format to type: "json_schema" and include your schema; set strict: true to require schema adherence. SDKs convert Pydantic/Zod to the supported schema automatically. Example usage shown in the OpenAI docs. [1][2][3]
-
Models that support it: supported on recent GPT‑4o models (examples: gpt-4o-mini, gpt-4o-2024-08-06 / gpt-4o-mini-2024-07-18) and the structured-outputs feature is available alongside function-calling for models that support tools. Check the docs for exact model availability for your account/region. [1][2]
-
Supported JSON Schema features/types (subset):
- Primitives: string, number, boolean, integer
- Composites: object, array
- Enum and anyOf
- additionalProperties, required, nested objects/arrays are supported as shown in examples. (Structured Outputs supports a subset of JSON Schema — consult docs for exact constraints.) [1]
-
Behavior & edge cases:
- With strict: true the model’s output will match the schema or return a refusal token if it refuses; otherwise you may get valid JSON that doesn’t match your schema (JSON mode vs Structured Outputs distinction). Handle refusals and interrupted outputs (finish_reason) in your app. [1][2]
References
[1] OpenAI — Structured model outputs (docs, supported schemas & examples).
[2] OpenAI — Introducing Structured Outputs in the API (blog/announcement, examples).
[3] OpenAI — Migrate to the Responses API (notes on text.format vs response_format).
Add support for json_schema and expand response_format type definition.
The response_format field currently only supports { type: 'json_object' }, but OpenRouter's API also supports { type: 'json_schema', json_schema: {...} } for structured outputs. Update the type to:
```typescript
response_format?:
  | { type: 'json_object' }
  | { type: 'json_schema'; json_schema: { name: string; schema: Record<string, unknown>; strict?: boolean } }
```

This allows users to leverage OpenRouter's structured output capabilities with schema validation.
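For illustration, a request using the expanded union might look like the following (the schema mirrors the conceptual weather example from the OpenRouter docs quoted above; the surrounding option name is an assumption):

```typescript
// Hypothetical providerOptions using the expanded response_format type.
const providerOptions = {
  response_format: {
    type: 'json_schema' as const,
    json_schema: {
      name: 'weather',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          location: { type: 'string' },
          temperature: { type: 'number' },
        },
        required: ['location', 'temperature'],
      },
    },
  },
}
```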
🤖 Prompt for AI Agents
In packages/typescript/ai-openrouter/src/text/text-provider-options.ts around
lines 184 to 186, the response_format type only allows { type: 'json_object' }
but must also support OpenRouter's json_schema form; update the type to a union
that includes { type: 'json_object' } OR { type: 'json_schema'; json_schema: {
name: string; schema: Record<string, unknown>; strict?: boolean } } so callers
can provide schema name, the schema object, and optional strict flag for
structured outputs.
```typescript
  messages: Array<{
    role: 'user' | 'assistant' | 'system' | 'tool'
    content:
      | string
      | Array<{
          type: 'text' | 'image_url'
          text?: string
          image_url?: {
            url: string
            detail?: 'auto' | 'low' | 'high'
          }
        }>
```
Message content type is narrower than adapter support.
The InternalTextProviderOptions.messages content array only supports 'text' | 'image_url' types, but the adapter's convertContentParts method (in openrouter-adapter.ts) handles audio_url, video_url, and document_url as well. This type definition doesn't reflect the full capability.
Consider extending the content type to match the adapter's actual support:
```diff
   content:
     | string
     | Array<{
-        type: 'text' | 'image_url'
+        type: 'text' | 'image_url' | 'audio_url' | 'video_url' | 'document_url'
         text?: string
         image_url?: {
           url: string
           detail?: 'auto' | 'low' | 'high'
         }
+        audio_url?: { url: string }
+        video_url?: { url: string }
+        document_url?: { url: string }
       }>
```
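As a usage illustration of the extended union, a multimodal content array could look like this (URLs are placeholders; the field names follow the diff above):

```typescript
// Hypothetical content parts using the extended types.
const content = [
  { type: 'text' as const, text: 'Describe the image and summarize the audio clip.' },
  {
    type: 'image_url' as const,
    image_url: { url: 'https://example.com/photo.jpg', detail: 'low' as const },
  },
  { type: 'audio_url' as const, audio_url: { url: 'https://example.com/clip.mp3' } },
]
```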
In packages/typescript/ai-openrouter/src/text/text-provider-options.ts around
lines 283 to 294, the message content type is too narrow (only 'text' |
'image_url') while the adapter handles audio_url, video_url and document_url;
update the union to include 'audio_url' | 'video_url' | 'document_url' and add
corresponding optional fields (e.g., audio_url?: { url: string; detail?:
'auto'|'low'|'high' }, video_url?: { url: string; detail?: 'auto'|'low'|'high'
}, document_url?: { url: string; detail?: 'auto'|'low'|'high' }) so the type
matches convertContentParts; ensure the updated type is exported/used wherever
InternalTextProviderOptions.messages appears.
Alright, should I resolve the CodeRabbit suggestions now, or should I wait for that as well? |

🎯 Changes
This PR adds an OpenRouter adapter.
Since OpenRouter frequently adds new models, there is no typed list of models.
If anyone has suggestions for a better approach, I’d be happy to implement it.
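One option worth considering (a sketch only, not something this PR implements) is the `LiteralUnion` pattern, which keeps editor autocomplete for a few known model names while still accepting any string:

```typescript
// Sketch: suggest known model names without restricting the accepted set.
// The listed names are examples only.
type LiteralUnion<T extends string> = T | (string & {})

type KnownOpenRouterModel =
  | 'openai/gpt-4o-mini'
  | 'anthropic/claude-3.5-sonnet'

export type OpenRouterModel = LiteralUnion<KnownOpenRouterModel>

const model: OpenRouterModel = 'some-provider/brand-new-model' // still allowed
```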
I’m happy to make any changes if needed.
✅ Checklist
`pnpm run test:pr`.

🚀 Release Impact
Summary by CodeRabbit
Documentation
New Features
✏️ Tip: You can customize this high-level summary in your review settings.