
Conversation


@AlemTuzlak AlemTuzlak commented Dec 10, 2025

🎯 Changes

Split adapters into:

  • text
  • embed
  • summarize

Added a generate function. How do we handle this from here? Do we want to deprecate the old openai() approach?

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features

    • Tree-shakeable provider adapters (text, embed, summarize, image, tts, transcription, video) for smaller bundles; new adapter factories per provider.
    • Image, TTS, transcription and video generation support added; unified ai() entrypoint.
  • Breaking Changes

    • chat() → ai(); chat-related types/helpers renamed to text/* equivalents (ChatOptions → TextOptions, ChatCompletionChunk → TextCompletionChunk).
  • Provider imports changed (e.g. openai() → openaiText(), anthropic() → anthropicText(), gemini* / ollama* variants); see the before/after sketch after this summary.
  • Documentation

    • Extensive docs and guides updated/added (adapters, image, TTS, transcription, video, migration).
  • Tests

    • New type- and runtime tests validating adapter surfaces and behaviors.
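
As referenced above, a minimal before/after sketch of the migration. This assumes the deprecated chat() accepted the same options shape as the new ai(); the model and message values are placeholders, not taken from this PR:

```ts
// Before: monolithic adapter + chat() (now deprecated)
import { chat } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

const oldStream = chat({
  adapter: openai(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: [{ type: 'text', content: 'Hello!' }] }],
})

// After: tree-shakeable text adapter + unified ai() entrypoint
import { ai } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const newStream = ai({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: [{ type: 'text', content: 'Hello!' }] }],
})
```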


@AlemTuzlak AlemTuzlak requested review from a team, jherr and tannerlinsley December 10, 2025 14:19

nx-cloud bot commented Dec 10, 2025

View your CI Pipeline Execution ↗ for commit 2b56f66

| Command | Status | Duration | Result |
|---|---|---|---|
| `nx affected --targets=test:sherif,test:knip,tes...` | ✅ Succeeded | 6s | View ↗ |
| `nx run-many --targets=build --exclude=examples/**` | ✅ Succeeded | <1s | View ↗ |

☁️ Nx Cloud last updated this comment at 2025-12-13 17:19:58 UTC


pkg-pr-new bot commented Dec 10, 2025

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@137

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@137

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@137

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@137

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@137

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@137

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@137

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@137

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@137

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@137

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@137

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@137

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@137

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@137

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@137

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@137

commit: 2b56f66


coderabbitai bot commented Dec 11, 2025

Walkthrough

Splits monolithic provider adapters into tree‑shakeable adapters (Text/Embed/Summarize/Image/TTS/Transcription), renames the core entry from chat() → ai(), and adds an activity-based routing system with typed adapter base classes, schema converters, docs, tests, and example updates.
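
To make the new surface concrete, here is a minimal usage sketch based on the docs added in this PR; the model names are illustrative and the exact option shapes may differ:

```ts
import { ai } from '@tanstack/ai'
import { openaiEmbed, openaiSummarize } from '@tanstack/ai-openai'

// Embedding adapters resolve to an EmbeddingResult...
const { embeddings } = await ai({
  adapter: openaiEmbed(),
  model: 'text-embedding-3-small',
  input: ['Hello, world!'],
})

// ...while summarize adapters resolve to a SummarizationResult.
const { summary } = await ai({
  adapter: openaiSummarize(),
  model: 'gpt-4o-mini',
  text: 'Long text to summarize...',
})
```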

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Core activity & ai entry**<br>`packages/typescript/ai/src/ai.ts`, `packages/typescript/ai/src/index.ts`, `packages/typescript/ai/src/activities/*`, `packages/typescript/ai/src/activities/index.ts` | Adds unified `ai()` dispatcher and activity modules (text, embedding, summarize, image, video, tts, transcription). Introduces activity types, `activityMap` routing, and many new activity APIs and base adapter abstractions. |
| **Text adapter framework & types**<br>`packages/typescript/ai/src/activities/text/*`, `packages/typescript/ai/src/types.ts`, `packages/typescript/ai/src/base-adapter.ts` | Introduces the Text adapter abstraction (`BaseTextAdapter`), renames Chat→Text types (`ChatOptions`→`TextOptions`, chunks, stream type aliases), and adds structured-output support and the text engine/activities. |
| **Embedding / Summarize / Image / TTS / Video / Transcription frameworks**<br>`packages/typescript/ai/src/activities/{embedding,summarize,image,tts,video,transcription}/*` | Adds adapter interfaces, `Base*Adapter` classes, activity implementations, and export surfaces for each modality. |
| **Provider: OpenAI (tree-shakeable)**<br>`packages/typescript/ai-openai/src/adapters/*`, `packages/typescript/ai-openai/src/index.ts`, `packages/typescript/ai-openai/src/utils/*`, `packages/typescript/ai-openai/src/model-meta.ts` | Adds OpenAI-specific Text/Embed/Summarize/Image/TTS/Transcription/Video adapters, client utils, a schema converter, model exports, and factory functions (`create*`/`openai*`). |
| **Provider: Anthropic (tree-shakeable)**<br>`packages/typescript/ai-anthropic/src/adapters/*`, `packages/typescript/ai-anthropic/src/index.ts`, `packages/typescript/ai-anthropic/src/utils/*` | Adds Anthropic Text and Summarize adapters, a client util, a schema converter, exports, and factory helpers; deprecates the legacy monolith export. |
| **Provider: Gemini (tree-shakeable)**<br>`packages/typescript/ai-gemini/src/adapters/*`, `packages/typescript/ai-gemini/src/index.ts`, `packages/typescript/ai-gemini/src/utils/*`, `packages/typescript/ai-gemini/src/image/*` | Adds Gemini Text/Embed/Summarize/Image/TTS adapters, image provider options, a schema converter, a client util, model metadata, and factories. |
| **Provider: Ollama (tree-shakeable)**<br>`packages/typescript/ai-ollama/src/adapters/*`, `packages/typescript/ai-ollama/src/index.ts`, `packages/typescript/ai-ollama/src/utils/*` | Adds Ollama Text/Embed/Summarize adapters, a client util, a schema converter, exports, and factory helpers with host handling. |
| **Legacy adapters & compatibility**<br>`packages/typescript/ai-openai/src/openai-adapter.ts`, `packages/typescript/ai-anthropic/src/anthropic-adapter.ts`, `packages/typescript/ai-gemini/src/gemini-adapter.ts`, `packages/typescript/ai-ollama/src/ollama-adapter.ts` | Legacy monolithic adapters updated to accept `TextOptions` and annotated as deprecated; exports preserved for compatibility. |
| **Schema converters & utils**<br>`packages/typescript/ai-*/src/utils/schema-converter.ts`, `packages/typescript/ai-*/src/utils/client.ts` | Adds provider-specific Zod→JSON Schema converters and client helpers (`createClient`, `getApiKeyFromEnv`, `generateId`, `estimateTokens`). |
| **Stream & helpers**<br>`packages/typescript/ai/src/stream-to-response.ts` | Adds `streamToText` utility and migrates stream helpers to new locations. |
| **Docs & guides**<br>`docs/**`, `docs/guides/tree-shakeable-adapters.md`, `docs/adapters/*`, `docs/getting-started/*`, `docs/reference/*`, `docs/config.json` | Large documentation update: new tree-shakeable adapters guide, adapter-specific docs updated to new API names (e.g., `openaiText`), new guides (TTS, Transcription, Video), and reference renames from Chat→Text. |
| **Examples & tests**<br>`examples/**`, `packages/typescript/ai/tests/*`, `packages/typescript/ai-*/tests/*` | Updates examples to use `ai()` and the new provider factories (`openaiText`, `anthropicText`, ...), and adds/updates tests, including type-level tests and adapter-specific test suites. |
| **Package manifests & tool versions**<br>`packages/typescript/ai-*/package.json`, `package.json`, `.changeset/*`, `packages/typescript/ai/vite.config.ts` | Adds an `./adapters` export subpath, bumps vite devDependencies, adds `zod` as a peerDependency across provider packages, and adds a changeset note. |
| **Misc public-surface moves**<br>`packages/typescript/ai/src/index.ts`, various moved exports | Reorganizes public exports (tools, messages, stream processors, tool-definition moved under `activities/text`) and removes old chat-centric top-level helpers in favor of activity-based exports. |

Sequence Diagram

```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant ai as ai()
  participant Router as activityMap
  participant Activity as Activity (text/embedding/...)
  participant Adapter
  participant ProviderSDK
  Note over Client,ai: Client invokes ai() with adapter & options
  Client->>ai: ai({ adapter, model, ... })
  ai->>Router: resolve adapter.kind
  Router->>Activity: dispatch to activity (e.g., textActivity)
  Activity->>Adapter: call adapter method (chatStream / createEmbeddings / summarize / generateImages)
  Adapter->>ProviderSDK: call provider SDK (OpenAI/Gemini/Anthropic/Ollama)
  ProviderSDK-->>Adapter: stream/response
  Adapter-->>Activity: normalized result or stream chunks
  Activity-->>ai: result or AsyncIterable
  ai-->>Client: return/stream to caller
  Note right of ProviderSDK: Schema converters & utils used when building requests
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Areas to focus review on:

  • Type renames and propagation (Chat→Text): ensure all public types, overloads, and docs remain consistent.
  • ai() routing and activityMap: verify dispatch, error paths, and generics preserve type safety (a rough sketch of the dispatch pattern appears after this list).
  • Provider adapters and factories: confirm API-key/env handling, client creation, and streaming behavior.
  • Schema converters: validate Zod→provider JSON Schema transformations and null/undefined handling.
  • Examples/tests/docs: spot-check representative examples and tests to ensure imports and usages align with new API names.
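
The dispatch pattern under review is essentially a discriminated union keyed on adapter.kind. The following is a hypothetical sketch, not the actual generate.ts code; the kind values and method names are taken from the adapter interfaces described elsewhere in this review:

```ts
// Hypothetical sketch of kind-based routing (not the real implementation)
type AnyAdapter =
  | { kind: 'chat'; chatStream: (options: object) => AsyncIterable<unknown> }
  | { kind: 'embedding'; createEmbeddings: (options: object) => Promise<unknown> }
  | { kind: 'summarize'; summarize: (options: object) => Promise<unknown> }

function dispatch(adapter: AnyAdapter, options: object) {
  // The switch narrows `adapter` via the `kind` discriminator,
  // so each branch calls a method that is known to exist.
  switch (adapter.kind) {
    case 'chat':
      return adapter.chatStream(options) // streaming path
    case 'embedding':
      return adapter.createEmbeddings(options) // promise path
    case 'summarize':
      return adapter.summarize(options) // promise path
  }
}
```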

Poem

🐰 I hopped through code with eager paws,

split adapters into tiny boughs.
chat became ai, concise and neat,
small bundles now—light on their feet.
A carrot of docs, a hoppity cheer, new adapters bloom this festive year!

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ❓ Inconclusive | The PR description is incomplete. The author checked off completing the Contributing guide and running tests, but left the Release Impact section unchecked without addressing whether a changeset should be generated. | Address the Release Impact section: generate a changeset if published code is affected, or explicitly mark as docs/CI/dev-only if not applicable. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The PR title 'feat: split up adapters into functionalities and add generate function' clearly and concisely summarizes the main changes: adapter splitting and a new generate function. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 92.68%, which is sufficient. The required threshold is 80.00%. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/splitting-adapters-up



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

41-73: Fix README example imports to match actual package exports

The new tree-shakeable adapters section has two import path issues that will prevent the example from compiling:

  1. Function name mismatch: The example imports generate from '@tanstack/ai', but the root package only exports ai (not generate). It should be import { ai } from '@tanstack/ai'.

  2. Adapter path mismatch: The example shows import { openaiText } from '@tanstack/ai-openai/adapters', but adapters are re-exported at the package root. Should be import { openaiText } from '@tanstack/ai-openai'.

Update the code snippet to use:

```ts
import { openaiText } from '@tanstack/ai-openai'
import { ai } from '@tanstack/ai'
```
♻️ Duplicate comments (1)
packages/typescript/ai-openai/src/utils/client.ts (1)

45-47: Duplicate: generateId duplication.

This function is identical to implementations in Gemini, Anthropic, and Ollama adapters. See the comment on packages/typescript/ai-gemini/src/utils/client.ts (lines 41-43) for consolidation recommendations.

🧹 Nitpick comments (25)
packages/typescript/ai-anthropic/src/utils/client.ts (1)

43-45: Consider centralizing generateId to eliminate duplication.

This exact implementation is duplicated across multiple provider utilities (OpenAI, Gemini, Ollama) and the base adapters in @tanstack/ai. Consider extracting it to a shared location in @tanstack/ai and re-exporting from there.

Additionally, substring(7) may return fewer than 4 characters for small random values. Using substring(2, 9) or padEnd would ensure consistent length.

```diff
-export function generateId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+export function generateId(prefix: string): string {
+  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 9)}`
 }
```
packages/typescript/ai-anthropic/src/adapters/summarize.ts (3)

32-34: Redundant kind declaration.

BaseSummarizeAdapter already sets kind = 'summarize' as const. This redeclaration is unnecessary unless you intend to override with a different value.

```diff
 export class AnthropicSummarizeAdapter extends BaseSummarizeAdapter<
   typeof ANTHROPIC_MODELS,
   AnthropicSummarizeProviderOptions
 > {
-  readonly kind = 'summarize' as const
   readonly name = 'anthropic' as const
   readonly models = ANTHROPIC_MODELS
```

71-97: buildSummarizationPrompt is duplicated across providers.

This method is identical to the implementation in OpenAI.buildSummarizationPrompt (see openai-adapter.ts lines 171-197). Consider extracting it to a shared utility in @tanstack/ai to reduce duplication and ensure consistent behavior across providers.


38-41: Consider passing config to base class.

The constructor passes an empty object to super({}) while BaseSummarizeAdapter accepts SummarizeAdapterConfig with fields like timeout, maxRetries, and headers. If AnthropicSummarizeConfig expands to include these fields, they won't propagate to the base class.

```diff
 constructor(config: AnthropicSummarizeConfig) {
-  super({})
+  super(config)
   this.client = createAnthropicClient(config)
 }
```
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)

11-121: Solid base chat adapter design; consider wiring TModels into ChatOptions for extra safety

The ChatAdapterConfig, ChatAdapter generics, and BaseChatAdapter shape look coherent and extensible: the kind: 'chat' discriminator, type‑only _providerOptions/modalities/metadata fields, and chatStream contract are all aligned with how a unified generate/ai API will discriminate adapters.

If you ever want stricter coupling between supported models and chatStream calls, you could consider tightening the signature to something like:

```ts
chatStream: (
  options: ChatOptions<TModels[number], TProviderOptions>,
) => AsyncIterable<StreamChunk>
```

so options.model can’t drift from models. Not required, but it’s a low‑cost type‑safety win if it doesn’t fight ChatOptions’ current generic constraints.

packages/typescript/ai/src/adapters/index.ts (1)

1-33: Adapter index looks good; consider deriving AdapterKind from AnyAdapter

The base adapter re-exports and AnyAdapter union are a nice central place to hang adapter‑level types.

To reduce the chance of drift if you add another adapter kind later, you could derive AdapterKind from AnyAdapter instead of hard‑coding the union, e.g.:

```ts
export type AnyAdapter =
  | ChatAdapter<any, any, any, any, any>
  | EmbeddingAdapter<any, any>
  | SummarizeAdapter<any, any>

export type AdapterKind = AnyAdapter['kind']
```

Not required, but it keeps the discriminator and the union in sync automatically.

packages/typescript/ai-gemini/src/utils/client.ts (1)

41-43: Consider consolidating generateId across adapter packages.

This identical function is duplicated across Gemini, OpenAI, Anthropic, Ollama, and base adapter classes. Consider extracting it to a shared utility module (e.g., @tanstack/ai/utils) to reduce duplication and ease maintenance.

Example consolidation in packages/typescript/ai/src/utils/id.ts:

```ts
export function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}
```

Then import from the shared location in each adapter package.

packages/typescript/ai-ollama/src/utils/client.ts (1)

39-41: Consider removing the default parameter for consistency.

This generateId implementation has a default parameter 'msg', while the identical function in OpenAI, Anthropic, and Gemini adapters doesn't have a default. For consistency across the codebase, consider removing the default or applying it uniformly.

Note: This also relates to the code duplication mentioned in the Gemini utils review—consolidating generateId would naturally resolve this inconsistency.

packages/typescript/ai-ollama/src/adapters/text.ts (2)

254-260: Duplicate code in image source handling.

Both branches of the conditional (lines 255-259) execute identical code. The distinction between data and URL sources appears intended but isn't implemented.

Simplify by removing the redundant conditional:

```diff
           } else if (part.type === 'image') {
-            if (part.source.type === 'data') {
-              images.push(part.source.value)
-            } else {
-              images.push(part.source.value)
-            }
+            images.push(part.source.value)
           }
```

Or, if different handling was intended (e.g., prefixing data URIs), implement the distinction.


280-292: Silent failure on JSON parse error.

The empty catch block silently converts invalid JSON arguments to an empty object, which could mask malformed tool call data and cause downstream failures.

Consider logging or preserving the original string:

```diff
                   try {
                     parsedArguments = JSON.parse(
                       toolCall.function.arguments,
                     ) as Record<string, unknown>
                   } catch {
-                    parsedArguments = {}
+                    // Log or handle malformed arguments
+                    console.warn(`Failed to parse tool call arguments for ${toolCall.function.name}`)
+                    parsedArguments = { _raw: toolCall.function.arguments }
                   }
```
packages/typescript/ai-gemini/src/adapters/embed.ts (2)

74-86: Sequential API calls for multiple inputs may be inefficient.

Each input triggers a separate API call in sequence. For batches of inputs, this adds latency. Consider parallelizing with Promise.all if the Gemini API supports concurrent requests, or check if batch embedding is available.

```diff
-    for (const input of inputs) {
-      const response = await this.client.models.embedContent({
+    const responses = await Promise.all(
+      inputs.map((input) =>
+        this.client.models.embedContent({
-        model,
-        contents: [{ role: 'user', parts: [{ text: input }] }],
-        config: {
-          outputDimensionality: options.dimensions,
-        },
-      })
+          model,
+          contents: [{ role: 'user', parts: [{ text: input }] }],
+          config: {
+            outputDimensionality: options.dimensions,
+          },
+        }),
+      ),
+    )

-      if (response.embeddings?.[0]?.values) {
-        embeddings.push(response.embeddings[0].values)
+    for (const response of responses) {
+      if (response.embeddings?.[0]?.values) {
+        embeddings.push(response.embeddings[0].values)
       }
     }
```

88-96: Hardcoded usage tokens and inconsistent ID generation.

Usage tokens are hardcoded to 0, which doesn't reflect actual API consumption. Also, Date.now() is used directly instead of the generateId utility used in other adapters (e.g., Ollama summarize adapter uses generateId('sum')).

For consistency with other adapters:

```diff
+import { generateId } from '../utils'
+
     return {
-      id: `embed-${Date.now()}`,
+      id: generateId('embed'),
       model,
       embeddings,
       usage: {
-        promptTokens: 0,
-        totalTokens: 0,
+        promptTokens: 0, // TODO: Gemini API may provide token counts in response
+        totalTokens: 0,
       },
     }
```
packages/typescript/ai-openai/src/adapters/summarize.ts (1)

15-20: Provider options are defined but never used.

OpenAISummarizeProviderOptions defines temperature and maxTokens, but the summarize() method hardcodes temperature: 0.3 (line 53) and uses options.maxLength for max_tokens. Consider either removing these unused options or applying them in the API call.

packages/typescript/ai-gemini/src/adapters/summarize.ts (2)

50-53: Inconsistent inheritance pattern.

This adapter implements SummarizeAdapter directly, while OpenAISummarizeAdapter extends BaseSummarizeAdapter. For consistency and to leverage shared functionality (like the base generateId), consider extending BaseSummarizeAdapter instead.


26-40: Provider options are defined but not applied.

GeminiSummarizeProviderOptions defines generationConfig and safetySettings, but these are not passed to the generateContent call. Either use these options in the API request or remove them from the interface.

```diff
     const response = await this.client.models.generateContent({
       model,
       contents: [
         {
           role: 'user',
           parts: [
             { text: `Please summarize the following:\n\n${options.text}` },
           ],
         },
       ],
       config: {
         systemInstruction: systemPrompt,
+        ...options.providerOptions?.generationConfig,
       },
+      safetySettings: options.providerOptions?.safetySettings,
     })
```
packages/typescript/ai-openai/src/adapters/text.ts (3)

100-107: Remove or replace debug logging.

These console.error statements appear to be debugging artifacts. Consider removing them or using a proper logging abstraction that can be configured for different environments.

```diff
     } catch (error: unknown) {
       const err = error as Error
-      console.error('>>> chatStream: Fatal error during response creation <<<')
-      console.error('>>> Error message:', err.message)
-      console.error('>>> Error stack:', err.stack)
-      console.error('>>> Full error:', err)
       throw error
     }
```

380-400: Debug logging in catch block.

Similar to the earlier comment, the console.log should be removed or replaced with a configurable logging mechanism. Also note the unused eventTypeCounts referenced here.


348-351: Consider a more robust type check.

The type assertion (item as { type: string }).type works but is fragile. Consider using a type guard or checking for the property existence first.

```diff
-          const hasFunctionCalls = chunk.response.output.some(
-            (item: unknown) =>
-              (item as { type: string }).type === 'function_call',
-          )
+          const hasFunctionCalls = chunk.response.output.some(
+            (item) =>
+              item && typeof item === 'object' && 'type' in item && item.type === 'function_call',
+          )
```
packages/typescript/ai-ollama/src/adapters/embed.ts (2)

85-92: Sequential API calls may impact performance for batch embeddings.

Each input triggers a separate client.embeddings() call. If the Ollama API supports batching multiple prompts in a single request, consider leveraging that to reduce latency and overhead. Otherwise, for large input arrays, consider using Promise.all for concurrent requests (with appropriate rate limiting if needed).


100-100: Non-unique ID generation.

Date.now() can produce duplicate IDs for rapid successive calls. Consider using a UUID library or combining with a random suffix:

```diff
-      id: `embed-${Date.now()}`,
+      id: `embed-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`,
```
packages/typescript/ai/src/types.ts (1)

994-1037: Duplication with ChatStreamOptionsForModel.

ChatOptionsForChatAdapter (lines 994-1037) is structurally very similar to ChatStreamOptionsForModel (lines 887-937). The main difference is the adapter base type (ChatAdapter vs AIAdapter). Consider whether these could share a common base type or be unified to reduce maintenance overhead as the API evolves.

packages/typescript/ai-gemini/src/adapters/text.ts (1)

157-162: Silent error swallowing in argument merging.

The catch block silently replaces the existing args with the new args when JSON parsing fails, which could mask data corruption issues. Consider logging a warning:

```diff
             } catch {
+              // Log or track parse failure for debugging
               toolCallData.args =
                 typeof functionArgs === 'string'
                   ? functionArgs
                   : JSON.stringify(functionArgs)
             }
```
packages/typescript/ai/src/core/generate.ts (3)

495-497: Dead code - outputSchema is guaranteed to be defined.

This function is only called when chatOptions.outputSchema is truthy (line 347), and outputSchema is destructured from options. This check will never trigger.

```diff
-  // Validate against the schema
-  if (!outputSchema) {
-    throw new Error('outputSchema is required for structured output')
-  }
-
   const result = outputSchema.safeParse(parsed)
```

223-224: Missing newline between type definitions.

There's a missing newline separator before the GenerateReturnType comment, which reduces readability:

```diff
-      : never /** Infer the return type based on adapter kind, schema, and stream */
-type GenerateReturnType<
+      : never
+
+/** Infer the return type based on adapter kind, schema, and stream */
+type GenerateReturnType<
```

550-589: generateSummaryStream doesn't actually stream.

This function awaits the full summarize() result before yielding any chunks, so it provides a streaming interface but not streaming behavior. This may be intentional for API consistency, but consider documenting this limitation or implementing true streaming if the underlying adapter supports it.
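
In other words, the current shape amounts to a buffered wrapper, roughly like this sketch of the described behavior (not the actual code; names are hypothetical):

```ts
// Yields nothing until the whole summary has resolved
async function* bufferedSummaryStream(
  summarize: () => Promise<{ summary: string }>,
) {
  const result = await summarize() // full result awaited up front
  yield { type: 'content' as const, content: result.summary }
}
```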

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 049eb8a and 86fa305.

📒 Files selected for processing (40)
  • .changeset/two-bikes-kneel.md (1 hunks)
  • README.md (2 hunks)
  • docs/guides/tree-shakeable-adapters.md (1 hunks)
  • examples/ts-react-chat/src/routes/api.tanchat.ts (2 hunks)
  • packages/typescript/ai-anthropic/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/index.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/embed.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-gemini/src/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/model-meta.ts (1 hunks)
  • packages/typescript/ai-gemini/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-gemini/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/embed.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-ollama/src/index.ts (1 hunks)
  • packages/typescript/ai-ollama/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-ollama/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/embed.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/summarize.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-openai/src/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/openai-adapter.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/client.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/index.ts (1 hunks)
  • packages/typescript/ai/package.json (1 hunks)
  • packages/typescript/ai/src/adapters/base-chat-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/index.ts (1 hunks)
  • packages/typescript/ai/src/core/generate.ts (1 hunks)
  • packages/typescript/ai/src/index.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (2 hunks)
  • packages/typescript/ai/tests/generate-types.test-d.ts (1 hunks)
  • packages/typescript/ai/tests/generate.test.ts (1 hunks)
  • packages/typescript/ai/vite.config.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
packages/typescript/ai-gemini/src/model-meta.ts (1)
packages/typescript/ai-gemini/src/index.ts (2)
  • GeminiModels (38-38)
  • GEMINI_MODELS (37-37)
packages/typescript/ai/src/adapters/index.ts (3)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • ChatAdapter (30-70)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • EmbeddingAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • SummarizeAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai/tests/generate-types.test-d.ts (3)
packages/typescript/ai/src/adapters/index.ts (3)
  • BaseChatAdapter (7-7)
  • BaseEmbeddingAdapter (14-14)
  • BaseSummarizeAdapter (21-21)
packages/typescript/ai/src/types.ts (6)
  • ChatOptions (554-588)
  • StreamChunk (678-686)
  • EmbeddingOptions (722-726)
  • EmbeddingResult (728-736)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai/src/core/generate.ts (1)
  • ai (315-393)
packages/typescript/ai-gemini/src/utils/client.ts (7)
packages/typescript/ai-gemini/src/utils/index.ts (4)
  • GeminiClientConfig (5-5)
  • createGeminiClient (2-2)
  • getGeminiApiKeyFromEnv (4-4)
  • generateId (3-3)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-anthropic/src/utils/client.ts (1)
  • generateId (43-45)
packages/typescript/ai-ollama/src/utils/client.ts (1)
  • generateId (39-41)
packages/typescript/ai-openai/src/utils/client.ts (1)
  • generateId (45-47)
packages/typescript/ai-anthropic/src/utils/client.ts (6)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-gemini/src/utils/client.ts (1)
  • generateId (41-43)
packages/typescript/ai-ollama/src/utils/client.ts (1)
  • generateId (39-41)
packages/typescript/ai-openai/src/utils/client.ts (1)
  • generateId (45-47)
packages/typescript/ai-openai/src/adapters/embed.ts (6)
packages/typescript/ai-openai/src/utils/client.ts (4)
  • OpenAIClientConfig (3-7)
  • createOpenAIClient (12-18)
  • generateId (45-47)
  • getOpenAIApiKeyFromEnv (24-40)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_EMBEDDING_MODELS (1664-1668)
packages/typescript/ai/src/types.ts (2)
  • EmbeddingOptions (722-726)
  • EmbeddingResult (728-736)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-openai/src/openai-adapter.ts (2)
packages/typescript/ai-anthropic/src/text/text-provider-options.ts (1)
  • validateTextProviderOptions (198-204)
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
  • validateTextProviderOptions (320-325)
packages/typescript/ai-ollama/src/adapters/text.ts (4)
packages/typescript/ai-ollama/src/index.ts (5)
  • OllamaTextModels (8-8)
  • OllamaTextModel (12-12)
  • OllamaTextProviderOptions (13-13)
  • OllamaTextAdapter (7-7)
  • Ollama (49-49)
packages/typescript/ai-ollama/src/utils/index.ts (3)
  • createOllamaClient (2-2)
  • generateId (4-4)
  • getOllamaHostFromEnv (5-5)
packages/typescript/ai/src/types.ts (3)
  • StreamChunk (678-686)
  • ToolCall (68-75)
  • Tool (326-427)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai-gemini/src/adapters/summarize.ts (4)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (2)
  • SummarizeAdapter (22-41)
  • generateId (68-70)
packages/typescript/ai/src/types.ts (3)
  • SummarizeAdapter (1040-1040)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai-gemini/src/utils/client.ts (3)
  • createGeminiClient (10-14)
  • generateId (41-43)
  • getGeminiApiKeyFromEnv (20-36)
packages/typescript/ai-gemini/src/utils/index.ts (3)
  • createGeminiClient (2-2)
  • generateId (3-3)
  • getGeminiApiKeyFromEnv (4-4)
packages/typescript/ai-openai/src/utils/client.ts (6)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • generateId (119-121)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • generateId (66-68)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • generateId (68-70)
packages/typescript/ai-anthropic/src/utils/client.ts (1)
  • generateId (43-45)
packages/typescript/ai-gemini/src/utils/client.ts (1)
  • generateId (41-43)
packages/typescript/ai-ollama/src/utils/client.ts (1)
  • generateId (39-41)
packages/typescript/ai-ollama/src/utils/client.ts (3)
packages/typescript/ai-anthropic/src/utils/client.ts (1)
  • generateId (43-45)
packages/typescript/ai-gemini/src/utils/client.ts (1)
  • generateId (41-43)
packages/typescript/ai-openai/src/utils/client.ts (1)
  • generateId (45-47)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (2)
packages/typescript/ai/src/adapters/index.ts (3)
  • EmbeddingAdapterConfig (16-16)
  • EmbeddingAdapter (15-15)
  • BaseEmbeddingAdapter (14-14)
packages/typescript/ai/src/types.ts (3)
  • EmbeddingAdapter (1040-1040)
  • EmbeddingOptions (722-726)
  • EmbeddingResult (728-736)
packages/typescript/ai-anthropic/src/adapters/summarize.ts (4)
packages/typescript/ai-anthropic/src/utils/client.ts (3)
  • AnthropicClientConfig (3-5)
  • createAnthropicClient (10-16)
  • getAnthropicApiKeyFromEnv (22-38)
packages/typescript/ai-anthropic/src/utils/index.ts (3)
  • AnthropicClientConfig (5-5)
  • createAnthropicClient (2-2)
  • getAnthropicApiKeyFromEnv (4-4)
packages/typescript/ai-anthropic/src/model-meta.ts (1)
  • ANTHROPIC_MODELS (363-373)
packages/typescript/ai/src/types.ts (2)
  • SummarizationOptions (703-709)
  • SummarizationResult (711-720)
packages/typescript/ai/src/types.ts (4)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • ChatAdapter (30-70)
packages/typescript/ai/src/adapters/index.ts (3)
  • ChatAdapter (8-8)
  • EmbeddingAdapter (15-15)
  • SummarizeAdapter (22-22)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • EmbeddingAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • SummarizeAdapter (22-41)
packages/typescript/ai-openai/src/adapters/text.ts (1)
packages/typescript/ai-openai/src/utils/index.ts (4)
  • OpenAIClientConfig (5-5)
  • createOpenAIClient (2-2)
  • generateId (4-4)
  • getOpenAIApiKeyFromEnv (3-3)
🪛 LanguageTool
docs/guides/tree-shakeable-adapters.md

[style] ~173-~173: This phrase is redundant. Consider writing “eliminated”.
Context: ... your bundle - Unused adapter types are completely eliminated - Smaller bundles lead to faster load t...

(COMPLETELY_ANNIHILATE)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (53)
packages/typescript/ai/package.json (1)

20-23: No changes needed. The "./adapters" export path correctly re-exports all three adapter types (chat, embedding, summarize) through named exports in adapters/index.ts, enabling proper tree-shaking. The structure follows the established pattern and does not require separate sub-export paths.

Likely an incorrect or invalid review comment.

packages/typescript/ai/vite.config.ts (1)

32-36: New adapter entry point correctly configured for tree-shaking.

The addition of './src/adapters/index.ts' as a separate entry point is properly set up. The file exists and exports the three adapter types (chat, embedding, summarize) as intended. The package.json exports field correctly exposes the adapters entry point at "./adapters", enabling independent imports and tree-shaking of adapter functionality.

packages/typescript/ai-openai/src/openai-adapter.ts (1)

508-512: LGTM - validation now includes model context.

The expansion of validateTextProviderOptions to include model: options.model alongside input and providerOptions is appropriate. This enables model-specific validation rules (e.g., checking if certain provider options are compatible with the selected model).

packages/typescript/ai-openai/src/utils/index.ts (1)

1-6: LGTM - clean barrel export for OpenAI utilities.

The re-export pattern is appropriate for consolidating client utilities and enabling tree-shakeable imports.

packages/typescript/ai-anthropic/src/utils/index.ts (1)

1-6: LGTM - consistent with OpenAI utility barrel pattern.

The re-export structure mirrors the OpenAI package, maintaining consistency across provider adapters.

packages/typescript/ai-anthropic/src/utils/client.ts (2)

22-38: LGTM - environment detection for API key retrieval.

The fallback chain from globalThis.window.env to process.env handles both browser and Node.js environments appropriately. The error message is clear and actionable.


3-16: LGTM - clean client factory implementation.

The AnthropicClientConfig interface and createAnthropicClient factory follow the established pattern from other provider adapters.

packages/typescript/ai/src/adapters/base-summarize-adapter.ts (2)

22-41: LGTM - well-designed adapter interface with discriminator.

The kind: 'summarize' discriminator enables type-safe routing in the generate() function. The generic parameters for TModels and TProviderOptions provide good type inference for concrete implementations.


47-71: LGTM - abstract base class follows established adapter patterns.

The use of declare for the type-only _providerOptions is correct. The structure mirrors BaseChatAdapter and BaseEmbeddingAdapter, maintaining consistency across adapter types.

packages/typescript/ai-anthropic/src/adapters/summarize.ts (1)

103-118: LGTM - factory functions follow established patterns.

The createAnthropicSummarize and anthropicSummarize factories mirror the conventions used in other provider adapters, providing both explicit and environment-based API key options.

.changeset/two-bikes-kneel.md (1)

1-9: Changeset scope and message look appropriate

Minor bumps for the affected @tanstack/ai-* packages and the short description accurately reflect the adapter split for tree‑shaking. No adjustments needed.

packages/typescript/ai-gemini/src/model-meta.ts (1)

771-783: GeminiModels alias correctly captures the model name union

GeminiModels = (typeof GEMINI_MODELS)[number] is the right way to derive the string‑literal union from the model list and lines up with the GeminiTextModel re‑export pattern in index.ts. This should improve ergonomics without changing behavior.

packages/typescript/ai-ollama/src/utils/index.ts (1)

1-7: Ollama utils barrel export is clear and consistent

The utils index cleanly re-exports the client helpers and config type from ./client, matching the pattern used in other provider packages and making the public surface easier to consume. Looks good as-is.

packages/typescript/ai/src/index.ts (1)

4-12: No action required. The review comment's premise is incorrect.

The codebase consistently exports and documents the ai function. All JSDoc examples in core/generate.ts use import { ai }, and the root index.ts correctly exports ai from ./core/generate. There is no documentation (README or guide) claiming users should import generate—no such guide exists in the package, and the README contains no code examples. The API is already consistent.

Likely an incorrect or invalid review comment.

packages/typescript/ai-gemini/src/utils/index.ts (1)

1-6: LGTM!

Clean barrel export consolidating Gemini utilities for easy import.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

2-3: LGTM!

Import changes align with the new tree-shakeable adapter pattern.

packages/typescript/ai-gemini/src/utils/client.ts (3)

1-5: LGTM!

Clean interface definition for Gemini client configuration.


10-14: LGTM!

Clean factory function for creating Gemini client instances.


20-36: LGTM!

Good environment-aware API key retrieval with clear error messaging and support for both GOOGLE_API_KEY and GEMINI_API_KEY.

packages/typescript/ai/tests/generate.test.ts (1)

1-287: LGTM!

Comprehensive test suite covering all three adapter types (chat, embedding, summarize) with proper mocking and type safety checks. Well-structured and thorough.

packages/typescript/ai-ollama/src/utils/client.ts (4)

1-5: LGTM!

Clean configuration interface for Ollama client.


10-14: LGTM!

Good default host configuration for local Ollama instances.


20-34: LGTM!

Good environment-aware host resolution with sensible localhost fallback.


46-49: LGTM!

Reasonable token estimation heuristic for Ollama. The approximation is clearly documented in the comment.
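
For reference, a chars-per-token heuristic of this kind typically looks like the following hypothetical sketch; the actual estimateTokens in this PR may use a different ratio:

```ts
// Rough token estimate assuming ~4 characters per token (an assumption,
// not the PR's documented ratio)
export function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}
```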

packages/typescript/ai-openai/src/adapters/embed.ts (5)

1-27: LGTM!

Well-documented interfaces with clear API reference. Good use of base adapter patterns.


35-48: LGTM!

Clean adapter class structure following the base adapter pattern with proper typing.


50-66: LGTM!

Clean implementation of the embeddings method with proper response mapping and sensible model fallback.


81-86: LGTM!

Well-documented factory function with clear example usage.


111-116: LGTM!

Well-documented factory with automatic environment-based API key detection. The example clearly shows the intended usage pattern.

packages/typescript/ai-openai/src/utils/client.ts (3)

1-7: LGTM!

Clean configuration interface with appropriate optional fields for OpenAI SDK configuration.


12-18: LGTM!

Clean factory function properly initializing the OpenAI SDK client.


24-40: LGTM!

Good environment-aware API key retrieval with clear error messaging.

packages/typescript/ai/tests/generate-types.test-d.ts (1)

1-529: LGTM!

Excellent comprehensive type-level test suite. The tests thoroughly cover:

  • Return type inference for all adapter types
  • Model validation and constraints
  • Strict providerOptions typing
  • outputSchema handling with complex nested types
  • Summarize streaming behavior

The use of @ts-expect-error for negative test cases is appropriate and ensures type safety. Well-structured and thorough.

packages/typescript/ai-ollama/src/index.ts (1)

1-53: Well-structured tree-shakeable exports with proper deprecation notice.

The organization is clear with new adapters at the top and deprecated legacy exports clearly marked. The deprecation JSDoc provides helpful migration guidance.

packages/typescript/ai-anthropic/src/index.ts (1)

1-63: Clean export structure with appropriate provider-specific note.

The note on line 23 about Anthropic not supporting embeddings natively is helpful for users. The deprecation pattern is consistent with other adapters.

packages/typescript/ai-ollama/src/adapters/summarize.ts (2)

64-88: Adapter class structure looks good.

The class properly implements the SummarizeAdapter interface with appropriate readonly properties and flexible constructor accepting either host string or client instance.


152-167: Factory functions follow the established pattern.

createOllamaSummarize for explicit host and ollamaSummarize for environment-based configuration aligns with other adapters in the codebase.

packages/typescript/ai-ollama/src/adapters/text.ts (2)

125-132: Streaming implementation looks correct.

The chatStream method properly delegates to the Ollama client with stream: true and processes chunks via the generator.


309-328: Options mapping is clean.

Common options are correctly mapped to Ollama's expected format, with provider-specific options spread appropriately.

packages/typescript/ai-gemini/src/adapters/embed.ts (1)

39-62: Adapter class structure follows the established pattern.

The class properly implements EmbeddingAdapter with readonly properties and flexible constructor accepting either API key or client instance.

packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)

1-69: LGTM! Clean base abstraction for embedding adapters.

The interface and abstract class provide a well-structured foundation for provider-specific embedding adapters. The use of declare for _providerOptions correctly marks it as a type-only property without runtime overhead.

packages/typescript/ai-openai/src/adapters/summarize.ts (2)

90-92: Clarify maxLength semantics.

maxLength is used both as the API's max_tokens parameter (line 52) and in the prompt as a hint. The prompt says "tokens" but users may expect word or character counts. Consider clarifying the expected unit in the documentation or interface definition.


43-67: Good defensive handling of API response.

The optional chaining on response.choices[0]?.message.content and fallback values for usage tokens handle potential undefined responses gracefully.

packages/typescript/ai-openai/src/index.ts (1)

1-62: Well-organized export structure with clear deprecation path.

The separation between new tree-shakeable adapters and deprecated legacy exports is clear. The deprecation notice with migration guidance is helpful for consumers.

packages/typescript/ai-gemini/src/adapters/summarize.ts (1)

75-115: Summarize method implementation is correct.

The method properly builds the prompt, calls the Gemini API, extracts usage metadata, and returns a well-formed SummarizationResult.

packages/typescript/ai-openai/src/adapters/text.ts (2)

110-401: Comprehensive streaming implementation.

The stream processing handles various OpenAI response event types correctly, including content deltas, reasoning, tool calls, and errors. The deduplication logic using hasStreamedContentDeltas and hasStreamedReasoningDeltas flags appropriately prevents duplicate content emission.


450-537: Message conversion logic is well-structured.

The convertMessagesToInput method correctly handles different message roles (tool, assistant, user) and converts them to OpenAI's expected format. The tool call serialization and multimodal content handling are properly implemented.

packages/typescript/ai-ollama/src/adapters/embed.ts (1)

63-73: LGTM!

The constructor correctly handles both string host and pre-configured client patterns, providing flexibility for different use cases.

packages/typescript/ai-gemini/src/index.ts (1)

1-51: LGTM!

Clean organization with clear separation between new tree-shakeable adapters and deprecated legacy exports. The deprecation JSDoc provides helpful migration guidance to users.

packages/typescript/ai/src/types.ts (1)

970-989: LGTM!

Well-structured model extraction utilities that correctly infer model unions from each adapter kind, with a unified ExtractModelsFromAnyAdapter for polymorphic usage.

packages/typescript/ai-gemini/src/adapters/text.ts (1)

74-99: LGTM!

Good error handling pattern that catches exceptions and yields a structured error chunk with appropriate metadata.

packages/typescript/ai/src/core/generate.ts (1)

305-393: LGTM!

Well-designed unified ai() function with clean discriminated union routing. The type inference through generics ensures compile-time safety while the runtime switch handles dispatch correctly.

packages/typescript/ai-anthropic/src/adapters/text.ts (1)

520-528: The implementation of getAnthropicApiKeyFromEnv() already handles this correctly. The function explicitly throws a clear error with actionable guidance ("ANTHROPIC_API_KEY is required. Please set it in your environment variables or use the factory function with an explicit API key.") when the environment variable is missing, rather than passing undefined to the adapter. The JSDoc also documents this behavior with @throws. No changes needed.

Comment on lines +1 to +209
# Tree-Shakeable Adapters

TanStack AI provides tree-shakeable adapters that allow you to import only the functionality you need, resulting in smaller bundle sizes.

## Overview

Instead of importing a monolithic adapter that includes chat, embedding, and summarization capabilities all at once, you can now import only the specific functionality you need:

- **Text Adapters** - For chat and text generation
- **Embed Adapters** - For creating embeddings
- **Summarize Adapters** - For text summarization

## Installation

Each provider package (e.g., `@tanstack/ai-openai`, `@tanstack/ai-anthropic`) exports tree-shakeable adapters:

```ts
// Import only what you need
import { openaiText } from '@tanstack/ai-openai'
import { openaiEmbed } from '@tanstack/ai-openai'
import { openaiSummarize } from '@tanstack/ai-openai'
```

## Available Adapters

### OpenAI

```ts
import {
  openaiText, // Chat/text generation
  openaiEmbed, // Embeddings
  openaiSummarize, // Summarization
  createOpenAIText,
  createOpenAIEmbed,
  createOpenAISummarize,
} from '@tanstack/ai-openai'
```

### Anthropic

```ts
import {
  anthropicText, // Chat/text generation
  anthropicSummarize, // Summarization
  createAnthropicText,
  createAnthropicSummarize,
} from '@tanstack/ai-anthropic'
```

> Note: Anthropic does not support embeddings natively.

### Gemini

```ts
import {
  geminiText, // Chat/text generation
  geminiEmbed, // Embeddings
  geminiSummarize, // Summarization
  createGeminiText,
  createGeminiEmbed,
  createGeminiSummarize,
} from '@tanstack/ai-gemini'
```

### Ollama

```ts
import {
  ollamaText, // Chat/text generation
  ollamaEmbed, // Embeddings
  ollamaSummarize, // Summarization
  createOllamaText,
  createOllamaEmbed,
  createOllamaSummarize,
} from '@tanstack/ai-ollama'
```

## Usage

### Basic Usage

Each adapter type has two ways to create instances:

1. **Factory function** (recommended for quick setup):

```ts
import { openaiText } from '@tanstack/ai-openai'

const textAdapter = openaiText()

```

2. **Class constructor** (for more control):

```ts
import { createOpenAIText } from '@tanstack/ai-openai/adapters'

const textAdapter = createOpenAIText({
  apiKey: 'your-api-key',
  // additional configuration...
})
```

### Using the `generate` Function

The `generate` function provides a unified API that adapts based on the adapter type:

```ts
import { generate } from '@tanstack/ai'
import { openaiText, openaiEmbed, openaiSummarize } from '@tanstack/ai-openai/adapters'

// Chat generation - returns AsyncIterable<StreamChunk>
const chatResult = generate({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: [{ type: 'text', content: 'Hello!' }] }],
})

for await (const chunk of chatResult) {
  console.log(chunk)
}

// Embeddings - returns Promise<EmbeddingResult>
const embedResult = await generate({
  adapter: openaiEmbed(),
  model: 'text-embedding-3-small',
  input: ['Hello, world!'],
})

console.log(embedResult.embeddings)

// Summarization - returns Promise<SummarizationResult>
const summarizeResult = await generate({
  adapter: openaiSummarize(),
  model: 'gpt-4o-mini',
  text: 'Long text to summarize...',
})

console.log(summarizeResult.summary)
```

### Type Safety

Each adapter provides full type safety for its supported models and options:

```ts
import { openaiText, type OpenAITextModel } from '@tanstack/ai-openai'

const adapter = openaiText()

// TypeScript knows the exact models supported
const model: OpenAITextModel = 'gpt-4o' // ✓ Valid
const model2: OpenAITextModel = 'invalid' // ✗ Type error
```

## Migration from Monolithic Adapters

The legacy monolithic adapters are still available but deprecated:

```ts
// Legacy (deprecated)
import { openai } from '@tanstack/ai-openai'

// New tree-shakeable approach
import { openaiText, openaiEmbed } from '@tanstack/ai-openai/adapters'
```

## Bundle Size Benefits

Using tree-shakeable adapters means:

- Only the code you use is included in your bundle
- Unused adapter types are completely eliminated
- Smaller bundles lead to faster load times

For example, if you only need chat functionality:

```ts
// Only chat code is bundled
import { openaiText } from '@tanstack/ai-openai'
```

vs.

```ts
// All functionality is bundled (chat, embed, summarize)
import { openai } from '@tanstack/ai-openai'
```

## Adapter Types

Each adapter type implements a specific interface:

- `ChatAdapter` - Provides `chatStream()` method for streaming chat responses
- `EmbeddingAdapter` - Provides `createEmbeddings()` method for vector embeddings
- `SummarizeAdapter` - Provides `summarize()` method for text summarization

All adapters have a `kind` property that indicates their type:

```ts
const textAdapter = openaiText()
console.log(textAdapter.kind) // 'chat'

const embedAdapter = openaiEmbed()
console.log(embedAdapter.kind) // 'embedding'

const summarizeAdapter = openaiSummarize()
console.log(summarizeAdapter.kind) // 'summarize'
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
# Find the main @tanstack/ai package entry point
find . -name "package.json" -path "*/packages/ai/*" | head -5
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```shell
# Check the root index exports for @tanstack/ai
fd -e "index.ts" -e "index.js" . --path "*/packages/ai/*" | grep -E "(^|/)index\.(ts|js)$" | head -10
```

Repository: TanStack/ai

Length of output: 288


🏁 Script executed:

```shell
# Look at what's exported from the main package
rg "export.*generate|export.*ai" --type ts --type js --max-count=20 | grep -E "(packages/ai|core/generate)" | head -20
```

Repository: TanStack/ai

Length of output: 182


🏁 Script executed:

```shell
# Check package.json for @tanstack/ai to understand the entry point
cat packages/ai/package.json 2>/dev/null || find . -name "package.json" -path "*/@tanstack/ai*" -type f | head -3 | xargs cat
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```shell
# Find the main index file for @tanstack/ai
find packages/typescript/ai/src -name "index.ts" | head -5
```

Repository: TanStack/ai

Length of output: 178


🏁 Script executed:

```shell
# Read the main index file to see exports
cat packages/typescript/ai/src/index.ts
```

Repository: TanStack/ai

Length of output: 1967


🏁 Script executed:

```shell
# Also check for any re-export of generate or ai
rg "export.*from.*generate" packages/typescript/ai/src/index.ts
```

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

```shell
# Check the OpenAI package structure
find packages/typescript/ai-openai/src -name "index.ts" | head -5
```

Repository: TanStack/ai

Length of output: 195


🏁 Script executed:

```shell
# Read the main OpenAI index
cat packages/typescript/ai-openai/src/index.ts
```

Repository: TanStack/ai

Length of output: 1840


🏁 Script executed:

```shell
# Check if there's an adapters index
cat packages/typescript/ai-openai/src/adapters/index.ts 2>/dev/null || echo "No adapters index found"
```

Repository: TanStack/ai

Length of output: 78


🏁 Script executed:

```shell
# Check package.json exports for ai-openai
cat packages/typescript/ai-openai/package.json | grep -A 20 '"exports"'
```

Repository: TanStack/ai

Length of output: 521


🏁 Script executed:

```shell
# Find and read the tree-shakeable-adapters guide
find . -name "tree-shakeable-adapters.md" -type f
```

Repository: TanStack/ai

Length of output: 95


🏁 Script executed:

```shell
# Read the guide file
cat docs/guides/tree-shakeable-adapters.md 2>/dev/null | head -250
```

Repository: TanStack/ai

Length of output: 5084


Fix generate import, clarify adapter import paths, and correct misleading terminology

The guide has several issues that will prevent code samples from working correctly:

  1. generate is not exported from @tanstack/ai

    The code shows import { generate } from '@tanstack/ai', but the actual export is ai. Change to import { ai } from '@tanstack/ai' (or align the docs with whatever becomes the canonical function name).

  2. Adapter import paths are inconsistent and partially unsupported

    • "Installation" and "Available Adapters" sections import from @tanstack/ai-openai
    • "Basic Usage" (class constructor), "Using the generate Function", and "Migration from Monolithic Adapters" import from @tanstack/ai-openai/adapters

    The root package already re-exports all adapters, so the canonical path is @tanstack/ai-openai. The /adapters entrypoint is not officially exported in package.json. Use the root path consistently throughout, or if /adapters is intentional, update package.json and add documentation explaining the distinction.

  3. "Class constructor" is misleading

    In "Basic Usage", option 2 is labeled "Class constructor" but the example shows createOpenAIText({ /* config */ }), which is a factory function. Relabel as "Configured factory" or similar.

  4. Minor: tighten "completely eliminated"

    In "Bundle Size Benefits", change "Unused adapter types are completely eliminated" to "Unused adapter types are eliminated" for more direct phrasing.

🧰 Tools
🪛 LanguageTool

[style] ~173-~173: This phrase is redundant. Consider writing “eliminated”.
Context: ... your bundle - Unused adapter types are completely eliminated - Smaller bundles lead to faster load t...

(COMPLETELY_ANNIHILATE)

🤖 Prompt for AI Agents
docs/guides/tree-shakeable-adapters.md lines 1-209: update the docs so examples
use the correct top-level export and consistent adapter import path, relabel the
misleading "Class constructor" example, and tighten one phrase — change the
`generate` import to the actual exported name (`ai`) or whatever the library
exposes, replace all `/adapters` imports with the canonical root package import
(`@tanstack/ai-openai`) (or document and export `/adapters` if you intend to
keep it), rename "Class constructor" to "Configured factory" (or similar) since
createOpenAIText is a factory, and change "Unused adapter types are completely
eliminated" to "Unused adapter types are eliminated."

Comment on lines +15 to +20
export interface AnthropicSummarizeProviderOptions {
  /** Temperature for response generation (0-1) */
  temperature?: number
  /** Maximum tokens in the response */
  maxTokens?: number
}

⚠️ Potential issue | 🟡 Minor

AnthropicSummarizeProviderOptions is defined but not used.

The temperature and maxTokens fields are declared but never consumed in the summarize method. Line 51 hardcodes temperature: 0.3 instead of using provider options.

Either remove these unused options or wire them into the summarize implementation:

 async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
   const systemPrompt = this.buildSummarizationPrompt(options)
+  const providerOpts = options.providerOptions as AnthropicSummarizeProviderOptions | undefined

   const response = await this.client.messages.create({
     model: options.model,
     messages: [{ role: 'user', content: options.text }],
     system: systemPrompt,
-    max_tokens: options.maxLength || 500,
-    temperature: 0.3,
+    max_tokens: providerOpts?.maxTokens ?? options.maxLength ?? 500,
+    temperature: providerOpts?.temperature ?? 0.3,
     stream: false,
   })

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/adapters/summarize.ts around lines 15 to
20 and usage at ~51, the AnthropicSummarizeProviderOptions interface declares
temperature and maxTokens but those options are not used — the summarize call
hardcodes temperature: 0.3; to fix, accept and pass the options through: update
the adapter/provider to accept an AnthropicSummarizeProviderOptions parameter
(or read them from the instance config), use options.temperature with a sensible
default (e.g., 0.3) and options.maxTokens when constructing the request to the
Anthropic client, or remove the unused interface if you don’t intend to support
those parameters.

Comment on lines +105 to +117
} catch (error: unknown) {
  const err = error as Error & { status?: number; code?: string }
  yield {
    type: 'error',
    id: generateId(this.name),
    model: options.model,
    timestamp: Date.now(),
    error: {
      message: err.message || 'Unknown error occurred',
      code: err.code || String(err.status),
    },
  }
}

⚠️ Potential issue | 🟡 Minor

Error code may result in string "undefined".

If both err.code and err.status are undefined, String(err.status) produces "undefined" as the error code, which isn't meaningful for consumers.

       yield {
         type: 'error',
         id: generateId(this.name),
         model: options.model,
         timestamp: Date.now(),
         error: {
           message: err.message || 'Unknown error occurred',
-          code: err.code || String(err.status),
+          code: err.code ?? (err.status != null ? String(err.status) : 'unknown'),
         },
       }

Comment on lines +169 to +178
const requestParams: InternalTextProviderOptions = {
  model: options.model,
  max_tokens: maxTokens,
  temperature: options.options?.temperature,
  top_p: options.options?.topP,
  messages: formattedMessages,
  system: options.systemPrompts?.join('\n'),
  tools: tools,
  ...validProviderOptions,
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (outputs collapsed):

# Read the adapter under review
cat -n packages/typescript/ai-anthropic/src/adapters/text.ts | head -200

# Locate Anthropic test files and check system/systemPrompts coverage
fd -e spec.ts -e test.ts -e spec.js . --path "*anthropic*" --exec grep -l "systemPrompts\|system" {} \;
find . -path "*anthropic*" -type f \( -name "*.spec.ts" -o -name "*.test.ts" \) | head -20
cat -n ./packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts | grep -A 10 -B 5 "systemPrompts\|system"
cat -n ./packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts | sed -n '75,165p'
grep -n "systemPrompts" ./packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts

# Trace systemPrompts through the package and the core ChatOptions type
rg -n "systemPrompts" --type=ts packages/typescript/ai-anthropic/ -B 2 -A 2
rg -n "interface ChatOptions|type ChatOptions" --type=ts -A 15 | head -50
rg -n "systemPrompts" --type=ts packages/typescript/ai/src/adapters/ -B 3 -A 3 | head -80

# Compare against other providers' adapters and provider options
find packages/typescript -name "*.ts" -path "*adapters*" | grep -E "(openai|google|bedrock)" | head -5
rg -n "systemPrompts|system" packages/typescript/ai-openai/src/adapters/text.ts -B 3 -A 3 | head -80
rg -n "interface.*ProviderOptions|type.*ProviderOptions" packages/typescript/ai-openai/src/ -A 20 | grep -E "(instructions|system|interface|type)" | head -40
rg -n "interface ExternalTextProviderOptions|type ExternalTextProviderOptions" --type=ts -A 20

# Inspect the Anthropic provider options and the explicit 'system' key
cat -n packages/typescript/ai-anthropic/src/text/text-provider-options.ts | sed -n '1,180p'
rg -n "'system'" packages/typescript/ai-anthropic/src/adapters/text.ts -B 5 -A 5
git log --oneline -n 20 packages/typescript/ai-anthropic/src/adapters/text.ts


Spread of validProviderOptions can override systemPrompts.

The system field is set explicitly from options.systemPrompts?.join('\n') at line 175, but since 'system' is included in the validKeys list (line 140), it can be extracted from providerOptions and will override the joined systemPrompts when spread at line 177. This precedence is undocumented and lacks test coverage; if systemPrompts should always take precedence, reverse the spread order or exclude 'system' from validKeys. If provider-level override is intentional, add a comment and test case documenting this behavior.
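A minimal sketch of the "systemPrompts always wins" option, using the field names from the snippet above: spread the provider options first so the explicit fields take precedence.

```ts
const requestParams: InternalTextProviderOptions = {
  ...validProviderOptions, // spread first so explicit fields below win
  model: options.model,
  max_tokens: maxTokens,
  temperature: options.options?.temperature,
  top_p: options.options?.topP,
  messages: formattedMessages,
  system: options.systemPrompts?.join('\n'),
  tools: tools,
}
```

Note this also makes the other explicit fields (model, max_tokens, temperature, ...) take precedence over provider options, so it should only be applied if that precedence is intended.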

🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/adapters/text.ts around lines 169-178,
the spread of validProviderOptions after the explicit system assignment allows a
provider-level 'system' value to override options.systemPrompts, which is likely
unintended; either move the spread before the explicit system property so
options.systemPrompts?.join('\n') takes precedence, or remove 'system' from the
validKeys used to build validProviderOptions, or—if provider override is
intentional—add a clarifying comment and a unit test documenting this
precedence; update code and tests accordingly.

Comment on lines +349 to +358
if (msg.role === 'tool' && msg.toolCallId) {
  parts.push({
    functionResponse: {
      name: msg.toolCallId,
      response: {
        content: msg.content || '',
      },
    },
  })
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Gemini API functionResponse name field format tool calling

💡 Result:

Short answer: the functionResponse "name" is the functionDeclaration.name string (the tool/function identifier) and must match exactly the declared function/tool name (no spaces or special characters); the functionResponse payload is a JSON/Struct under "response". Example patterns from Gemini docs:

  • When sending a function result back to the model (Python SDK): types.Part.from_function_response(name=tool_call.name, response={"result": result}). [1]
  • JavaScript example: { name: tool_call.name, response: { result } } included as a user part: { role: 'user', parts: [{ functionResponse: function_response_part }] }. [1]
  • API/Vertex docs: functionResponse has fields name (string) and response (Struct JSON object) — the name must be the functionDeclaration.name and the response holds the function output. Also note thought_signature handling when the model provided one for the functionCall; you must return that signature in the same part when echoing results. [3][2]

References: [1] Google Gemini function-calling docs; [2] Gemini thought-signatures page; [3] Vertex AI GenerateContent functionCall/functionResponse reference.


🏁 Scripts executed (outputs collapsed):

# Get context around the code in question to understand the data structures
cd packages/typescript/ai-gemini/src/adapters && sed -n '340,370p' text.ts

# Search for tool-related definitions and patterns in the codebase
rg "toolCallId|functionName|toolCall" packages/typescript/ai-gemini/src --type ts -A 2 -B 2


Use the actual function name, not toolCallId, for functionResponse.name.

According to the Gemini API specification, the functionResponse.name field must be the functionDeclaration.name (the tool/function identifier), not a call ID. The current code uses msg.toolCallId, which violates the API contract and may cause tool result correlation failures. Ensure the tool message includes the actual function name so it can be passed to functionResponse.name.
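A sketch of the corrected mapping, assuming the tool message is extended to carry the declared function name (the toolName field is hypothetical and would need to be added to the message type):

```ts
if (msg.role === 'tool' && msg.toolCallId) {
  parts.push({
    functionResponse: {
      // Must be the functionDeclaration.name, not the call ID.
      name: msg.toolName, // hypothetical field on the tool message
      response: {
        content: msg.content || '',
      },
    },
  })
}
```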

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 349 to 358,
the code sets functionResponse.name to msg.toolCallId but must use the actual
function/tool identifier; change the assignment to use the tool/function name
(e.g., msg.toolName or msg.tool?.name or msg.functionDeclaration?.name) rather
than the call ID, preserving toolCallId separately if needed; ensure the message
construction passes that function name into functionResponse.name so it matches
the Gemini API contract.

Comment on lines +98 to +101
options: {
  temperature: 0.3,
  num_predict: options.maxLength ?? 500,
},

⚠️ Potential issue | 🟡 Minor

Semantic mismatch: maxLength used as both token limit and word limit.

num_predict limits output tokens (line 100), but the prompt instructs the model to keep the summary under maxLength words (line 136). Tokens and words are not equivalent—tokens are typically ~0.75 words on average. This inconsistency may cause unexpected truncation or overlong summaries.

Consider clarifying the semantics or using separate parameters:

       options: {
         temperature: 0.3,
-        num_predict: options.maxLength ?? 500,
+        // Rough estimate: ~1.5 tokens per word for safety margin
+        num_predict: options.maxLength ? Math.ceil(options.maxLength * 1.5) : 500,
       },
🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/adapters/summarize.ts around lines 98 to
101, the code uses options.maxLength as num_predict (token limit) while the
prompt asks for a word limit, causing a mismatch; either convert the word limit
to an estimated token limit before assigning num_predict (e.g., tokens =
Math.ceil((options.maxLength ?? 500) / 0.75)) or add a separate
options.maxTokens param and use that for num_predict, and update the prompt to
reference the same unit (tokens or words) so both the prompt and num_predict use
the same semantic unit.

Comment on lines +144 to +167
const handleToolCall = (toolCall: ToolCall): StreamChunk => {
  const actualToolCall = toolCall as ToolCall & {
    id: string
    function: { index: number }
  }
  return {
    type: 'tool_call',
    id: responseId,
    model: chunk.model,
    timestamp,
    toolCall: {
      type: 'function',
      id: actualToolCall.id,
      function: {
        name: actualToolCall.function.name || '',
        arguments:
          typeof actualToolCall.function.arguments === 'string'
            ? actualToolCall.function.arguments
            : JSON.stringify(actualToolCall.function.arguments),
      },
    },
    index: actualToolCall.function.index,
  }
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (outputs collapsed):

#!/bin/bash
# Find the ToolCall type definition and check imports in the Ollama adapter
rg -n "interface ToolCall|type ToolCall" --type=ts -B2 -A10
head -30 packages/typescript/ai-ollama/src/adapters/text.ts
rg -n "from.*ToolCall|import.*ToolCall" --type=ts packages/typescript/ai-ollama/

# Check the actual context around lines 144-167
sed -n '140,170p' packages/typescript/ai-ollama/src/adapters/text.ts
sed -n '144,170p' packages/typescript/ai-ollama/src/adapters/text.ts

# Look for Ollama type definitions in the repo and how ToolCall is used
rg "interface ToolCall|type ToolCall" packages/typescript/ai-ollama/ -B3 -A8
find . -name "*.d.ts" -o -name "ollama*" | grep -E "(ollama|types)" | head -20
rg "toolCall\." packages/typescript/ai-ollama/src/ -B2 -A2
rg -i "interface ToolCall|export type ToolCall" --type=ts --type=d.ts -B5 -A10 | head -100
rg "ToolCall" packages/typescript/ai-ollama/src/adapters/text.ts -B3 -A3

# Verify the installed Ollama package version
grep -A 5 "ollama" packages/typescript/ai-ollama/package.json
cd packages/typescript/ai-ollama && npm ls ollama 2>/dev/null | head -20


🌐 Web query:

Ollama ToolCall type definition JavaScript npm package v0.6.3

💡 Result:

ToolCall (from ollama v0.6.3 — src/interfaces.ts):

export interface ToolCall {
  function: {
    name: string;
    arguments: { [key: string]: any; };
  };
}

Source: ollama-js v0.6.3 interfaces.ts. [1]

References
[1] ollama/ollama-js@v0.6.3 — src/interfaces.ts (raw).


Ollama's ToolCall type lacks an id property—the type assertion masks a real issue.

Ollama v0.6.3's ToolCall interface only provides function.name and function.arguments, with no id or type properties. The type assertion on lines 145–148 bypasses TypeScript's type safety but doesn't guarantee id exists at runtime. Line 156 uses actualToolCall.id without a fallback, risking undefined access.

Add a guard or use responseId as the tool call identifier:

const handleToolCall = (toolCall: ToolCall): StreamChunk => {
  return {
    type: 'tool_call',
    id: responseId,
    model: chunk.model,
    timestamp,
    toolCall: {
      type: 'function',
-     id: actualToolCall.id,
+     id: responseId, // Use response ID instead of undefined toolCall.id

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/adapters/text.ts around lines 144 to 167,
the code unsafely asserts a ToolCall has an id and type and then uses
actualToolCall.id which may be undefined at runtime; replace the assertion with
runtime guards and fallbacks: check whether toolCall has an id and a
function.index before using them, use responseId as the toolCall id fallback
when actualToolCall.id is missing, ensure the returned toolCall.type is set to
'function' explicitly, and serialize function.arguments safely (with
JSON.stringify fallback) so no undefined properties are accessed.

let responseId: string | null = null
let model: string = options.model

const eventTypeCounts = new Map<string, number>()

⚠️ Potential issue | 🟡 Minor

Unused variable eventTypeCounts.

This map is declared but never populated with event type counts. Either implement the counting logic or remove this variable and its reference in the error logging (line 386).

-    const eventTypeCounts = new Map<string, number>()

And update the catch block accordingly:

       console.log(
         '[OpenAI Adapter] Stream ended with error. Event type summary:',
         {
           totalChunks: chunkCount,
-          eventTypes: Object.fromEntries(eventTypeCounts),
           error: err.message,
         },
       )
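If counting is the preferred path instead, a minimal sketch (assuming each processed stream event exposes a type string) is a single increment wherever events are handled:

```ts
// Hypothetical counting step inside the event-processing loop:
eventTypeCounts.set(event.type, (eventTypeCounts.get(event.type) ?? 0) + 1)
```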

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around line 129, the
Map<string, number> eventTypeCounts is declared but never populated or used;
either implement event counting where events are processed (increment counts for
each event.type before logging/processing) so the map reflects actual counts, or
remove the declaration and any references to it (including its interpolation in
the error log at line 386). Also update the catch block at/around line 386 to
stop referencing eventTypeCounts (use a fallback message or include only data
actually available) or ensure it reads from the populated map; keep the logging
consistent and avoid accessing an empty/uninitialized structure.

Comment on lines +566 to +571
// For base64 data, construct a data URI
return {
  type: 'input_image',
  image_url: part.source.value,
  detail: imageMetadata?.detail || 'auto',
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (outputs collapsed):

# Search for base64 source creation patterns and image source definitions
rg -n "type.*base64" --type ts -B5 -A5
rg -n "source.*value" packages/typescript/ai-openai/src --type ts -B3 -A3
fd -e ts packages/typescript/ai-openai/src | head -20

# Find the ContentPart / ContentPartSource definitions
rg -n "type.*data.*source" packages/typescript --type ts -B2 -A2
rg -n "source.*type.*data\|type.*data" packages/typescript --type ts -B3 -A3 | head -100
rg -n "ContentPart\|ImageSource" packages/typescript --type ts | grep -E "(type|interface) " | head -20
rg -n "ContentPart" packages/typescript --type ts | head -20
find packages -name "*.ts" -type f | xargs grep -l "type ContentPart\|interface ContentPart" 2>/dev/null
rg -n "source.*type.*:.*['\"]data['\"]" packages/typescript --type ts -B3 -A3 | head -80
cat -n packages/typescript/ai/src/types.ts | sed -n '95,175p'
cat -n packages/typescript/ai/src/message-converters.ts | head -100

# Inspect the image handling function and available image metadata
rg -n "ImageMetadata\|imageMetadata" packages/typescript/ai-openai/src/adapters/text.ts -B5 -A5 | head -80
cat -n packages/typescript/ai-openai/src/adapters/text.ts | sed -n '550,590p'
rg -n "data:image\|base64" packages/typescript/ai-openai/src --type ts -B2 -A2
rg -n "OpenAIImageMetadata\|imageMetadata" packages/typescript/ai-openai/src --type ts -B3 -A3
rg -n "ImageMetadata\|image.*metadata" packages/typescript/ai-openai/src/types.ts --type ts -B2 -A2
find packages/typescript/ai-openai/src -name "*types*" -o -name "*type*" | head -10
cat -n packages/typescript/ai-openai/src/message-types.ts | head -200
rg -n "media.*type\|mediaType" packages/typescript/ai-openai/src --type ts -B2 -A2

# Search tests, examples, and live tests for image/base64 handling
find packages/typescript/ai-openai -name "*.test.ts" -o -name "*.spec.ts" | xargs grep -l "image\|base64" 2>/dev/null
rg -n "input_image" packages/typescript --type ts -B5 -A5
find . -name "*example*" -o -name "*test*" | grep -i openai | head -10
cat packages/typescript/ai-openai/tests/openai-adapter.test.ts | head -300
rg -n "base64\|data:image" packages/typescript/ai-openai/tests --type ts -B3 -A3
find packages/typescript/ai-openai/live-tests -name "*.ts" | xargs cat 2>/dev/null | head -200

# Check the Responses API types and how other adapters build data URIs
rg -n "ResponseInputContent\|input_image" packages/typescript/ai-openai/src --type ts -B2 -A5 | head -150
rg -n "image_url\|file_data\|base64" packages/typescript/ai-openai/src/adapters/text.ts --type ts -B3 -A3
rg -n "data:image\|data:audio\|data:video" packages/typescript --type ts
rg -n "type.*base64" packages/typescript/ai-anthropic/src/adapters/text.ts -B5 -A10
rg -n "mediaType\|media_type" packages/typescript --type ts -B2 -A2 | grep -E "(image|audio|video)" -A2 -B2
rg -n "import.*OpenAI\|from.*openai" packages/typescript/ai-openai/src --type ts | head -10
rg -n "Responses\|ResponseInputContent" packages/typescript/ai-openai/src --type ts -B2 -A2 | head -80
rg -n "image_url\|ResponseInputImage" packages/typescript/ai-openai/src --type ts -B5 -A5
head -50 packages/typescript/ai-openai/src/openai-adapter.ts
head -50 packages/typescript/ai-openai/src/adapters/text.ts

# Look for documentation hints and the OpenAI SDK version
rg -n "platform.openai.com\|docs.*vision\|image.*format" packages/typescript/ai-openai/src --type ts -B2 -A2
cat packages/typescript/ai-openai/package.json | grep -A5 -B5 '"openai"'
rg -n "image_url.*format\|base64.*data.*uri\|data.*uri.*image" packages/typescript --type ts -i
rg -n "ImagePart\|image.*metadata\|mediaType" packages/typescript/ai/src --type ts -B3 -A3 | head -150
rg -n "image/\|media.*type\|jpeg\|png\|webp" packages/typescript/ai/src --type ts -B2 -A2


Base64 images need data URI format for OpenAI API.

The code passes part.source.value directly to image_url, but for source.type === 'data', this is raw base64 (per the ContentPartSource type definition). OpenAI's Responses API expects image_url to be a URL string or data URI like data:image/jpeg;base64,..., not raw base64. Without media type metadata available in OpenAIImageMetadata, construct a data URI with a sensible default format (e.g., data:image/jpeg;base64,${part.source.value}).
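A minimal sketch of the suggested fix, assuming image/jpeg as the default media type since OpenAIImageMetadata doesn't carry one:

```ts
if (part.source.type === 'data') {
  // Raw base64: wrap it in a data URI with an assumed default media type.
  return {
    type: 'input_image',
    image_url: `data:image/jpeg;base64,${part.source.value}`,
    detail: imageMetadata?.detail || 'auto',
  }
}
```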

🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around lines 566 to 571,
the code returns raw base64 in image_url when part.source.type === 'data' but
OpenAI expects a URL or a data URI; detect when source.type === 'data' and, if
so, prepend a sensible default media type to form a data URI (e.g.,
"data:image/jpeg;base64," + part.source.value) unless imageMetadata contains an
explicit media type to use; leave non-data sources unchanged and ensure the
final returned object uses that constructed data URI for image_url.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
examples/ts-react-chat/src/routes/api.tanchat.ts (1)

2-8: Remove top-level test = ai(openaiImage(...)) and associated imports

This const test = ai({ adapter: openaiImage(), ... }) is:

  • Unused in the route logic, and
  • For image adapters, side‑effectful: the ai() implementation immediately calls adapter.generateImages, so this will generate images at module load time for every server start / hot reload.

A very similar unused test block was flagged previously; this is effectively the same issue with a valid model instead of 'gpt-5'.

You can safely drop it and the now‑unused imports (ai, openaiImage, openaiSummarize, openaiText, z), e.g.:

-import { ai, chat, maxIterations, toStreamResponse } from '@tanstack/ai'
-import {
-  openai,
-  openaiImage,
-  openaiSummarize,
-  openaiText,
-} from '@tanstack/ai-openai'
+import { chat, maxIterations, toStreamResponse } from '@tanstack/ai'
+import { openai } from '@tanstack/ai-openai'
@@
-import z from 'zod'
@@
-const test = ai({
-  adapter: openaiImage(),
-  model: 'gpt-image-1',
-  prompt:
-    'A cute baby sea otter wearing a beret and glasses, sitting at a small cafe table, sipping a cappuccino, with a croissant on a plate. The background shows a picturesque Parisian street with the Eiffel Tower in the distance. The scene is bathed in the warm, golden light of a late afternoon.',
-  numberOfImages: 2,
-  size: 'auto',
-  providerOptions: {
-    quality: 'high',
-  },
-})
-

That keeps the example route focused on the chat flow without unexpected image API calls.

Also applies to: 12-12, 21-31

🧹 Nitpick comments (9)
packages/typescript/ai-gemini/src/model-meta.ts (1)

223-253: Gemini image model metadata and exports look consistent

The new image model entries (GEMINI_2_5_FLASH_IMAGE, GEMINI_2_FLASH_IMAGE, IMAGEN_* variants) and the GEMINI_IMAGE_MODELS / GeminiImageModels exports line up correctly and give the image adapter a clean, typed model surface. Only nit: you might consider moving GeminiImageModels below GEMINI_IMAGE_MODELS to avoid the forward reference and mirror the pattern used for GeminiModels, but that’s purely cosmetic.

Also applies to: 421-450, 517-607, 782-793

packages/typescript/ai/src/core/generate.ts (1)

24-281: Unified ai() routing and typing look correct

The GenerateAdapter/AnyGenerateAdapter unions, GenerateOptionsFor/GenerateReturnType, and the ai() overload cleanly discriminate on adapter.kind and produce the expected API shapes:

  • chat → stream or structured Promise based on outputSchema
  • embedding → Promise<EmbeddingResult>
  • summarize → streaming vs non‑streaming via stream
  • image → Promise<ImageGenerationResult>

The model/providerOptions extractor helpers (ChatProviderOptionsForModel, ImageProviderOptionsForModel, etc.) also give nice per‑model typing without leaking index‑signature maps. Aside from the summarize providerOptions wiring noted separately, this core looks well‑structured.

Also applies to: 365-400, 472-598, 664-677, 685-704
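A rough illustration of the per-kind routing described above (model ids are assumptions; top-level await assumed for brevity):

```ts
import { ai } from '@tanstack/ai'
import { openaiImage, openaiSummarize, openaiText } from '@tanstack/ai-openai'

// chat, no outputSchema -> streaming AsyncIterable
const chunks = ai({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user' as const, content: 'Hello!' }],
})

// summarize, stream not set -> Promise<SummarizationResult>
const summary = await ai({
  adapter: openaiSummarize(),
  model: 'gpt-4o-mini',
  text: 'Long text to summarize...',
})

// image -> Promise<ImageGenerationResult>
const images = await ai({
  adapter: openaiImage(),
  model: 'gpt-image-1',
  prompt: 'A cat wearing a hat',
})
```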

packages/typescript/ai-gemini/tests/image-adapter.test.ts (1)

115-160: Relax generateImages call expectation to tolerate extra config fields

The generateImages test asserts:

expect(mockGenerateImages).toHaveBeenCalledWith({
  model: 'imagen-3.0-generate-002',
  prompt: 'A cat wearing a hat',
  config: {
    numberOfImages: 1,
    aspectRatio: '1:1',
  },
})

But the adapter’s buildConfig currently also includes imageSize (and may gain more fields over time). With strict toHaveBeenCalledWith, any additional keys on config will cause this test to fail even though the call is still correct.

Consider making this more robust, e.g.:

expect(mockGenerateImages).toHaveBeenCalledWith({
  model: 'imagen-3.0-generate-002',
  prompt: 'A cat wearing a hat',
  config: expect.objectContaining({
    numberOfImages: 1,
    aspectRatio: '1:1',
  }),
})

This keeps the important contract assertions while allowing harmless additions to the request config.

packages/typescript/ai-openai/tests/image-adapter.test.ts (1)

135-187: Make generateImages request expectations less brittle

The tests currently assert the full images.generate call shape:

expect(mockGenerate).toHaveBeenCalledWith({
  model: 'gpt-image-1',
  prompt: 'A cat wearing a hat',
  n: 1,
  size: '1024x1024',
  stream: false,
})

This will start failing as soon as buildRequest adds legitimate fields (e.g., response_format, quality/style options).

You can keep the important guarantees while future‑proofing the test by loosening it slightly:

expect(mockGenerate).toHaveBeenCalledWith(
  expect.objectContaining({
    model: 'gpt-image-1',
    prompt: 'A cat wearing a hat',
    n: 1,
    size: '1024x1024',
    stream: false,
  }),
)

Same idea applies to the second test that only checks ID generation; the call shape doesn’t need to be pinned exactly there either.

Also applies to: 189-218

packages/typescript/ai-gemini/src/adapters/image.ts (1)

90-121: Minor cleanup in config/result shaping

Two small nits you might consider:

  • In buildConfig, the imageSize: size entry is dropped during serialization when size is undefined, so it's harmless, but the key could be omitted entirely when no size is set for slightly cleaner requests.
  • In transformResponse, you don’t need to explicitly set usage: undefined; omitting the property keeps the result shape cleaner and avoids consumers checking for an explicitly undefined field.

Example tweak:

  private buildConfig(
    options: ImageGenerationOptions<GeminiImageProviderOptions>,
  ): GenerateImagesConfig {
    const { size, numberOfImages, providerOptions } = options

-   return {
-     numberOfImages: numberOfImages ?? 1,
-     imageSize: size,
-     aspectRatio: size ? sizeToAspectRatio(size) : undefined,
-     ...providerOptions,
-   }
+   return {
+     numberOfImages: numberOfImages ?? 1,
+     ...(size && { imageSize: size, aspectRatio: sizeToAspectRatio(size) }),
+     ...providerOptions,
+   }
  }

  private transformResponse(
    model: string,
    response: GenerateImagesResponse,
  ): ImageGenerationResult {
    const images: Array<GeneratedImage> = (response.generatedImages ?? []).map(
      (item) => ({
        b64Json: item.image?.imageBytes,
        revisedPrompt: item.enhancedPrompt,
      }),
    )

    return {
      id: generateId(this.name),
      model,
      images,
-     usage: undefined,
    }
  }
packages/typescript/ai-openai/src/adapters/image.ts (1)

59-92: Double‑check size: 'auto' semantics across models

Right now generateImages relies on validateImageSize, which short‑circuits for size === 'auto', so 'auto' is effectively allowed for any model. If DALL·E 2/3 do not support 'auto' for size, these calls will only fail at the OpenAI API level instead of being caught by your validator.

A tighter approach (if the API requires it) would be to only treat 'auto' as special for the GPT‑Image models and run validation (and/or throw) for 'auto' on DALL·E:

export function validateImageSize(
  model: string,
  size: string | undefined,
): void {
-  if (!size || size === 'auto') return
+  if (!size) return
+
+  const isGptImageModel =
+    model === 'gpt-image-1' || model === 'gpt-image-1-mini'
+  if (size === 'auto' && isGptImageModel) return

Please verify this against the current OpenAI image docs before changing behavior.

packages/typescript/ai-openai/src/image/image-provider-options.ts (2)

213-237: Clarify validateImageSize behavior for 'auto' and per‑model sizes

The validator short‑circuits for size === 'auto', which means 'auto' is effectively accepted for all models even though only the GPT‑Image models include 'auto' in their documented size sets. If DALL·E 2/3 don’t support 'auto', this defeats the purpose of early validation.

Consider:

  • Restricting the 'auto' fast‑path to GPT‑Image models only, and
  • Potentially using OpenAIImageModelSizeByName (or a typed helper) instead of a raw Record<string, Array<string>> to keep the table aligned with the type aliases.

This keeps runtime validation in sync with the declared capabilities.


264-297: Optional: enforce documented background/output_format constraint

The JSDoc for background says that when 'transparent', output_format must be 'png' or 'webp', but validateBackground currently only guards that background is used with GPT‑Image models, not that the output format is compatible.

If you want runtime parity with the docs (especially for JS consumers or any usage), consider extending the validation input to include output_format and adding a simple check:

-interface ImageValidationOptions {
-  prompt: string
-  model: string
-  background?: 'transparent' | 'opaque' | 'auto' | null
-}
+interface ImageValidationOptions {
+  prompt: string
+  model: string
+  background?: 'transparent' | 'opaque' | 'auto' | null
+  output_format?: GptImageOutputFormat
+}

export const validateBackground = (options: ImageValidationOptions) => {
  if (options.background) {
    const supportedModels = ['gpt-image-1', 'gpt-image-1-mini']
    if (!supportedModels.includes(options.model)) {
      throw new Error(
        `The model ${options.model} does not support background option.`,
      )
    }
+
+    if (
+      options.background === 'transparent' &&
+      options.output_format &&
+      options.output_format !== 'png' &&
+      options.output_format !== 'webp'
+    ) {
+      throw new Error(
+        `Transparent background requires output_format 'png' or 'webp', got '${options.output_format}'.`,
+      )
+    }
  }
}
packages/typescript/ai/src/types.ts (1)

1044-1093: ChatOptionsForChatAdapter mirrors ChatStreamOptionsForModel; minor Omit nit

The ChatOptionsForChatAdapter helper matches the structure of ChatStreamOptionsForModel, constraining providerOptions by model and messages by model input modalities and metadata, which is exactly what you want for the new ChatAdapter abstraction.

One tiny nit: the Omit<ChatOptions, 'model' | 'providerOptions' | 'responseFormat' | 'messages'> still refers to 'responseFormat', while the current options use output?: ResponseFormat<...>. This is harmless (the extra key is ignored in Omit), but you could replace 'responseFormat' with 'output' for accuracy:

-    ? Omit<
-        ChatOptions,
-        'model' | 'providerOptions' | 'responseFormat' | 'messages'
-      > & {
+    ? Omit<ChatOptions, 'model' | 'providerOptions' | 'output' | 'messages'> & {
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86fa305 and 5c9703c.

📒 Files selected for processing (18)
  • README.md (1 hunks)
  • docs/guides/image-generation.md (1 hunks)
  • examples/ts-react-chat/src/routes/api.tanchat.ts (2 hunks)
  • packages/typescript/ai-gemini/src/adapters/image.ts (1 hunks)
  • packages/typescript/ai-gemini/src/image/image-provider-options.ts (1 hunks)
  • packages/typescript/ai-gemini/src/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/model-meta.ts (10 hunks)
  • packages/typescript/ai-gemini/tests/image-adapter.test.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/image.ts (1 hunks)
  • packages/typescript/ai-openai/src/image/image-provider-options.ts (2 hunks)
  • packages/typescript/ai-openai/src/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/model-meta.ts (5 hunks)
  • packages/typescript/ai-openai/tests/image-adapter.test.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-image-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/index.ts (1 hunks)
  • packages/typescript/ai/src/core/generate.ts (1 hunks)
  • packages/typescript/ai/src/index.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/guides/image-generation.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • README.md
  • packages/typescript/ai/src/adapters/index.ts
🧰 Additional context used
🧬 Code graph analysis (5)
packages/typescript/ai-openai/tests/image-adapter.test.ts (3)
packages/typescript/ai-openai/src/adapters/image.ts (2)
  • createOpenaiImage (137-142)
  • OpenAIImageAdapter (42-117)
packages/typescript/ai-openai/src/index.ts (2)
  • createOpenaiImage (35-35)
  • OpenAIImageAdapter (34-34)
packages/typescript/ai-openai/src/image/image-provider-options.ts (3)
  • validateImageSize (213-237)
  • validateNumberOfImages (242-262)
  • validatePrompt (275-297)
packages/typescript/ai-gemini/tests/image-adapter.test.ts (3)
packages/typescript/ai-gemini/src/adapters/image.ts (2)
  • createGeminiImage (142-147)
  • GeminiImageAdapter (48-122)
packages/typescript/ai-gemini/src/index.ts (2)
  • createGeminiImage (39-39)
  • GeminiImageAdapter (38-38)
packages/typescript/ai-gemini/src/image/image-provider-options.ts (4)
  • sizeToAspectRatio (180-185)
  • validateImageSize (191-206)
  • validateNumberOfImages (212-226)
  • validatePrompt (231-239)
packages/typescript/ai-openai/src/image/image-provider-options.ts (1)
packages/typescript/ai-openai/src/index.ts (2)
  • OpenAIImageProviderOptions (40-40)
  • OpenAIImageModelProviderOptionsByName (41-41)
examples/ts-react-chat/src/routes/api.tanchat.ts (4)
packages/typescript/ai/src/core/generate.ts (1)
  • ai (375-466)
packages/typescript/ai/src/index.ts (1)
  • ai (5-5)
packages/typescript/ai-openai/src/adapters/image.ts (1)
  • openaiImage (167-172)
packages/typescript/ai-openai/src/index.ts (1)
  • openaiImage (36-36)
packages/typescript/ai-openai/src/adapters/image.ts (5)
packages/typescript/ai/src/adapters/index.ts (1)
  • BaseImageAdapter (29-29)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_IMAGE_MODELS (1656-1661)
packages/typescript/ai-openai/src/image/image-provider-options.ts (6)
  • OpenAIImageProviderOptions (173-177)
  • OpenAIImageModelProviderOptionsByName (183-188)
  • OpenAIImageModelSizeByName (193-198)
  • validatePrompt (275-297)
  • validateImageSize (213-237)
  • validateNumberOfImages (242-262)
packages/typescript/ai/src/types.ts (3)
  • ImageGenerationOptions (746-759)
  • ImageGenerationResult (776-789)
  • GeneratedImage (764-771)
packages/typescript/ai/src/adapters/base-image-adapter.ts (1)
  • generateId (88-90)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (15)
packages/typescript/ai/src/adapters/base-image-adapter.ts (1)

1-91: BaseImageAdapter contract and generics look solid

The shared ImageAdapter interface and BaseImageAdapter implementation are well-shaped for the OpenAI/Gemini adapters (models, providerOptions, size maps, config, and kind: 'image'). The type-only declare fields are a good pattern for inference without runtime cost; no issues spotted.

packages/typescript/ai/src/index.ts (1)

4-13: Re-exporting ai and generate option types from the root is appropriate

The new export block cleanly exposes ai, AnyAdapter, GenerateAdapter, GenerateOptions, and the per-kind option aliases from ./core/generate without disturbing existing exports. This is the right place for the new unified entrypoint.

packages/typescript/ai-openai/src/model-meta.ts (1)

1250-1290: Image model metadata and OPENAI_IMAGE_MODELS export are aligned

Defining DALL_E_2 and DALL_E_3 as proper ModelMeta entries and exporting OPENAI_IMAGE_MODELS with GPT_IMAGE_1, GPT_IMAGE_1_MINI, dall-e-3, and dall-e-2 matches the adapter/tests expectations and cleanly separates image models from chat/embedding lists. No issues spotted here.

Also applies to: 1656-1661

packages/typescript/ai-gemini/src/adapters/image.ts (2)

31-60: Adapter generics and type integration look correct

GeminiImageAdapter correctly threads GEMINI_IMAGE_MODELS, GeminiImageProviderOptions, and the model→options/size maps into BaseImageAdapter, and the declared _modelProviderOptionsByName / _modelSizeByName fields give good type narrowing for consumers.


124-177: Factory helpers are consistent and ergonomic

createGeminiImage and geminiImage mirror the other adapters’ factory patterns (explicit key vs env lookup) and correctly compose GeminiImageConfig by omitting apiKey from the external config surface.

packages/typescript/ai-openai/src/adapters/image.ts (3)

31-58: Adapter wiring and client construction look solid

OpenAIImageAdapter cleanly wires OPENAI_IMAGE_MODELS, the provider options types, and the createOpenAIClient-based client. Class shape matches the other tree‑shakeable adapters, which keeps the surface consistent.


94-116: Response mapping matches the OpenAI images response shape

Mapping data entries to GeneratedImage (b64, url, revisedPrompt) and normalizing response.usage into the generic {inputTokens, outputTokens, totalTokens} shape looks correct and matches the rest of the SDK’s conventions.


119-172: Factory functions follow the established OpenAI adapter pattern

createOpenaiImage and openaiImage mirror the text/embed/summarize factories (explicit key vs env lookup, Omit<...,'apiKey'>), which should make migration straightforward for users.

packages/typescript/ai-openai/src/index.ts (1)

5-75: Export surface for tree‑shakeable adapters and legacy adapter looks well‑structured

The index cleanly groups the new text/embed/summarize/image adapters with their configs and provider options, and clearly marks the monolithic OpenAI export as deprecated. This should make the migration path obvious without breaking existing users.

packages/typescript/ai-gemini/src/index.ts (1)

5-91: Gemini index exports align well with the new adapter architecture

The file cleanly exposes the new text/embed/summarize/image adapters and model aliases, while keeping the monolithic GeminiAdapter under a clearly documented deprecated section. The symmetry with the OpenAI index is helpful for users switching providers.

packages/typescript/ai/src/types.ts (3)

738-789: Image generation core types match adapter usage

ImageGenerationOptions, GeneratedImage, and ImageGenerationResult line up with how the OpenAI and Gemini image adapters use them (model/prompt/numberOfImages/size/providerOptions in, images + optional usage out). Centralizing these here should make it easy to add other providers without duplicating shapes.
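For reference, a small consumption sketch of these shapes (field names as described above; the model id and root re-exports are assumptions):

```ts
import { openaiImage } from '@tanstack/ai-openai'
import type { ImageGenerationResult } from '@tanstack/ai'

const adapter = openaiImage()
const result: ImageGenerationResult = await adapter.generateImages({
  model: 'gpt-image-1', // assumed model id
  prompt: 'a lighthouse at dusk',
  numberOfImages: 2,
  size: '1024x1024',
})

for (const img of result.images) {
  // GeneratedImage carries a URL or base64 payload, plus an optional revised prompt.
  console.log(img.url ?? img.b64Json, img.revisedPrompt)
}
```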


1016-1043: Adapter model‑extraction helpers are consistent with existing patterns

ExtractModelsFromChatAdapter, ExtractModelsFromEmbeddingAdapter, and ExtractModelsFromSummarizeAdapter (plus ExtractModelsFromAnyAdapter) mirror the existing ExtractModelsFromAdapter style and correctly pull M[number] out of each adapter’s first generic. This will be handy for strongly‑typed higher‑level helpers that accept any of the new adapter types.
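As a hypothetical sketch of how such an extractor narrows a helper's model parameter (summarizeWith is invented for illustration, and the type imports assume root re-exports):

```ts
import type {
  ExtractModelsFromSummarizeAdapter,
  SummarizationResult,
  SummarizeAdapter,
} from '@tanstack/ai'

// `model` is constrained to exactly the models the passed adapter supports.
declare function summarizeWith<
  A extends SummarizeAdapter<ReadonlyArray<string>, object>,
>(
  adapter: A,
  model: ExtractModelsFromSummarizeAdapter<A>,
  text: string,
): Promise<SummarizationResult>
```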


1092-1093: Re‑exporting adapter types from ./adapters is useful for consumers

The final export type { ChatAdapter, EmbeddingAdapter, SummarizeAdapter, AnyAdapter } makes these new abstractions discoverable from the central @tanstack/ai types module, which is convenient for library users.

packages/typescript/ai-gemini/src/image/image-provider-options.ts (2)

29-153: Gemini image provider options and model maps look well‑designed

GeminiImageProviderOptions mirrors the GenerateImagesConfig surface (aspect ratio, safety, watermarking, compression, etc.), and the model→options/size maps (GeminiImageModelProviderOptionsByName, GeminiImageModelSizeByName) give you nice type‑level narrowing for adapters without over‑committing to specific models in this file.


191-239: Size, count, and prompt validators are reasonable defaults

validateImageSize’s use of sizeToAspectRatio with a clear error message, validateNumberOfImages clamping to 1–4, and validatePrompt’s trimmed non‑empty check give good early feedback before hitting the Gemini API. The comments about model‑specific max‑images provide context for the conservative maxImages = 4 choice.

Comment on lines +155 to +185
/**
 * Valid sizes for Gemini Imagen models
 * Gemini uses aspect ratios, but we map common WIDTHxHEIGHT formats to aspect ratios
 * These are approximate mappings based on common image dimensions
 */
export const GEMINI_SIZE_TO_ASPECT_RATIO: Record<string, GeminiAspectRatio> = {
  // Square
  '1024x1024': '1:1',
  '512x512': '1:1',
  // Landscape
  '1024x768': '4:3',
  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
  '1792x1024': '16:9',
  '1920x1080': '16:9',
  // Portrait
  '768x1024': '3:4',
  '1024x1536': '4:3', // Inverted
  '1024x1792': '9:16',
  '1080x1920': '9:16',
}

/**
 * Maps a WIDTHxHEIGHT size string to a Gemini aspect ratio
 * Returns undefined if the size cannot be mapped
 */
export function sizeToAspectRatio(
  size: string | undefined,
): GeminiAspectRatio | undefined {
  if (!size) return undefined
  return GEMINI_SIZE_TO_ASPECT_RATIO[size]
}

⚠️ Potential issue | 🟠 Major

Fix swapped aspect‑ratio mappings for 1536x1024 and 1024x1536

In GEMINI_SIZE_TO_ASPECT_RATIO, the mappings for 1536x1024 and 1024x1536 appear inverted:

  • 1536x1024 is a landscape resolution, but is currently mapped to '3:4' (portrait).
  • 1024x1536 is a portrait resolution, but is mapped to '4:3' (landscape).

This inversion will cause sizeToAspectRatio (and thus validateImageSize/buildConfig) to feed a mismatched aspect ratio to the API for those sizes.

A straightforward fix is to swap them:

  // Landscape
   '1024x768': '4:3',
-  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
+  '1536x1024': '4:3',
   '1792x1024': '16:9',
   '1920x1080': '16:9',
  // Portrait
   '768x1024': '3:4',
-  '1024x1536': '4:3', // Inverted
+  '1024x1536': '3:4',

You may also want to adjust the comments to reflect the corrected orientation.

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/image/image-provider-options.ts around
lines 155 to 185, the aspect-ratio mappings for '1536x1024' and '1024x1536' are
inverted (landscape mapped as portrait and vice versa); swap the values so
'1536x1024' maps to a landscape ratio ('4:3') and '1024x1536' maps to the
portrait ratio ('3:4'), and update the inline comments to reflect the correct
orientation for each entry.

Comment on lines 185 to 225
/** Options for summarize generation */
export interface GenerateSummarizeOptions<
  TAdapter extends SummarizeAdapter<ReadonlyArray<string>, object>,
  TModel extends SummarizeModels<TAdapter>,
  TStream extends boolean = false,
> extends GenerateBaseOptions<TAdapter & { kind: 'summarize' }, TModel> {
  text: string
  maxLength?: number
  style?: 'bullet-points' | 'paragraph' | 'concise'
  focus?: Array<string>
  providerOptions?: SummarizeProviderOptions<TAdapter>
  /**
   * Whether to stream the summarization result.
   * When true, returns an AsyncIterable<StreamChunk> for streaming output.
   * When false or not provided, returns a Promise<SummarizationResult>.
   *
   * @default false
   *
   * @example
   * ```ts
   * // Non-streaming (default)
   * const result = await ai({
   *   adapter: summarizeAdapter,
   *   model: 'summarize-v1',
   *   text: 'Long text to summarize...'
   * })
   * console.log(result.summary)
   *
   * // Streaming
   * for await (const chunk of ai({
   *   adapter: summarizeAdapter,
   *   model: 'summarize-v1',
   *   text: 'Long text to summarize...',
   *   stream: true
   * })) {
   *   console.log(chunk)
   * }
   * ```
   */
  stream?: TStream
}

⚠️ Potential issue | 🟠 Major

Summarize providerOptions are defined but never forwarded to adapters

GenerateSummarizeOptions exposes providerOptions?: SummarizeProviderOptions<TAdapter>, but both generateSummary and generateSummaryStream drop it when building SummarizationOptions. That means any provider‑specific summarize options passed to ai({...}) are ignored.

You likely want to plumb these through, e.g.:

 async function generateSummary(
   options: GenerateSummarizeOptions<
     SummarizeAdapter<ReadonlyArray<string>, object>,
     string,
     boolean
   >,
 ): Promise<SummarizationResult> {
-  const { adapter, model, text, maxLength, style, focus } = options
+  const { adapter, model, text, maxLength, style, focus, providerOptions } =
+    options
 
   const summarizeOptions: SummarizationOptions = {
     model,
     text,
     maxLength,
     style,
     focus,
+    providerOptions,
   }
 
   return adapter.summarize(summarizeOptions)
 }
 
 async function* generateSummaryStream(
   options: GenerateSummarizeOptions<
     SummarizeAdapter<ReadonlyArray<string>, object>,
     string,
     true
   >,
 ): AsyncIterable<StreamChunk> {
-  const { adapter, model, text, maxLength, style, focus } = options
+  const { adapter, model, text, maxLength, style, focus, providerOptions } =
+    options
 
   const summarizeOptions: SummarizationOptions = {
     model,
     text,
     maxLength,
     style,
     focus,
+    providerOptions,
   }

Without this, summarize adapters can’t receive their configured provider options through the new ai() entrypoint.

Also applies to: 599-617, 623-641

🤖 Prompt for AI Agents
In packages/typescript/ai/src/core/generate.ts around lines 185 to 225 (and also
apply the same fix at the other places noted: ~599-617 and ~623-641), the
GenerateSummarizeOptions includes providerOptions but those options are not
forwarded into the SummarizationOptions passed to adapters, so provider-specific
settings are dropped; update the code that builds the SummarizationOptions in
both generateSummary and generateSummaryStream to include the incoming
options.providerOptions (typed as SummarizeProviderOptions<TAdapter>) and pass
it through to the adapter calls, preserving the generic types and ensuring the
property name matches the adapter API so summarize adapters receive their
providerOptions.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

♻️ Duplicate comments (11)
packages/typescript/ai-gemini/src/adapters/text.ts (3)

325-332: Incorrect token count mapping—use candidatesTokenCount instead of thoughtsTokenCount.

This issue was flagged in a previous review and remains unresolved. thoughtsTokenCount represents internal reasoning tokens (chain-of-thought), not completion tokens. Use candidatesTokenCount for actual generated response tokens.

Apply this diff:

          usage: chunk.usageMetadata
            ? {
                promptTokens: chunk.usageMetadata.promptTokenCount ?? 0,
-               completionTokens: chunk.usageMetadata.thoughtsTokenCount ?? 0,
+               completionTokens: chunk.usageMetadata.candidatesTokenCount ?? 0,
                totalTokens: chunk.usageMetadata.totalTokenCount ?? 0,
              }
            : undefined,

341-365: Incorrect default MIME type for non-image content.

This issue was flagged in a previous review and remains unresolved. The fallback 'image/jpeg' is applied to audio, video, and document types, which will cause processing failures for non-image content.

Apply type-appropriate defaults for each media type:

      case 'image':
+     case 'audio':
+     case 'video':
+     case 'document': {
-     case 'audio':
-     case 'video':
-     case 'document': {
-       const metadata = part.metadata as
-         | GeminiDocumentMetadata
-         | GeminiImageMetadata
-         | GeminiVideoMetadata
-         | GeminiAudioMetadata
-         | undefined
+       let defaultMimeType: string
+       let metadata:
+         | GeminiDocumentMetadata
+         | GeminiImageMetadata
+         | GeminiVideoMetadata
+         | GeminiAudioMetadata
+         | undefined
+
+       switch (part.type) {
+         case 'image':
+           defaultMimeType = 'image/jpeg'
+           metadata = part.metadata as GeminiImageMetadata | undefined
+           break
+         case 'audio':
+           defaultMimeType = 'audio/mp3'
+           metadata = part.metadata as GeminiAudioMetadata | undefined
+           break
+         case 'video':
+           defaultMimeType = 'video/mp4'
+           metadata = part.metadata as GeminiVideoMetadata | undefined
+           break
+         case 'document':
+           defaultMimeType = 'application/pdf'
+           metadata = part.metadata as GeminiDocumentMetadata | undefined
+           break
+       }
+
        if (part.source.type === 'data') {
          return {
            inlineData: {
              data: part.source.value,
-             mimeType: metadata?.mimeType ?? 'image/jpeg',
+             mimeType: metadata?.mimeType ?? defaultMimeType,
            },
          }
        } else {
          return {
            fileData: {
              fileUri: part.source.value,
-             mimeType: metadata?.mimeType ?? 'image/jpeg',
+             mimeType: metadata?.mimeType ?? defaultMimeType,
            },
          }
        }
      }

417-426: Use function name instead of toolCallId for functionResponse.name.

This issue was flagged in a previous review and remains unresolved. Per the Gemini API specification, functionResponse.name must be the function/tool identifier (the declaration name), not a call ID.

The ModelMessage type for tool messages needs to include the function name. Assuming the message structure includes it (e.g., as msg.toolName or similar), apply this diff:

      if (msg.role === 'tool' && msg.toolCallId) {
        parts.push({
          functionResponse: {
-           name: msg.toolCallId,
+           name: msg.toolName, // or the appropriate property containing the function name
            response: {
              content: msg.content || '',
            },
          },
        })
      }

If the tool message doesn't currently include the function name, the message type definition must be updated to include it.

packages/typescript/ai/src/core/generate.ts (1)

542-560: Summarize providerOptions not forwarded - previously flagged.

This issue was already identified in a previous review. The providerOptions from GenerateSummarizeOptions is not being passed through to SummarizationOptions in both generateSummary and generateSummaryStream.

packages/typescript/ai-anthropic/src/adapters/text.ts (4)

109-121: Error code may produce string "undefined" - previously flagged.

This issue was already identified in a previous review. When both err.code and err.status are undefined, String(err.status) produces "undefined".


230-239: Spread of validProviderOptions can override systemPrompts - previously flagged.

This issue was already identified in a previous review. The 'system' key is in validKeys, so spreading validProviderOptions after setting system from systemPrompts can override it.


473-491: content_block_stop handler doesn't use event.index - previously flagged.

This issue was already identified in a previous review. The handler checks toolCallsMap.get(currentToolIndex) without verifying the stopped block is actually a tool_use block using event.index.


554-567: Duplicate error code issue - previously flagged.

This issue was already identified in a previous review. Same pattern as line 118 where err.code || String(err.status) can produce "undefined".

packages/typescript/ai-ollama/src/adapters/text.ts (1)

190-213: Ollama ToolCall lacks id property - previously flagged.

This issue was already identified in a previous review. Ollama v0.6.3's ToolCall interface doesn't include id or function.index properties. The type assertion masks this, but actualToolCall.id and actualToolCall.function.index will be undefined at runtime.
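
A minimal defensive sketch, assuming Ollama's ToolCall guarantees only function.name and function.arguments, that synthesizes an id instead of reading a field the SDK does not define:

```ts
// Shape reflecting only the fields Ollama v0.6.x actually returns.
interface OllamaToolCall {
  function: { name: string; arguments: Record<string, unknown> }
}

// Synthesize a unique id rather than relying on a non-existent `id` property.
function toToolCallChunk(call: OllamaToolCall, index: number) {
  return {
    id: `ollama-call-${Date.now()}-${index}`,
    name: call.function.name,
    arguments: JSON.stringify(call.function.arguments),
    index,
  }
}
```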

packages/typescript/ai-openai/src/adapters/text.ts (2)

210-210: Unused eventTypeCounts variable.

The eventTypeCounts map is declared but never populated. It's referenced in the error logging at line 467 but will always be empty. Either implement the counting logic or remove this variable and its reference.
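
If the counter is kept, the missing bookkeeping is a one-line increment per event (sketch, assuming each streamed event exposes a string type field):

```ts
const eventTypeCounts = new Map<string, number>()

function recordEvent(event: { type: string }): void {
  // Count events per type so the error log at the end has real data.
  eventTypeCounts.set(event.type, (eventTypeCounts.get(event.type) ?? 0) + 1)
}
```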


647-652: Base64 images still need data URI format.

The comment at line 647 states "For base64 data, construct a data URI" but the code returns part.source.value directly without the data URI prefix. OpenAI's API expects image_url to be a data URI like data:image/jpeg;base64,... for base64 data.

       // For base64 data, construct a data URI
+      const mediaType = part.metadata?.mediaType || 'image/jpeg'
       return {
         type: 'input_image',
-        image_url: part.source.value,
+        image_url: `data:${mediaType};base64,${part.source.value}`,
         detail: imageMetadata?.detail || 'auto',
       }
🧹 Nitpick comments (5)
packages/typescript/ai/src/core/generate.ts (2)

283-300: Missing newlines before type/comment blocks.

Lines 283 and 300 are missing newlines before the comment/type definition, which reduces readability. The never type ends directly before the next type definition without separation.

-        : never /** Infer the return type based on adapter kind, schema, and stream */
-type GenerateReturnType<
+        : never
+
+/** Infer the return type based on adapter kind, schema, and stream */
+type GenerateReturnType<

And similarly at line 300:

-        : never /**
+        : never
+
+/**

481-482: Avoid as any cast - loses type safety.

The chat(options as any) cast bypasses TypeScript's type checking entirely. Consider defining a more specific type or creating a proper mapping function.

 async function* generateChat(
   options: GenerateChatOptions<
     ChatAdapter<ReadonlyArray<string>, object, any, any, any>,
     string,
     z.ZodType | undefined
   >,
 ): AsyncIterable<StreamChunk> {
-  yield* chat(options as any)
+  yield* options.adapter.chatStream({
+    model: options.model,
+    messages: options.messages,
+    systemPrompts: options.systemPrompts,
+    tools: options.tools,
+    options: options.options,
+    providerOptions: options.providerOptions,
+    abortController: options.abortController,
+    agentLoopStrategy: options.agentLoopStrategy,
+    conversationId: options.conversationId,
+  })
 }
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)

159-161: Consider using crypto.randomUUID() for more robust ID generation.

Math.random() has limited entropy and can produce collisions in high-volume scenarios. For a base adapter that may be used across many requests, crypto.randomUUID() provides better uniqueness guarantees.

   protected generateId(): string {
-    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+    return `${this.name}-${Date.now()}-${crypto.randomUUID().slice(0, 8)}`
   }

Note: If browser compatibility without crypto API is needed, the current implementation is acceptable.

packages/typescript/ai-ollama/src/adapters/text.ts (1)

300-307: Redundant conditional branches for image source handling.

Both branches of the conditional do the same thing - push part.source.value to images. This can be simplified.

           } else if (part.type === 'image') {
-            if (part.source.type === 'data') {
-              images.push(part.source.value)
-            } else {
-              images.push(part.source.value)
-            }
+            images.push(part.source.value)
           }
packages/typescript/ai-openai/src/adapters/text.ts (1)

104-111: Consider reducing verbose error logging in production.

The error logging outputs full stack traces and error objects to console.error. While useful for debugging, this could be noisy in production environments. Consider using a logging abstraction with configurable levels or reducing the verbosity.

     } catch (error: unknown) {
       const err = error as Error
-      console.error('>>> chatStream: Fatal error during response creation <<<')
-      console.error('>>> Error message:', err.message)
-      console.error('>>> Error stack:', err.stack)
-      console.error('>>> Full error:', err)
+      // Log error details for debugging
+      console.error('[OpenAI Adapter] chatStream error:', err.message)
       throw error
     }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5c9703c and 14c4297.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (11)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/image.ts (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai/src/adapters/base-chat-adapter.ts (1 hunks)
  • packages/typescript/ai/src/adapters/index.ts (1 hunks)
  • packages/typescript/ai/src/core/generate.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (4 hunks)
  • packages/typescript/ai/tests/generate-types.test-d.ts (1 hunks)
  • packages/typescript/ai/tests/generate.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai/tests/generate.test.ts
🧰 Additional context used
🧬 Code graph analysis (2)
packages/typescript/ai-gemini/src/adapters/text.ts (2)
packages/typescript/ai-gemini/src/utils/client.ts (4)
  • GeminiClientConfig (3-5)
  • createGeminiClient (10-14)
  • generateId (41-43)
  • getGeminiApiKeyFromEnv (20-36)
packages/typescript/ai-gemini/src/model-meta.ts (3)
  • GEMINI_MODELS (771-780)
  • GeminiChatModelProviderOptionsByName (815-864)
  • GeminiModelInputModalitiesByName (879-891)
packages/typescript/ai/src/core/generate.ts (6)
packages/typescript/ai/src/index.ts (5)
  • GenerateAdapter (7-7)
  • ai (5-5)
  • chatOptions (39-39)
  • chat (1-1)
  • convertZodToJsonSchema (26-26)
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)
  • ChatAdapter (50-102)
packages/typescript/ai/src/types.ts (11)
  • ChatAdapter (1100-1100)
  • EmbeddingAdapter (1100-1100)
  • SummarizeAdapter (1100-1100)
  • ModelMessage (226-237)
  • ChatOptions (554-595)
  • StreamChunk (685-693)
  • EmbeddingResult (735-743)
  • SummarizationResult (718-727)
  • ImageGenerationResult (783-796)
  • EmbeddingOptions (729-733)
  • SummarizationOptions (710-716)
packages/typescript/ai/src/adapters/base-embedding-adapter.ts (1)
  • EmbeddingAdapter (22-41)
packages/typescript/ai/src/adapters/base-summarize-adapter.ts (1)
  • SummarizeAdapter (22-41)
packages/typescript/ai/src/adapters/base-image-adapter.ts (1)
  • ImageAdapter (25-52)
🔇 Additional comments (21)
packages/typescript/ai-gemini/src/adapters/image.ts (5)

1-29: LGTM!

Imports are well-organized with proper separation of type imports using import type. Dependencies are correctly structured.


31-66: Clean adapter structure.

The class definition follows the established adapter pattern correctly. The empty GeminiImageConfig interface extending GeminiClientConfig provides a good extension point for future Gemini-specific configuration. The use of declare for type-narrowing properties is an appropriate TypeScript pattern.


68-88: Well-structured method with proper validation flow.

The validate-build-call-transform pattern is clean and maintainable. Input validation before the API call correctly prevents unnecessary network requests for invalid inputs.


141-176: LGTM!

The factory functions provide a clean API with both explicit key and environment-based instantiation patterns. The JSDoc comments with examples are helpful for consumers.


103-120: No changes needed. The GeneratedImage interface already defines b64Json as an optional property (b64Json?: string), which correctly accommodates the potentially undefined value from item.image?.imageBytes.

packages/typescript/ai-gemini/src/adapters/text.ts (1)

435-454: Verify config spread order allows intentional overrides.

The providerOpts.generationConfig is spread after individual temperature, topP, and maxOutputTokens settings, allowing provider options to override common options. This may be intentional, but ensure this precedence aligns with the expected behavior.

If common options should take precedence, apply this diff:

      config: {
-       ...providerOpts,
-       temperature: options.options?.temperature,
-       topP: options.options?.topP,
-       maxOutputTokens: options.options?.maxTokens,
-       systemInstruction: options.systemPrompts?.join('\n'),
        ...((providerOpts as Record<string, unknown>).generationConfig as
          | Record<string, unknown>
          | undefined),
+       ...providerOpts,
+       temperature: options.options?.temperature,
+       topP: options.options?.topP,
+       maxOutputTokens: options.options?.maxTokens,
+       systemInstruction: options.systemPrompts?.join('\n'),
        tools: convertToolsToProviderFormat(options.tools),
      },
packages/typescript/ai/src/adapters/base-chat-adapter.ts (1)

1-102: Well-structured adapter interface with comprehensive generics.

The ChatAdapter interface provides excellent type-safety with its five generic parameters covering models, provider options, model-specific options, input modalities, and message metadata. The use of declare for type-only properties is a clean pattern for inference without runtime overhead.
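
A reduced sketch of the declare pattern (names are illustrative, not the actual interface):

```ts
// A `declare` field carries type information for inference but emits no runtime code.
class ExampleAdapter<TModels extends ReadonlyArray<string>> {
  declare _models: TModels // type-only marker, never initialized

  constructor(readonly supported: TModels) {}
}

const adapter = new ExampleAdapter(['model-a', 'model-b'] as const)
type Models = (typeof adapter)['_models'][number] // 'model-a' | 'model-b'
```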

packages/typescript/ai/src/adapters/index.ts (1)

1-44: Clean barrel export structure.

The index file provides a well-organized export surface with clear section comments. The AdapterKind and AnyAdapter union types offer convenient aggregations for consumers.

packages/typescript/ai-ollama/src/adapters/text.ts (1)

105-127: Solid adapter initialization with flexible client injection.

The constructor accepts either a host string or a pre-configured Ollama client, which is good for testing and flexibility. The default model fallback to 'llama3' is reasonable.
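
A simplified sketch of that constructor shape (the default host is the standard local Ollama endpoint; details may differ from the actual adapter):

```ts
import { Ollama } from 'ollama'

class OllamaAdapterSketch {
  private client: Ollama

  // Accept either a host string or a pre-built client (handy for tests).
  constructor(hostOrClient: string | Ollama = 'http://localhost:11434') {
    this.client =
      typeof hostOrClient === 'string'
        ? new Ollama({ host: hostOrClient })
        : hostOrClient
  }
}
```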

packages/typescript/ai/tests/generate-types.test-d.ts (4)

48-117: Well-structured mock adapters for type testing.

The mock adapter implementations correctly extend the base adapters with strict generics and provide minimal implementations sufficient for type-level testing. The empty async generator for chatStream is acceptable since vitest's type tests (test-d.ts files) only perform type checking without runtime execution.


119-334: Comprehensive type inference test coverage.

The tests thoroughly validate type inference across all adapter categories, including model validation, strict provider options enforcement, and cross-adapter option restrictions. The use of @ts-expect-error annotations correctly verifies that TypeScript rejects invalid configurations.


336-340: Acceptable workaround for type-test module imports.

The require('zod') pattern with the typeof import('zod') cast is a valid workaround for type-level tests to avoid module hoisting issues while maintaining proper type inference. The ESLint disable comment appropriately acknowledges this deviation.
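
The pattern in isolation:

```ts
// Load zod at runtime while keeping its static types for inference.
// eslint-disable-next-line @typescript-eslint/no-require-imports
const { z } = require('zod') as typeof import('zod')

const schema = z.object({ name: z.string() })
type Inferred = z.infer<typeof schema> // { name: string }
```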


483-541: Complete streaming behavior type coverage.

The tests correctly validate that the return type changes based on the stream option for summarize adapters, and that the stream option is properly rejected for chat and embedding adapters.
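
Illustrative shape of such a type test (aiLike is a stand-in signature, not the real ai()):

```ts
import { expectTypeOf } from 'vitest'

declare function aiLike<TStream extends boolean = false>(options: {
  stream?: TStream
}): TStream extends true ? AsyncIterable<unknown> : Promise<{ summary: string }>

// Without the flag, the call resolves to a Promise...
expectTypeOf(aiLike({})).toEqualTypeOf<Promise<{ summary: string }>>()
// ...while stream: true flips the return type to an async iterable.
expectTypeOf(aiLike({ stream: true })).toEqualTypeOf<AsyncIterable<unknown>>()
```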

packages/typescript/ai-openai/src/adapters/text.ts (5)

53-74: Well-structured adapter class definition.

The adapter properly extends BaseChatAdapter with comprehensive generic parameters for model constraints, provider options, and metadata typing. The constructor correctly delegates client creation.


118-168: Solid structured output implementation.

The method correctly uses OpenAI's native JSON Schema response format with strict: true for reliable structured output. Error handling appropriately truncates long content in error messages for readability.
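
For reference, a minimal sketch of strict JSON-schema output using OpenAI's Chat Completions shape (the adapter here targets the Responses API, so treat exact field placement as illustrative):

```ts
import OpenAI from 'openai'

const client = new OpenAI()

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Extract: Jane, 32' }],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'user_info',
      strict: true, // provider enforces schema conformance
      schema: {
        type: 'object',
        properties: { name: { type: 'string' }, age: { type: 'number' } },
        required: ['name', 'age'],
        additionalProperties: false,
      },
    },
  },
})
```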


256-288: Good handling of response lifecycle events.

The code properly handles response created/incomplete/failed events, resets streaming state appropriately, and yields error chunks when the response contains errors or incomplete details.


461-481: Error chunk properly yielded after stream failure.

The catch block correctly yields an error chunk to notify consumers of stream failures. Note: The eventTypeCounts reference at line 467 is part of the unused variable issue noted earlier.


710-757: Well-documented factory functions.

The factory functions provide clear APIs for both explicit API key usage and environment variable detection. The JSDoc documentation with examples is helpful for developers.

packages/typescript/ai/src/types.ts (3)

1030-1097: LGTM! Tree-shakeable adapter type utilities added.

The new type utilities (ExtractModelsFromChatAdapter, ExtractModelsFromEmbeddingAdapter, ExtractModelsFromSummarizeAdapter, ExtractModelsFromAnyAdapter, and ChatOptionsForChatAdapter) provide proper type inference for the new tree-shakeable adapter architecture.

Note: There's intentional duplication with existing types like ChatStreamOptionsForModel (lines 947-997), which makes sense for migration purposes. Consider adding a comment explaining this is part of the adapter architecture evolution.
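
Conceptually, each extraction utility reduces to an infer over the adapter's first generic (simplified sketch, not the exact definitions):

```ts
interface ChatAdapterLike<TModels extends ReadonlyArray<string>> {
  models: TModels
}

// Pull the union of supported model names back out of an adapter type.
type ExtractModels<TAdapter> =
  TAdapter extends ChatAdapterLike<infer TModels extends ReadonlyArray<string>>
    ? TModels[number]
    : never

declare const adapter: ChatAdapterLike<['gpt-4o', 'gpt-4o-mini']>
type Models = ExtractModels<typeof adapter> // 'gpt-4o' | 'gpt-4o-mini'
```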


1100-1100: LGTM! Adapter types re-exported for public API.

Re-exporting the adapter types from the ./adapters module makes them available as part of the public API surface, which aligns with the tree-shakeable architecture goals mentioned in the PR objectives.


4-9: All four adapter types (AnyAdapter, ChatAdapter, EmbeddingAdapter, SummarizeAdapter) are properly exported from the ./adapters module. The import statement is correct and valid.

Comment on lines +201 to +202
const toolCallId =
functionCall.name || `call_${Date.now()}_${nextToolIndex}`

⚠️ Potential issue | 🔴 Critical

Tool call ID collision: function name is not unique across multiple calls.

Using functionCall.name directly as the toolCallId causes collisions when the same function is called multiple times in one stream. Each tool call must have a unique identifier.

Apply this diff to always generate unique IDs:

            const toolCallId =
-             functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+             `call_${Date.now()}_${nextToolIndex}`

Make the same change at line 272 for the UNEXPECTED_TOOL_CALL branch:

                const toolCallId =
-                 functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+                 `call_${Date.now()}_${nextToolIndex}`

Also applies to: 271-272

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 201-202 (and
also update the UNEXPECTED_TOOL_CALL branch at lines 271-272), the current
assignment const toolCallId = functionCall.name ||
`call_${Date.now()}_${nextToolIndex}` can collide because functionCall.name is
reused; change it to always produce a unique id by appending a timestamp and
index to the name (e.g. use the name if present but concat
`_${Date.now()}_${nextToolIndex}`, or if no name use the existing `call_`
prefix), and apply the exact same unique-id construction at the
UNEXPECTED_TOOL_CALL location so every tool call id is guaranteed unique.

Comment on lines +250 to +261
} else if (chunk.data) {
accumulatedContent += chunk.data
yield {
type: 'content',
id: generateId(this.name),
model,
timestamp,
delta: chunk.data,
content: accumulatedContent,
role: 'assistant',
}
}

🛠️ Refactor suggestion | 🟠 Major

Remove dead code handling non-existent chunk.data property.

The chunk.data property does not exist on GenerateContentResponse in the @google/genai types. This branch will never execute.

Apply this diff to remove the dead code:

        }
-     } else if (chunk.data) {
-       accumulatedContent += chunk.data
-       yield {
-         type: 'content',
-         id: generateId(this.name),
-         model,
-         timestamp,
-         delta: chunk.data,
-         content: accumulatedContent,
-         role: 'assistant',
-       }
      }

      if (chunk.candidates?.[0]?.finishReason) {
🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 250 to 261,
there's a dead code branch checking for chunk.data which does not exist on
GenerateContentResponse; remove that entire else-if block (the branch that
checks chunk.data and yields content) so the function only handles valid chunk
shapes returned by @google/genai, and ensure surrounding formatting/flow remains
correct after deletion.

Comment on lines +393 to +406
let parsedArgs: Record<string, unknown> = {}
try {
parsedArgs = toolCall.function.arguments
? (JSON.parse(toolCall.function.arguments) as Record<
string,
unknown
>)
: {}
} catch {
parsedArgs = toolCall.function.arguments as unknown as Record<
string,
unknown
>
}

⚠️ Potential issue | 🟠 Major

Unsafe type casting after failed JSON parse.

When JSON parsing fails, the code casts toolCall.function.arguments to Record<string, unknown>, but if arguments is a string (which it likely is if parsing failed), this cast is incorrect and could cause runtime errors when Gemini processes the malformed args object.

Consider logging a warning or throwing an error instead:

          let parsedArgs: Record<string, unknown> = {}
          try {
            parsedArgs = toolCall.function.arguments
              ? (JSON.parse(toolCall.function.arguments) as Record<
                  string,
                  unknown
                >)
              : {}
          } catch {
-           parsedArgs = toolCall.function.arguments as unknown as Record<
-             string,
-             unknown
-           >
+           console.warn(
+             `Failed to parse tool call arguments for ${toolCall.function.name}:`,
+             toolCall.function.arguments,
+           )
+           // Skip this tool call or use empty args
+           parsedArgs = {}
          }
🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/text.ts around lines 393 to 406,
the catch block unsafely casts toolCall.function.arguments to Record<string,
unknown> after a JSON.parse failure (likely leaving a string) which can break
downstream code; change the catch to handle the parse failure safely by: detect
the runtime type of toolCall.function.arguments, if it's a string log a warning
(or record a structured error) and either leave parsedArgs as an empty object or
attempt a safer recovery (e.g., wrap the raw string under a named field like {
raw: "<original-string>" }); alternatively throw a descriptive error if
malformed args must be fatal — ensure the chosen behavior is logged and does not
perform an invalid type cast.

Comment on lines 355 to 374
private mapCommonOptionsToOllama(options: ChatOptions): ChatRequest {
const model = options.model || this.defaultModel
const providerOptions = options.providerOptions as
| OllamaTextProviderOptions
| undefined

const ollamaOptions = {
temperature: options.options?.temperature,
top_p: options.options?.topP,
num_predict: options.options?.maxTokens,
...providerOptions,
}

return {
model,
options: ollamaOptions,
messages: this.formatMessages(options.messages),
tools: this.convertToolsToOllamaFormat(options.tools),
}
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check Ollama Message type for system role
rg -n "role.*system" --type=ts packages/typescript/ai-ollama/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the full text adapter implementation and related types
fd "text.ts" packages/typescript/ai-ollama/src/adapters/ | head -20

Repository: TanStack/ai

Length of output: 105


🏁 Script executed:

#!/bin/bash
# Search for systemPrompts usage in adapters to compare with other adapters
rg -n "systemPrompts" --type=ts packages/typescript/ai-*/src/adapters/

Repository: TanStack/ai

Length of output: 388


🏁 Script executed:

#!/bin/bash
# Check the ChatRequest and ChatOptions interfaces
rg -n "interface ChatRequest|type ChatRequest|interface ChatOptions|type ChatOptions" --type=ts packages/typescript/ai-ollama/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for Ollama message and chat type definitions
rg -n "type.*Message|interface.*Message" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Check the full text.ts file to see how messages are formatted and what Ollama expects
wc -l packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 109


🏁 Script executed:

#!/bin/bash
# Search for formatMessages implementation
rg -n "formatMessages" --type=ts packages/typescript/ai-ollama/src/adapters/text.ts -A 10

Repository: TanStack/ai

Length of output: 819


🏁 Script executed:

#!/bin/bash
# Check Ollama's ChatRequest and related types
rg -n "ChatRequest|OllamaMessage" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 633


🏁 Script executed:

#!/bin/bash
# Check Message type definition
rg -n "export.*Message|type Message" --type=ts packages/typescript/ai-ollama/src/message-types.ts -A 5

Repository: TanStack/ai

Length of output: 247


🏁 Script executed:

#!/bin/bash
# Get the Message type/interface definition more thoroughly
sed -n '1,100p' packages/typescript/ai-ollama/src/message-types.ts

Repository: TanStack/ai

Length of output: 1611


🏁 Script executed:

#!/bin/bash
# Check ChatRequest type definition - look in message-types.ts
rg -n "ChatRequest" --type=ts packages/typescript/ai-ollama/src/message-types.ts -B 2 -A 10

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the ollama-adapter.ts mapCommonOptionsToOllama for comparison
sed -n '456,490p' packages/typescript/ai-ollama/src/ollama-adapter.ts

Repository: TanStack/ai

Length of output: 1039


🏁 Script executed:

#!/bin/bash
# Check imports in text.ts to see where ChatRequest and Message come from
head -30 packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 756


🏁 Script executed:

#!/bin/bash
# Search for ChatRequest type definition across all ai packages
rg -n "type ChatRequest|interface ChatRequest" --type=ts packages/typescript/

Repository: TanStack/ai

Length of output: 136


🏁 Script executed:

#!/bin/bash
# Search for Message type definition (likely in ai-core or ollama)
rg -n "export.*type Message|export.*interface Message" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for ollama package information - check if it's in package.json and what version
rg -n "ollama" packages/typescript/ai-ollama/package.json

Repository: TanStack/ai

Length of output: 185


🏁 Script executed:

#!/bin/bash
# Try to find if there are type definitions or documentation about ollama Message type
find packages/typescript/ai-ollama -name "*.d.ts" -o -name "*.ts" | xargs grep -l "Message.*role" 2>/dev/null | head -5

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check how other adapters (like openai, anthropic) handle system prompts vs what Ollama does
rg -n "system" --type=ts packages/typescript/ai-anthropic/src/adapters/text.ts -B 2 -A 2 | head -40

Repository: TanStack/ai

Length of output: 379


🌐 Web query:

Ollama API system message role support chat request

💡 Result:

Yes — Ollama's chat API accepts a "system" role in the messages array. The message object’s role can be "system", "user", "assistant" (and "tool" for tool-enabled flows). See the API docs for the POST /api/chat endpoint. [1][2]

Sources:

  • Ollama docs — Generate a chat message (POST /api/chat). [1]
  • Ollama API overview (messages/role description). [2]

🏁 Script executed:

#!/bin/bash
# Check if there are other adapters that prepend system messages to the messages array
rg -n "role.*system|'system'" --type=ts packages/typescript/ai-*/src/adapters/text.ts -B 3 -A 3

Repository: TanStack/ai

Length of output: 611


🏁 Script executed:

#!/bin/bash
# Look for Ollama types more carefully - check if it's defined elsewhere or in the ollama package
rg -n "system.*role|role.*system" --type=ts packages/typescript/ai-ollama/src/ -i

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the exact structure being returned in mapCommonOptionsToOllama - the full function including return
sed -n '355,374p' packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 662


🏁 Script executed:

#!/bin/bash
# Search for any existing system prompt handling in the Ollama adapter
rg -n "systemPrompts" --type=ts packages/typescript/ai-ollama/src/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Look at the full formatMessages implementation to understand message structure
sed -n '291,320p' packages/typescript/ai-ollama/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 1046


🏁 Script executed:

#!/bin/bash
# Search for how messages are constructed elsewhere in the codebase
rg -n "role:" --type=ts packages/typescript/ai-ollama/src/ -B 1 -A 1

Repository: TanStack/ai

Length of output: 1075


🏁 Script executed:

#!/bin/bash
# Check the ChatResponse or message examples in the adapter
rg -n "role\|content" --type=ts packages/typescript/ai-ollama/src/adapters/text.ts | head -20

Repository: TanStack/ai

Length of output: 37


systemPrompts not forwarded to Ollama.

The mapCommonOptionsToOllama function doesn't include options.systemPrompts in the returned ChatRequest. Unlike other adapters (OpenAI, Anthropic, Gemini) that handle system prompts, this adapter drops them entirely. Ollama's API supports system role messages, so system prompts should be prepended to the messages array.

   private mapCommonOptionsToOllama(options: ChatOptions): ChatRequest {
     const model = options.model || this.defaultModel
     const providerOptions = options.providerOptions as
       | OllamaTextProviderOptions
       | undefined

     const ollamaOptions = {
       temperature: options.options?.temperature,
       top_p: options.options?.topP,
       num_predict: options.options?.maxTokens,
       ...providerOptions,
     }

+    // Prepend system message if systemPrompts provided
+    const messagesWithSystem = options.systemPrompts?.length
+      ? [
+          { role: 'system' as const, content: options.systemPrompts.join('\n') },
+          ...this.formatMessages(options.messages),
+        ]
+      : this.formatMessages(options.messages)
+
     return {
       model,
       options: ollamaOptions,
-      messages: this.formatMessages(options.messages),
+      messages: messagesWithSystem,
       tools: this.convertToolsToOllamaFormat(options.tools),
     }
   }
🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/adapters/text.ts around lines 355 to 374,
the function mapCommonOptionsToOllama drops options.systemPrompts; update it to
prepend any provided systemPrompts (properly formatted) to the messages array
before assigning to the ChatRequest. Specifically, call this.formatMessages on
options.systemPrompts (if present) and concat/prepend those entries to
this.formatMessages(options.messages), ensuring null/undefined checks so
messages is always an array, then return the combined messages in the
ChatRequest.

Comment on lines 39 to 44
// Union type of all adapters
export type AnyAdapter =
| ChatAdapter<any, any, any, any, any>
| EmbeddingAdapter<any, any>
| SummarizeAdapter<any, any>
| ImageAdapter<any, any, any>

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n "export interface ImageAdapter<" --type=ts -A 10 packages/typescript/ai/src/adapters/base-image-adapter.ts

Repository: TanStack/ai

Length of output: 614


Update ImageAdapter in AnyAdapter union to include missing 4th generic parameter.

ImageAdapter has 4 generic type parameters (TModels, TProviderOptions, TModelProviderOptionsByName, TModelSizeByName), but is currently defined in the AnyAdapter union with only 3 type arguments. Update line 44 to: ImageAdapter<any, any, any, any>

🤖 Prompt for AI Agents
In packages/typescript/ai/src/adapters/index.ts around lines 39 to 44, the
AnyAdapter union lists ImageAdapter with only three generic parameters; update
that union member to provide the missing fourth generic (use ImageAdapter<any,
any, any, any>) so the type signature matches the ImageAdapter definition and
resolves the generic arity mismatch.

Comment on lines 525 to 540
async function generateEmbedding(
options: GenerateEmbeddingOptions<
EmbeddingAdapter<ReadonlyArray<string>, object>,
string
>,
): Promise<EmbeddingResult> {
const { adapter, model, input, dimensions } = options

const embeddingOptions: EmbeddingOptions = {
model,
input,
dimensions,
}

return adapter.createEmbeddings(embeddingOptions)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if EmbeddingOptions includes providerOptions
rg -n "interface EmbeddingOptions" --type=ts -A 10 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 317


🏁 Script executed:

#!/bin/bash
# Check GenerateEmbeddingOptions structure
rg -n "interface GenerateEmbeddingOptions" --type=ts -A 15 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for summarize functions to see the pattern mentioned
rg -n "function generateSummarize|async function.*Summarize" --type=ts -A 15 packages/typescript/ai/src/core/generate.ts | head -60

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for GenerateEmbeddingOptions more broadly
rg "GenerateEmbeddingOptions" --type=ts -B 2 -A 10 packages/typescript/ai/src/

Repository: TanStack/ai

Length of output: 7226


🏁 Script executed:

#!/bin/bash
# Check the structure of generate.ts to understand the function and types
rg -n "generateEmbedding|GenerateSummarize" --type=ts packages/typescript/ai/src/core/generate.ts | head -20

Repository: TanStack/ai

Length of output: 578


🏁 Script executed:

#!/bin/bash
# Find the generateSummarize function and see how it handles providerOptions
rg -n "async function generateSummarize" --type=ts -A 20 packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check GenerateSummarizeOptions to see if it has providerOptions
rg -n "interface GenerateSummarizeOptions" --type=ts -A 15 packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 778


🏁 Script executed:

#!/bin/bash
# Search for the summarize function implementation
rg -n "function generateSummarize" --type=ts -A 20 packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check what SummarizationOptions looks like in types.ts
rg -n "interface SummarizationOptions" --type=ts -A 10 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 337


🏁 Script executed:

#!/bin/bash
# Get the full generateEmbedding function
sed -n '525,540p' packages/typescript/ai/src/core/generate.ts

Repository: TanStack/ai

Length of output: 425


🏁 Script executed:

#!/bin/bash
# Search more broadly for summarize function with different pattern
rg -n "generateSummarize" --type=ts -B 2 -A 20 packages/typescript/ai/src/core/generate.ts | grep -A 20 "^[0-9]*-async function"

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Check the full EmbeddingOptions and EmbeddingAdapter definitions
rg -n "interface EmbeddingOptions|type EmbeddingOptions" --type=ts -A 15 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 393


🏁 Script executed:

#!/bin/bash
# Check EmbeddingAdapter to understand what parameters createEmbeddings accepts
rg -n "interface EmbeddingAdapter|type EmbeddingAdapter" --type=ts -A 20 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

#!/bin/bash
# Search for EmbeddingAdapter definition more broadly
rg "interface EmbeddingAdapter|type EmbeddingAdapter" --type=ts -B 2 -A 15 packages/typescript/ai/src/

Repository: TanStack/ai

Length of output: 4453


🏁 Script executed:

#!/bin/bash
# Get the complete EmbeddingAdapter interface including the createEmbeddings method
rg -n "interface EmbeddingAdapter" --type=ts -A 40 packages/typescript/ai/src/adapters/base-embedding-adapter.ts

Repository: TanStack/ai

Length of output: 2024


providerOptions not forwarded in generateEmbedding; EmbeddingOptions interface incomplete.

GenerateEmbeddingOptions exposes providerOptions, but it is never passed to the adapter. Forwarding alone is insufficient: the EmbeddingOptions interface must also gain a providerOptions field, since createEmbeddings accepts only EmbeddingOptions, which currently lacks it.

-interface EmbeddingOptions {
+interface EmbeddingOptions {
   model: string
   input: string | Array<string>
   dimensions?: number
+  providerOptions?: object
 }

Then update generateEmbedding:

 async function generateEmbedding(
   options: GenerateEmbeddingOptions<
     EmbeddingAdapter<ReadonlyArray<string>, object>,
     string
   >,
 ): Promise<EmbeddingResult> {
-  const { adapter, model, input, dimensions } = options
+  const { adapter, model, input, dimensions, providerOptions } = options

   const embeddingOptions: EmbeddingOptions = {
     model,
     input,
     dimensions,
+    providerOptions,
   }

   return adapter.createEmbeddings(embeddingOptions)
 }


🤖 Prompt for AI Agents
In packages/typescript/ai/src/core/generate.ts around lines 525-540,
GenerateEmbeddingOptions exposes providerOptions but the code doesn't forward it
and the EmbeddingOptions interface lacks a providerOptions field; update the
EmbeddingOptions interface to include providerOptions with the appropriate type
(matching GenerateEmbeddingOptions['providerOptions'] or a shared
ProviderOptions type) and then modify generateEmbedding to include
providerOptions when constructing embeddingOptions so adapter.createEmbeddings
receives providerOptions as part of EmbeddingOptions.

Comment on lines +569 to +575
/**
* Zod schema for structured output.
* When provided, the adapter should use the provider's native structured output API
* to ensure the response conforms to this schema.
* The schema will be converted to JSON Schema format before being sent to the provider.
*/
outputSchema?: z.ZodType

⚠️ Potential issue | 🟠 Major

Clarify relationship between outputSchema and output fields.

The ChatOptions interface now has both output?: TOutput (line 568) and outputSchema?: z.ZodType (line 575). This creates ambiguity:

  • output already supports structured output via ResponseFormat with JSON schema
  • outputSchema provides a direct Zod schema approach
  • No clear precedence or validation if both are provided

Consider one of these approaches:

  1. Consolidate into a single field that accepts both formats
  2. Document clear precedence rules (e.g., outputSchema overrides output)
  3. Add runtime validation to prevent both being set simultaneously (a minimal sketch of this option follows)
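
A guard like this could run wherever ChatOptions are first consumed (minimal sketch of option 3):

```ts
// Reject ambiguous configurations up front.
function assertSingleOutputMode(options: {
  output?: unknown
  outputSchema?: unknown
}): void {
  if (options.output !== undefined && options.outputSchema !== undefined) {
    throw new Error("Provide either 'output' or 'outputSchema', not both.")
  }
}
```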

Apply this diff to add a JSDoc note clarifying the relationship:

   output?: TOutput
   /**
    * Zod schema for structured output.
    * When provided, the adapter should use the provider's native structured output API
    * to ensure the response conforms to this schema.
    * The schema will be converted to JSON Schema format before being sent to the provider.
+   * 
+   * Note: If both 'output' and 'outputSchema' are provided, 'outputSchema' takes precedence.
+   * Consider using only one approach to avoid confusion.
    */
   outputSchema?: z.ZodType
🤖 Prompt for AI Agents
In packages/typescript/ai/src/types.ts around lines 569 to 575, the relationship
between the existing output?: TOutput field and the new outputSchema?: z.ZodType
is ambiguous; update the JSDoc above outputSchema to clearly state the
precedence and allowed combinations (e.g., that outputSchema is an alternative
to output and will take precedence if both are provided) and add a note that
callers should not supply both simultaneously — additionally, implement a
runtime guard where ChatOptions are constructed/validated to throw or return an
error if both output and outputSchema are set to prevent ambiguous behavior.

Comment on lines +791 to +795
usage?: {
inputTokens?: number
outputTokens?: number
totalTokens?: number
}

⚠️ Potential issue | 🟡 Minor

Standardize token usage field naming.

The usage object in ImageGenerationResult uses inputTokens and outputTokens, while other result types use different naming:

  • DoneStreamChunk (line 644): uses promptTokens and completionTokens
  • EmbeddingResult (line 740): uses promptTokens

Consider standardizing to one naming convention across all usage types for consistency.

Apply this diff to align with existing conventions:

   /** Token usage information (if available) */
   usage?: {
-    inputTokens?: number
-    outputTokens?: number
+    promptTokens?: number
+    completionTokens?: number
     totalTokens?: number
   }
🤖 Prompt for AI Agents
packages/typescript/ai/src/types.ts around lines 791 to 795: the
ImageGenerationResult usage object currently uses inputTokens/outputTokens which
is inconsistent with other types that use promptTokens and completionTokens;
rename usage.inputTokens -> usage.promptTokens and usage.outputTokens ->
usage.completionTokens (keep totalTokens) and update any related type references
and consumers/serializers/deserializers to use the new field names to maintain
consistency across result types.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (5)
packages/typescript/ai-anthropic/src/adapters/text.ts (5)

92-121: Avoid emitting error.code as the literal string "undefined" in chatStream

If both err.code and err.status are absent, String(err.status) becomes "undefined", which is not useful to consumers.

Consider a safer fallback:

       yield {
         type: 'error',
         id: generateId(this.name),
         model: options.model,
         timestamp: Date.now(),
         error: {
           message: err.message || 'Unknown error occurred',
-          code: err.code || String(err.status),
+          code:
+            err.code ??
+            (err.status != null ? String(err.status) : 'unknown'),
         },
       }

129-178: Structured output silently returns data: null on JSON parse failure

In structuredOutput, a JSON.parse error is caught and coerced to parsed = null, so callers just see data: null with no indication that parsing failed. This can mask real schema/format issues.

Suggest failing fast with a descriptive error (or otherwise surfacing parse failure):

-      let parsed: unknown = null
-      try {
-        parsed = JSON.parse(text)
-      } catch {
-        parsed = null
-      }
+      let parsed: unknown
+      try {
+        parsed = JSON.parse(text)
+      } catch (err) {
+        throw new Error(
+          `Failed to parse structured output as JSON: ${
+            err instanceof Error ? err.message : String(err)
+          }`,
+        )
+      }

This will then be wrapped by the outer catch and surfaced as a structured output generation error instead of a silent null.


181-242: Clarify precedence between systemPrompts and providerOptions.system

validProviderOptions includes 'system' and is spread after systemPrompts:

system: options.systemPrompts?.join('\n'),
tools: tools,
...validProviderOptions,

This means a providerOptions.system value will override the higher‑level systemPrompts, which is easy to miss and currently undocumented.

If you want systemPrompts to always win (which is what most consumers will expect), either:

  • Remove 'system' from validKeys, or
  • Spread validProviderOptions before the explicit system field:
-    const requestParams: InternalTextProviderOptions = {
-      model: options.model,
-      max_tokens: maxTokens,
-      temperature: options.options?.temperature,
-      top_p: options.options?.topP,
-      messages: formattedMessages,
-      system: options.systemPrompts?.join('\n'),
-      tools: tools,
-      ...validProviderOptions,
-    }
+    const requestParams: InternalTextProviderOptions = {
+      model: options.model,
+      max_tokens: maxTokens,
+      temperature: options.options?.temperature,
+      top_p: options.options?.topP,
+      messages: formattedMessages,
+      ...validProviderOptions,
+      system: options.systemPrompts?.join('\n'),
+      tools: tools,
+    }

If provider‑level override is intentional, document it and add a test capturing that precedence.


408-424: content_block_* handling should use the event index instead of a global currentToolIndex

processAnthropicStream tracks tool calls via a currentToolIndex counter that increments on each content_block_start of type tool_use and is then reused for input_json_delta and content_block_stop. However, Anthropic’s streaming events already include an index field on content_block_start, content_block_delta, and content_block_stop.

With mixed content blocks (e.g., text, thinking, tool_use), the current approach can misassociate input_json_delta/content_block_stop events with the wrong tool, leading to spurious or mis‑indexed tool_call chunks.

You can avoid this by keying off event.index instead of a mutable currentToolIndex:

-    const toolCallsMap = new Map<
-      number,
-      { id: string; name: string; input: string }
-    >()
-    let currentToolIndex = -1
+    const toolCallsMap = new Map<
+      number,
+      { id: string; name: string; input: string }
+    >()

@@
-        if (event.type === 'content_block_start') {
-          if (event.content_block.type === 'tool_use') {
-            currentToolIndex++
-            toolCallsMap.set(currentToolIndex, {
+        if (event.type === 'content_block_start') {
+          if (event.content_block.type === 'tool_use') {
+            toolCallsMap.set(event.index, {
               id: event.content_block.id,
               name: event.content_block.name,
               input: '',
             })
@@
-          } else if (event.delta.type === 'input_json_delta') {
-            const existing = toolCallsMap.get(currentToolIndex)
+          } else if (event.delta.type === 'input_json_delta') {
+            const existing = toolCallsMap.get(event.index)
             if (existing) {
               existing.input += event.delta.partial_json
@@
-              index: currentToolIndex,
+              index: event.index,
             }
           }
         }
-        } else if (event.type === 'content_block_stop') {
-          const existing = toolCallsMap.get(currentToolIndex)
+        } else if (event.type === 'content_block_stop') {
+          const existing = toolCallsMap.get(event.index)
           if (existing && existing.input === '') {
@@
-              index: currentToolIndex,
+              index: event.index,
             }
           }
         }

This keeps tool-call tracking aligned with Anthropic’s own block indices and avoids cross‑contamination between different content blocks.

Also applies to: 451-471, 473-491


554-565: Duplicate error.code "undefined" issue in stream error handler

Same concern as in chatStream: code: err.code || String(err.status) can yield "undefined" when both are missing. Recommend aligning the fallback with the earlier suggestion:

       yield {
         type: 'error',
         id: genId(),
         model: model,
         timestamp,
         error: {
           message: err.message || 'Unknown error occurred',
-          code: err.code || String(err.status),
+          code:
+            err.code ??
+            (err.status != null ? String(err.status) : 'unknown'),
         },
       }
🧹 Nitpick comments (1)
packages/typescript/ai-anthropic/src/adapters/text.ts (1)

492-552: Make usage handling more defensive and avoid potential double done events

Two small robustness points in the message_delta / message_stop handling:

  1. Guard event.usage access:
    If Anthropic ever omits usage on the final message_delta (even with stop_reason present), the current code will throw on event.usage.input_tokens. A cheap defensive improvement:
-                  usage: {
-                    promptTokens: event.usage.input_tokens || 0,
-                    completionTokens: event.usage.output_tokens || 0,
-                    totalTokens:
-                      (event.usage.input_tokens || 0) +
-                      (event.usage.output_tokens || 0),
-                  },
+                  usage: {
+                    promptTokens: event.usage?.input_tokens ?? 0,
+                    completionTokens: event.usage?.output_tokens ?? 0,
+                    totalTokens:
+                      (event.usage?.input_tokens ?? 0) +
+                      (event.usage?.output_tokens ?? 0),
+                  },
  2. Consider emitting a single done event per stream:
    Depending on Anthropic’s exact event sequence, you might get both a message_delta with stop_reason and a later message_stop, which will currently yield two done events. If the consumer expects a single terminal chunk, you could track a let doneEmitted = false and skip subsequent done emissions once set.

This is more of a resiliency/clarity tweak than a correctness blocker.
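
A sketch of the second point, with event types reduced to the two terminal shapes involved (other chunk types omitted for brevity):

```ts
type TerminalEvent =
  | { type: 'message_delta'; delta: { stop_reason?: string | null } }
  | { type: 'message_stop' }

// Wrap the provider stream so at most one `done` chunk is ever emitted.
async function* withSingleDone(events: AsyncIterable<TerminalEvent>) {
  let doneEmitted = false
  for await (const event of events) {
    const terminal =
      event.type === 'message_stop' ||
      (event.type === 'message_delta' && event.delta.stop_reason != null)
    if (terminal && !doneEmitted) {
      doneEmitted = true
      yield { type: 'done' as const }
    }
  }
}
```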

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 14c4297 and 935111d.

📒 Files selected for processing (1)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
🔇 Additional comments (4)
packages/typescript/ai-anthropic/src/adapters/text.ts (4)

47-90: Adapter config and client wiring look solid

AnthropicTextConfig as a thin alias plus AnthropicTextAdapter extending BaseChatAdapter and initializing this.client via createAnthropicClient is straightforward and consistent with the rest of the adapters. No changes requested here.


244-308: Content-part conversion to Anthropic blocks looks consistent

The handling of text, image, and document parts (with metadata mapping and safe stripping of mediaType for images) and the explicit rejection of audio/video is clear and matches Anthropic’s capabilities. The exhaustive never check on the default branch is a nice type‑safety touch.
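
The exhaustive-check idiom in isolation, for readers unfamiliar with it:

```ts
type Part = { type: 'text' } | { type: 'image' } | { type: 'document' }

function describePart(part: Part): string {
  switch (part.type) {
    case 'text':
      return 'text block'
    case 'image':
      return 'image block'
    case 'document':
      return 'document block'
    default: {
      // Fails to compile if a new Part variant is added but not handled above.
      const exhaustive: never = part
      throw new Error(`Unhandled part: ${String(exhaustive)}`)
    }
  }
}
```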


310-398: Message formatting for tools and multi-part user content looks reasonable

The mapping logic:

  • Converts role: 'tool' + toolCallId into Anthropic tool_result blocks on a user message.
  • Packages assistant tool calls as tool_use blocks (with best‑effort JSON parsing of function.arguments).
  • Properly converts multi‑part user content via convertContentPartToAnthropic.
  • Falls back to string or converted content array in other cases.

This matches the usual TanStack AI message‑to‑provider mapping; I don’t see any correctness issues here.


574-589: Factory helpers are thin and ergonomic

createAnthropicText and anthropicText are simple, typed helpers that mirror the other adapters’ factory pattern and keep env‑key detection nicely encapsulated. No issues from my side.

* fix: refactoring ai for more activities

* smoke tests passing

* woot, all the test stuff is working

* dev panel updates for images, summarization, one shot and structured

* enhancing smoke tests

* fixing tests

* adding grok

* last minute tests

* Refactor imports in documentation and examples to use named imports for `ai`

- Updated all instances of `import ai from "@tanstack/ai"` to `import { ai } from "@tanstack/ai"` across various documentation files, guides, and examples.
- Ensured consistency in import statements for better clarity and adherence to best practices.

* ci: apply automated fixes

* fix typesafety on ai

* ci: apply automated fixes

* cleanup types

* ci: apply automated fixes

* remove grok

* ci: apply automated fixes

* fix provenence?

* update deps

* fix tests

---------

Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (14)
docs/reference/functions/textOptions.md (1)

6-13: Fix incorrect "Defined in" link—function is in activities/text/index.ts:1211, not utilities/chat-options.ts.

The documented source location is wrong. textOptions is exported from packages/typescript/ai/src/activities/text/index.ts starting at line 1211, not from a non-existent utilities/chat-options.ts file. Update the link accordingly.

docs/guides/tool-approval.md (1)

58-74: Fix the adapter function name in the code snippet: use openai() instead of openaiText().

The TanStack AI package exports openai from @tanstack/ai-openai, not openaiText(). The corrected snippet should be:

import { ai, toStreamResponse } from "@tanstack/ai";
import { openai } from "@tanstack/ai-openai";
import { sendEmail } from "./tools";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = ai({
    adapter: openai(),
    messages,
    model: "gpt-4o",
    tools: [sendEmail],
  });

  return toStreamResponse(stream);
}

The openai() adapter defaults to reading OPENAI_API_KEY from environment variables (server-side only). If docs need to clarify auth setup, consider adding a note that the environment variable must be set before deployment.

examples/ts-group-chat/chat-server/capnweb-rpc.ts (1)

154-199: Bug risk: joinChat() registers clients, but leaveChat() doesn’t unregister—stale “connected users” likely.
This can keep usernames in clients (and queues) after a user leaves without closing the socket. Consider cleaning up in leaveChat() (and possibly also handling duplicate usernames in joinChat()).

   async leaveChat() {
     if (!this.currentUsername) return
 
-    console.log(`${this.currentUsername} is leaving the chat`)
-    await globalChat.removeUser(this.currentUsername)
-    this.currentUsername = null
+    const username = this.currentUsername
+    console.log(`${username} is leaving the chat`)
+    await globalChat.removeUser(username)
+    clients.delete(username)
+    userMessageQueues.delete(username)
+    this.currentUsername = null
 
     return {
       message: 'Successfully left the chat',
     }
   }
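
For the duplicate-username side, a possible guard for joinChat() (a sketch; the clients map shape is inferred from the diff above, and the value type is a placeholder for the real per-client callback):

```typescript
// Sketch: reject joins that reuse a connected username.
const clients = new Map<string, unknown>()

function assertUsernameAvailable(username: string): void {
  if (clients.has(username)) {
    throw new Error(`Username "${username}" is already connected`)
  }
}
```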
examples/ts-svelte-chat/src/routes/api/chat/+server.ts (1)

105-119: Remove as any type casts to preserve type safety.

The migration to ai() is correct, but lines 106-107 use as any to bypass type checking. This defeats TypeScript's type safety and could mask incompatibilities between the text adapters and the ai() function.

Consider investigating the type mismatch. Possible solutions:

  1. If the adapter types are narrower than ai() expects, update the ai() function signature to accept the union of text adapter types
  2. If the types should already be compatible, refine the switch statement's type inference or use explicit type annotations
  3. If types are still being refined in this PR, add a TODO comment indicating this is temporary

Apply this approach to eliminate the type casts:

 const stream = ai({
-  adapter: adapter as any,
-  model: selectedModel as any,
+  adapter,
+  model: selectedModel,
   tools: [
     getGuitars,
     recommendGuitarToolDef,

If type errors arise, they will guide you to the proper type definitions needed.

docs/adapters/ollama.md (1)

98-124: Docs examples use messages without defining it
In “With Tools” and “Provider Options” examples, messages isn’t declared in-snippet, which makes copy/paste fail.

 import { ai, toolDefinition } from "@tanstack/ai";
 import { ollamaText } from "@tanstack/ai-ollama";
 import { z } from "zod";

 const adapter = ollamaText();
+const messages = [{ role: "user", content: "Hello!" }];

 const getLocalDataDef = toolDefinition({
   name: "get_local_data",
   description: "Get data from local storage",
   inputSchema: z.object({
     key: z.string(),
   }),
 });

Also applies to: 132-147

examples/README.md (2)

311-321: Inconsistent adapter import - should use openaiText for consistency.

The import on line 312 uses the legacy openai adapter, but per the PR objectives, the tree-shakeable openaiText adapter should be used instead to align with the new API surface demonstrated elsewhere.

 import { ai, toStreamResponse } from '@tanstack/ai'
-import { openai } from '@tanstack/ai-openai'
+import { openaiText } from '@tanstack/ai-openai'
 
 const stream = ai({
-  adapter: openai(),
+  adapter: openaiText(),
   model: 'gpt-4o',
   messages,
 })

385-391: Inconsistent API call - should use ai() instead of chat().

This example still uses the old chat({...}) function call, but should be updated to ai({...}) to match the PR's migration pattern.

-const stream = chat({
+const stream = ai({
   adapter: openai(),
   model: 'gpt-4o',
   messages,
   tools: [weatherTool], // SDK executes these automatically
 })
docs/reference/type-aliases/TextStreamOptionsUnion.md (1)

1-18: Fix incorrect source file line reference.

The documentation references line 823, but TextStreamOptionsUnion is actually defined at line 890 in packages/typescript/ai/src/types.ts. Update the link to point to the correct line.

docs/getting-started/quick-start.md (1)

179-194: Tools snippet calls ai(...) without importing it.

-import { toolDefinition } from '@tanstack/ai'
+import { ai, toolDefinition } from '@tanstack/ai'
docs/guides/client-tools.md (1)

235-239: Tool state name completed may be inconsistent with other docs/examples (complete) — please align to the actual enum values.

docs/guides/tools.md (1)

282-288: Hybrid tools example uses two tools: keys in one object literal (invalid / misleading). Split into two separate examples.

-ai({
-  adapter: openaiText(),
-  messages,
-  tools: [addToCartDef], // Client will execute, or
-  tools: [addToCartServer], // Server will execute
-});
+// Client will execute (pass definition)
+ai({ adapter: openaiText(), messages, tools: [addToCartDef] });
+
+// Server will execute (pass server implementation)
+ai({ adapter: openaiText(), messages, tools: [addToCartServer] });
docs/api/ai.md (2)

47-61: Inconsistent section header with actual API usage.

The section header says summarize(options) but the code example uses ai({ adapter: openaiSummarize(), ... }). Either update the header to reflect the unified ai() entry point or clarify that summarize() is a separate function.

-## `summarize(options)`
+## Summarization with `ai()`
 
-Creates a text summarization.
+Creates a text summarization using an adapter.

76-89: Inconsistent section header with actual API usage.

Similar to the summarize section, the header says embedding(options) but the code uses ai({ adapter: openaiEmbed(), ... }).

-## `embedding(options)`
+## Embeddings with `ai()`
 
-Creates embeddings for text input.
+Creates embeddings for text input using an adapter.
packages/typescript/ai/src/index.ts (1)

74-98: Leaky public types: exporting InternalToolCallState from the root likely locks you into it.
Given the “Internal…” name, either (a) don’t export it from src/index.ts, or (b) mark it /** @internal */ and ensure the build actually strips/flags internal types consistently.

♻️ Duplicate comments (17)
examples/ts-group-chat/chat-server/chat-logic.ts (1)

108-108: Same array syntax consideration applies to method return types.

The same `Type[]` → `Array<Type>` conversion appears in these method signatures, consistent with the interface changes above.

Also applies to: 112-112

packages/typescript/ai-ollama/src/adapters/text.ts (2)

199-222: Critical: Ollama's ToolCall lacks id property—type assertion hides runtime issue.

According to Ollama's v0.6.3 interface definition, ToolCall only contains function.name and function.arguments. The type assertion on lines 200-203 bypasses TypeScript's type checking but doesn't create the id property at runtime. Line 211 will access an undefined actualToolCall.id.

Use responseId as the tool call identifier instead:

 const handleToolCall = (toolCall: ToolCall): StreamChunk => {
-  const actualToolCall = toolCall as ToolCall & {
-    id: string
-    function: { index: number }
-  }
   return {
     type: 'tool_call',
     id: responseId,
     model: chunk.model,
     timestamp,
     toolCall: {
       type: 'function',
-      id: actualToolCall.id,
+      id: `${responseId}-tool-${Math.random().toString(36).substring(7)}`,
       function: {
-        name: actualToolCall.function.name || '',
+        name: toolCall.function.name || '',
         arguments:
-          typeof actualToolCall.function.arguments === 'string'
-            ? actualToolCall.function.arguments
-            : JSON.stringify(actualToolCall.function.arguments),
+          typeof toolCall.function.arguments === 'string'
+            ? toolCall.function.arguments
+            : JSON.stringify(toolCall.function.arguments),
       },
     },
-    index: actualToolCall.function.index,
+    index: 0,
   }
 }

366-385: systemPrompts not forwarded to Ollama.

The mapCommonOptionsToOllama function doesn't include options.systemPrompts in the returned ChatRequest. Other adapters (OpenAI, Anthropic, Gemini) prepend system prompts to the messages array. Ollama's API supports system role messages, so system prompts should be handled.

Prepend system messages when systemPrompts is provided:

 private mapCommonOptionsToOllama(options: TextOptions): ChatRequest {
   const model = options.model || this.defaultModel
   const providerOptions = options.providerOptions as
     | OllamaTextProviderOptions
     | undefined

   const ollamaOptions = {
     temperature: options.options?.temperature,
     top_p: options.options?.topP,
     num_predict: options.options?.maxTokens,
     ...providerOptions,
   }

+  // Prepend system message if systemPrompts provided
+  const messagesWithSystem = options.systemPrompts?.length
+    ? [
+        { role: 'system' as const, content: options.systemPrompts.join('\n') },
+        ...this.formatMessages(options.messages),
+      ]
+    : this.formatMessages(options.messages)
+
   return {
     model,
     options: ollamaOptions,
-    messages: this.formatMessages(options.messages),
+    messages: messagesWithSystem,
     tools: this.convertToolsToOllamaFormat(options.tools),
   }
 }
packages/typescript/ai-gemini/src/adapters/text.ts (6)

256-267: Remove dead chunk.data handling (likely not a real field on streamed Gemini responses).
This branch will either never run or mask future shape mismatches.


205-255: Tool call IDs can collide; always generate a unique toolCallId (don’t use functionCall.name as the id).
Same function can be called multiple times in a run; using the name as the id breaks correlation.

-            const toolCallId =
-              functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+            const toolCallId = `call_${Date.now()}_${nextToolIndex}`
             const functionArgs = functionCall.args || {}
@@
-                const toolCallId =
-                  functionCall.name || `call_${Date.now()}_${nextToolIndex}`
+                const toolCallId = `call_${Date.now()}_${nextToolIndex}`
                 const functionArgs = functionCall.args || {}

Also applies to: 272-311


325-338: Token accounting: completionTokens should not come from thoughtsTokenCount.
thoughtsTokenCount is reasoning/thought tokens; completion text tokens should come from the response candidate tokens field.

🌐 Web query: Gemini `usageMetadata` fields: what do `candidatesTokenCount`, `promptTokenCount`, `totalTokenCount`, and `thoughtsTokenCount` mean, and which corresponds to completion/output tokens for billing/usage reporting?
-                completionTokens: chunk.usageMetadata.thoughtsTokenCount ?? 0,
+                completionTokens: chunk.usageMetadata.candidatesTokenCount ?? 0,

343-372: MIME type fallback is incorrect for non-image modalities.
Defaulting everything to 'image/jpeg' will break audio/video/document uploads.

-              mimeType: metadata?.mimeType ?? 'image/jpeg',
+              mimeType: metadata?.mimeType ?? 'application/octet-stream',
@@
-              mimeType: metadata?.mimeType ?? 'image/jpeg',
+              mimeType: metadata?.mimeType ?? 'application/octet-stream',

397-412: Don’t cast to Record<string, unknown> after JSON parse failure—recover safely.
A failed parse almost certainly means you have a string; the current cast risks runtime errors downstream.

           } catch {
-            parsedArgs = toolCall.function.arguments as unknown as Record<
-              string,
-              unknown
-            >
+            parsedArgs = {}
           }

423-432: functionResponse.name must be the tool/function name, not toolCallId.
Gemini expects the declared function name here; using an ID can break tool-result correlation.

🌐 Web query: Gemini function calling: what must `functionResponse.name` be set to when returning tool results (function name vs call id), and how is correlation handled?
-          functionResponse: {
-            name: msg.toolCallId,
+          functionResponse: {
+            name: msg.name ?? '',
             response: {
               content: msg.content || '',
             },
           },
packages/typescript/ai-openai/src/adapters/text.ts (2)

226-227: Either populate eventTypeCounts or remove it (currently it always logs empty counts).
Right now the log implies useful diagnostics, but nothing ever increments the map.

-    const eventTypeCounts = new Map<string, number>()
+    const eventTypeCounts = new Map<string, number>()
@@
       for await (const chunk of stream) {
+        eventTypeCounts.set(chunk.type, (eventTypeCounts.get(chunk.type) ?? 0) + 1)
         chunkCount++

Also applies to: 479-486


654-668: Base64 image inputs must be converted to a data URI for image_url (currently passing raw base64).
part.source.value for source.type === 'data' is base64, not a URL.

🌐 Web query: OpenAI Responses API `input_image.image_url`: does it accept raw base64, or must it be a URL / `data:<mime>;base64,<...>` data URI?
       case 'image': {
         const imageMetadata = part.metadata
         if (part.source.type === 'url') {
           return {
             type: 'input_image',
             image_url: part.source.value,
             detail: imageMetadata?.detail || 'auto',
           }
         }
-        // For base64 data, construct a data URI
+        // For base64 data, construct a data URI
         return {
           type: 'input_image',
-          image_url: part.source.value,
+          image_url: `data:image/jpeg;base64,${part.source.value}`,
           detail: imageMetadata?.detail || 'auto',
         }
       }
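
If the mime type is available on the part's metadata, a small helper avoids hard-coding image/jpeg (a sketch; the metadata field name is an assumption):

```typescript
// Sketch: prefer the declared mime type, fall back to image/jpeg only
// when nothing better is known.
function toDataUri(base64: string, mimeType?: string): string {
  return `data:${mimeType ?? 'image/jpeg'};base64,${base64}`
}
```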
packages/typescript/ai/src/types.ts (2)

572-578: Clarify relationship between outputSchema and output fields.

This issue was flagged in a previous review. The TextOptions interface now has both output?: TOutput (line 571) and outputSchema?: z.ZodType (line 578). Consider adding documentation clarifying precedence rules or runtime validation.
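
One way to make the relationship explicit in the type itself; the precedence rule below is an assumption, since the review only asks for it to be documented:

```typescript
import { z } from 'zod'

// Sketch: state the precedence directly on the fields (rule assumed).
interface StructuredOutputFields<TOutput> {
  /** Pre-parsed output target; ignored when `outputSchema` is provided. */
  output?: TOutput
  /** Zod schema for structured output; wins over `output` when both are set. */
  outputSchema?: z.ZodType
}
```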


794-798: Token usage field naming inconsistency.

This issue was flagged in a previous review. The usage object in ImageGenerationResult uses inputTokens/outputTokens, while other result types use promptTokens/completionTokens. Consider standardizing the naming convention.
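
A small normalization helper would keep consumers on one convention (a sketch; field names are taken from the review, not the actual implementation):

```typescript
// Sketch: map image-result usage fields onto the promptTokens/
// completionTokens convention used by the other result types.
interface NormalizedUsage {
  promptTokens: number
  completionTokens: number
  totalTokens: number
}

function normalizeImageUsage(usage: {
  inputTokens?: number
  outputTokens?: number
}): NormalizedUsage {
  const promptTokens = usage.inputTokens ?? 0
  const completionTokens = usage.outputTokens ?? 0
  return {
    promptTokens,
    completionTokens,
    totalTokens: promptTokens + completionTokens,
  }
}
```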

packages/typescript/ai-anthropic/src/adapters/text.ts (4)

117-120: Error code may result in string "undefined".

This issue was flagged in a previous review. If both err.code and err.status are undefined, String(err.status) produces "undefined" as the error code.


261-270: Spread of validProviderOptions can override systemPrompts.

This issue was flagged in a previous review. The system field is set explicitly at line 267, but since 'system' is included in validKeys (line 232), it can be extracted from providerOptions and will override the joined systemPrompts when spread at line 269.


504-522: Use the index field from content_block_stop to identify which block actually stopped.

This issue was flagged in a previous review. The handler checks toolCallsMap.get(currentToolIndex) without verifying that the stopped block is actually a tool_use block using the event's index field.


593-596: Duplicate error code issue.

This issue was flagged in a previous review. Same issue as in chatStream: if both err.code and err.status are undefined, the code will be "undefined".

🧹 Nitpick comments (33)
docs/reference/functions/textOptions.md (1)

8-10: Doc generics: TModel appears unused in the shown signature—either show its role or drop it.
The snippet renders function textOptions<TAdapter, TModel>(options): ...TextStreamOptionsUnion<TAdapter>... (Line 9), but TModel isn’t referenced anywhere in the displayed types (Lines 9 and 20-22). If TModel is actually part of the real signature/constraints, consider reflecting that in the snippet/parameter type; otherwise, removing it would reduce confusion.

Also applies to: 14-23

docs/guides/tool-approval.md (1)

63-74: Consider a one-line note about auth/config expectations for openaiText() in server environments.
Right now the example will fail silently for readers unless they already know where the OpenAI API key is sourced from; a short note like “ensure your OpenAI API key is configured (e.g., env var) before calling openaiText()” would reduce support churn.

examples/ts-group-chat/chat-server/capnweb-rpc.ts (2)

8-13: Prefer unknown (or a real ToolCall shape) over any for ModelMessage.toolCalls.
If this is meant to model tool calls, Array<unknown> is a safer default than Array<any> until the structure is known.


61-62: clients and broadcastToAll() look inconsistent: callbacks are registered, but “broadcast” only queues messages.
If callbacks are intentional, consider invoking them (and only then counting “successful”); otherwise, consider renaming clients/removing the callback map to avoid confusion.

Also applies to: 89-141

packages/typescript/ai/src/activities/text/tools/zod-converter.ts (2)

9-16: Consider the fragility of internal API usage.

The _zod property is an internal Zod implementation detail that could change across versions. While this type guard may be necessary, be aware that it couples this code to Zod's internals and could break in future Zod updates.

Consider:

  • Adding a comment noting this is intentionally accessing internals
  • Checking if Zod provides a public type guard or utility (e.g., checking for common Zod methods like parse, safeParse; a sketch follows this list)
  • Wrapping this in a try-catch when used, if version compatibility becomes an issue
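
A duck-typed guard along those lines, relying only on Zod's public safeParse method:

```typescript
// Sketch: detect a Zod schema via its public surface instead of `_zod`.
function looksLikeZodSchema(value: unknown): boolean {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { safeParse?: unknown }).safeParse === 'function'
  )
}
```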

198-201: Internal Zod API usage for object detection.

Similar to isZodSchema, this code accesses internal Zod structure (def.type). While necessary for the edge case handling, it creates maintenance risk if Zod's internal structure changes.

This is acceptable for now but consider consolidating all internal API checks into a single utility module with clear documentation about the version compatibility assumptions.

packages/typescript/ai-devtools/src/store/ai-context.tsx (1)

1309-1384: Consider subscribing to both chat:* and text:* events for backward compatibility.
Right now, devtools will miss usage events if an app/core emits the older chat:* names. A small compatibility shim keeps things resilient during the transition.

-    // ============= Chat Events (for usage tracking) =============
+    // ============= Text Events (for usage tracking) =============

-    cleanupFns.push(
-      aiEventClient.on('text:started', (e) => {
+    const onTextStarted = (e: any) => {
         const streamId = e.payload.streamId
         const model = e.payload.model
         const provider = e.payload.provider
         const clientId = e.payload.clientId
@@
-      }),
-    )
+    }
+
+    cleanupFns.push(aiEventClient.on('text:started', onTextStarted))
+    cleanupFns.push(aiEventClient.on('chat:started', onTextStarted))

-    cleanupFns.push(
-      aiEventClient.on('text:completed', (e) => {
+    const onTextCompleted = (e: any) => {
         const { requestId, usage } = e.payload
@@
-      }),
-    )
+    }
+
+    cleanupFns.push(aiEventClient.on('text:completed', onTextCompleted))
+    cleanupFns.push(aiEventClient.on('chat:completed', onTextCompleted))

-    cleanupFns.push(
-      aiEventClient.on('text:iteration', (e) => {
+    const onTextIteration = (e: any) => {
         const { requestId, iterationNumber } = e.payload
@@
-      }),
-    )
+    }
+
+    cleanupFns.push(aiEventClient.on('text:iteration', onTextIteration))
+    cleanupFns.push(aiEventClient.on('chat:iteration', onTextIteration))
examples/ts-group-chat/chat-server/claude-service.ts (1)

152-158: Consider removing the as any cast on messages.

The as any cast on line 155 bypasses type checking and may hide type mismatches between the conversation history and the expected message format. If there's a genuine type incompatibility, it would be better to address it explicitly.

-        messages: [...conversationHistory] as any,
+        messages: [...conversationHistory],

If this causes a type error, the underlying type mismatch should be resolved rather than suppressed.

examples/ts-group-chat/package.json (1)

17-20: Minor version inconsistency between router packages.

@tanstack/react-router is updated to ^1.141.1 while @tanstack/react-router-devtools remains at ^1.139.7. This may be intentional, but consider aligning versions if devtools compatibility is desired.

examples/ts-solid-chat/src/routes/api.chat.ts (1)

46-54: Consider propagating the request signal to abort controller.

requestSignal is captured on line 47 but only used for the early abort check. Consider linking it to abortController so that if the client disconnects mid-stream, the AI request is aborted.

        const abortController = new AbortController()
+
+       // Abort the AI request if the client disconnects
+       requestSignal?.addEventListener('abort', () => abortController.abort())
packages/typescript/ai-openai/src/utils/schema-converter.ts (1)

64-126: Nested object schemas may be skipped if type: 'object' is omitted
Inside makeOpenAIStructuredOutputCompatible, recursion is gated on prop.type === 'object', but some JSON Schemas represent objects via properties without an explicit type. Consider treating “has properties” as object-like.

 function makeOpenAIStructuredOutputCompatible(
   schema: Record<string, any>,
   originalRequired: Array<string> = [],
 ): Record<string, any> {
   const result = { ...schema }

   // Handle object types
-  if (result.type === 'object' && result.properties) {
+  const isObjectLike = result.type === 'object' || (!!result.properties && result.type == null)
+  if (isObjectLike && result.properties) {
+    result.type = 'object'
     const properties = { ...result.properties }
     const allPropertyNames = Object.keys(properties)

     // Transform each property
     for (const propName of allPropertyNames) {
       const prop = properties[propName]
       const wasOptional = !originalRequired.includes(propName)

       // Recursively transform nested objects/arrays
-      if (prop.type === 'object' && prop.properties) {
+      const propIsObjectLike =
+        prop?.type === 'object' || (!!prop?.properties && prop?.type == null)
+      if (propIsObjectLike && prop.properties) {
         properties[propName] = makeOpenAIStructuredOutputCompatible(
           prop,
           prop.required || [],
         )
       } else if (prop.type === 'array' && prop.items) {
         properties[propName] = {
           ...prop,
           items: makeOpenAIStructuredOutputCompatible(
             prop.items,
             prop.items.required || [],
           ),
         }
       } else if (wasOptional) {
         // Make optional fields nullable by adding null to the type
         if (prop.type && !Array.isArray(prop.type)) {
           properties[propName] = {
             ...prop,
             type: [prop.type, 'null'],
           }
         } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
           properties[propName] = {
             ...prop,
             type: [...prop.type, 'null'],
           }
         }
       }
     }
packages/typescript/ai/src/activities/embedding/adapter.ts (1)

66-68: generateId() randomness slice is inconsistent/low-quality vs other modules
substring(7) can yield very short strings. Consider matching the fixed-length pattern used elsewhere (e.g. slice(2, 9)) or using a shared helper.

   protected generateId(): string {
-    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+    return `${this.name}-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
   }
packages/typescript/ai/src/activities/summarize/index.ts (2)

155-181: Dispatch logic is fine, but the casts are a bit heavy
Not blocking, but consider overloads to avoid the as unknown as ... casts (keeps the public API nicer to maintain).
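
For illustration, overloads on the dispatching function can preserve the return type without casts (all names below are placeholders, not the actual API):

```typescript
// Sketch: the overload pair gives callers the precise return type, so the
// implementation body is the only place that needs a loose signature.
async function* streamImpl(): AsyncGenerator<string> {
  yield 'chunk'
}
async function onceImpl(): Promise<string> {
  return 'result'
}

function dispatch(options: { stream: true }): AsyncIterable<string>
function dispatch(options: { stream?: false }): Promise<string>
function dispatch(options: {
  stream?: boolean
}): AsyncIterable<string> | Promise<string> {
  return options.stream ? streamImpl() : onceImpl()
}
```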


93-95: Consider reusing a shared createId() helper
This duplicates the same logic already present in the text activity (and now embedding has its own ID generation too).

packages/typescript/ai-ollama/src/utils/schema-converter.ts (1)

8-15: Note: isZodSchema relies on internal Zod API.

The _zod property check is pragmatic but relies on Zod's internal implementation. While this pattern is commonly used in the ecosystem, it could break if Zod changes its internals in future versions.

Consider adding a comment in the function documentation noting this dependency on Zod internals, or adding a runtime test to verify the check still works as expected.
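
A possible regression test for the latter option (vitest assumed, matching the repo's test setup; the import path is a placeholder):

```typescript
import { describe, expect, it } from 'vitest'
import { z } from 'zod'
// Placeholder path: point at the module that defines isZodSchema.
import { isZodSchema } from '../src/utils/schema-converter'

describe('isZodSchema', () => {
  it('accepts a plain zod object schema', () => {
    expect(isZodSchema(z.object({ key: z.string() }))).toBe(true)
  })

  it('rejects non-schema values', () => {
    expect(isZodSchema({})).toBe(false)
  })
})
```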

packages/typescript/ai-gemini/src/utils/schema-converter.ts (1)

59-84: Schema normalization uses internal Zod properties.

The check at lines 60-63 accesses schema.def.type, which is another internal Zod property. While the defensive normalization logic ensures Gemini API compatibility, consider consolidating these checks:

   // Ensure object schemas always have type: "object"
   if (typeof result === 'object') {
-    const isZodObject =
-      typeof schema === 'object' &&
-      'def' in schema &&
-      schema.def.type === 'object'
-
-    if (isZodObject && !result.type) {
-      result.type = 'object'
-    }
-
-    if (Object.keys(result).length === 0) {
-      result.type = 'object'
-    }
-
-    if ('properties' in result && !result.type) {
+    // Infer object type from properties or empty schema
+    if (
+      !result.type &&
+      ('properties' in result || Object.keys(result).length === 0)
+    ) {
       result.type = 'object'
     }

     if (result.type === 'object' && !('properties' in result)) {
       result.properties = {}
     }

     if (result.type === 'object' && !('required' in result)) {
       result.required = []
     }
   }
examples/ts-vue-chat/vite.config.ts (1)

234-248: Type casts (as any) are pragmatic for dynamic provider selection.

The adapter as any and selectedModel as any casts at Lines 235-236 lose type safety but are understandable given the runtime provider switching. For a production codebase, consider a discriminated union approach, but this is acceptable for an example.

A more type-safe approach could use a helper that returns properly typed options:

function createAIOptions(provider: Provider, model: string | undefined, ...rest) {
  switch (provider) {
    case 'openai':
      return { adapter: openaiText(), model: model ?? 'gpt-4o', ...rest };
    // ... other cases
  }
}
packages/typescript/ai-ollama/src/ollama-adapter.ts (1)

398-405: URL-based images may not work reliably with Ollama.

The comment at Line 402-404 notes that URL-based images aren't directly supported by Ollama. Consider logging a warning or throwing an error for unsupported image sources rather than silently passing them through.

           } else {
-            // URL-based images not directly supported, but we pass the URL
-            // Ollama may need the image to be fetched externally
-            images.push(part.source.value)
+            // URL-based images require external fetching - warn user
+            console.warn(
+              `Ollama: URL-based images require external fetching. Consider using base64 data instead.`
+            )
+            images.push(part.source.value)
           }
packages/typescript/ai/src/ai.ts (4)

38-47: AnyAdapter naming is confusing next to AnyAIAdapter; consider renaming or removing the alias.
Exporting both AnyAdapter and AnyAIAdapter invites accidental imports and ambiguous docs/examples. If you need backwards compat, a more explicit alias name (e.g. AnyGenerateAdapter) is safer.


39-44: Reduce any in GenerateAdapter to preserve inference.
Using any for provider/model mappings in the union makes ai() inference weaker than it needs to be; consider plumbing the generic parameters through (or using unknown + constraints) so adapter-specific option typing is retained.


62-119: Type-level routing looks solid, but consider extracting per-kind helpers to keep this maintainable.
AIOptionsFor/AIReturnFor are correct-looking but will be hard to evolve as new adapter kinds/options land; splitting into TextAIOptionsFor, EmbeddingAIOptionsFor, etc. would reduce churn and improve readability.
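
A rough shape for the split (the helper names and the `kind` discriminant are assumptions):

```typescript
// Sketch: one options helper per adapter kind keeps each conditional
// branch small instead of one deeply nested AIOptionsFor.
type TextAIOptionsFor<TAdapter extends { kind: 'text' }> = {
  adapter: TAdapter
  messages: Array<{ role: string; content: string }>
}

type EmbeddingAIOptionsFor<TAdapter extends { kind: 'embedding' }> = {
  adapter: TAdapter
  input: string | Array<string>
}
```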


252-265: Release hygiene: this is a major public API surface—please add a changeset and deprecation plan.
Given the new unified entrypoint + adapter split, shipping without a changeset will make downstream upgrades painful to track.

docs/adapters/openai.md (1)

59-79: Avoid hard-coding long model lists in docs (they’ll go stale fast).
Prefer linking to a “supported models” reference generated from model-meta (or a short “examples” list) to reduce ongoing maintenance.

packages/typescript/ai/src/activities/text/adapter.ts (1)

23-28: Consider renaming chatOptions → textOptions (or similar) for consistency with “Text” terminology.
This PR broadly renames “Chat” → “Text”; keeping “chatOptions” here is a small but pervasive inconsistency.

packages/typescript/ai/src/activities/embedding/index.ts (1)

68-70: Consider extracting createId to a shared utility.

This helper function is duplicated from packages/typescript/ai/src/activities/text/index.ts (lines 951-953). To maintain DRY principles and ensure consistent ID generation across activities, consider extracting this to a shared utility module.

-function createId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
-}
+import { createId } from '../../utils/id.js'

Then create packages/typescript/ai/src/utils/id.ts:

export function createId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`
}
packages/typescript/ai/src/activities/index.ts (2)

160-161: Consider stronger typing for ActivityHandler.

The any types here lose type safety. While this flexibility is needed for the runtime map, consider documenting why or adding a comment explaining this trade-off.

-/** Type for activity handler functions */
-type ActivityHandler = (options: any) => any
+/**
+ * Type for activity handler functions.
+ * Uses `any` intentionally as this is a runtime dispatch mechanism
+ * where type narrowing happens at the call site via adapter.kind.
+ */
+type ActivityHandler = (options: unknown) => unknown

490-515: Duplicate deprecated type aliases.

These type aliases (lines 490-515) duplicate the functionality of the aliases at lines 301-327 with slightly different naming. Consider consolidating to one set to reduce maintenance overhead.

Lines 301-327: GenerateTextOptions, GenerateEmbeddingOptions, etc.
Lines 490-515: TextGenerateOptions, EmbeddingGenerateOptions, etc.

Both point to the same underlying *ActivityOptions types.

Either remove one set or have one reference the other:

 /** @deprecated Use TextActivityOptions */
 export type TextGenerateOptions<
   TAdapter extends TextAdapter<ReadonlyArray<string>, object, any, any, any>,
   TModel extends TextModels<TAdapter>,
   TSchema extends z.ZodType | undefined = undefined,
   TStream extends boolean = true,
-> = TextActivityOptions<TAdapter, TModel, TSchema, TStream>
+> = GenerateTextOptions<TAdapter, TModel, TSchema, TStream>
packages/typescript/ai/src/activities/text/index.ts (3)

1037-1068: Heavy type assertions may mask type errors.

The multiple as unknown as casts bypass TypeScript's type checking. While these may be necessary due to complex generic constraints, they could mask type errors if the runtime values don't match expectations.

Consider adding runtime assertions or restructuring the generics to reduce the need for double-casting:

// Example: Add runtime assertion
if (outputSchema) {
  if (typeof adapter.structuredOutput !== 'function') {
    throw new Error('Adapter does not support structured output')
  }
  return runAgenticStructuredOutput(options)
}

1155-1158: Stream consumption for side effects.

The empty loop body consumes the stream purely for its side effects (executing the agentic loop). This is intentional and correct, but the comment could be more explicit about why this is necessary.

   // Consume the stream to run the agentic loop
   for await (const _chunk of engine.run()) {
-    // Just consume the stream to execute the agentic loop
+    // Intentionally discarding chunks - we only need the side effects
+    // (tool execution, message accumulation) for structured output
   }

1211-1212: Consider using TextAdapter instead of AIAdapter.

The textOptions helper uses AIAdapter in its generic constraint while the rest of the text activity module uses TextAdapter. For consistency and to ensure only text-capable adapters are accepted, consider using TextAdapter.

 export function textOptions<
-  TAdapter extends AIAdapter<any, any, any, any, any>,
+  TAdapter extends TextAdapter<any, any, any, any, any>,
   const TModel extends TAdapter extends AIAdapter<

This would also require updating the subsequent type references to use TextAdapter patterns.

packages/typescript/ai/src/index.ts (3)

1-16: Public API naming is getting redundant/confusing (GenerateOptions vs TextGenerateOptions vs GenerateTextOptions).
Consider consolidating to one naming scheme (and/or adding /** @deprecated */ aliases) to reduce long-term API entropy.


19-32: Avoid exporting from deep “activities/…” paths if these are meant to be stable public APIs.
Re-exporting ./activities/text/tools/... bakes internal folder structure into the public surface; prefer a public barrel like ./tools (or ./text/tools) to keep refactors non-breaking.
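
Sketch of such a barrel (the file name and export list are assumptions):

```typescript
// packages/typescript/ai/src/tools.ts (hypothetical public barrel)
// Re-export from a stable path so internal folders can move freely.
export { toolDefinition } from './activities/text/tools/tool-definition'
export { convertZodToJsonSchema } from './activities/text/tools/zod-converter'
```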


44-45: ToolCallManager being public: double-check this is intended “API surface”, not an implementation detail.
If it’s internal plumbing, consider not exporting it from the package root (or clearly documenting it as advanced/unstable).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 935111d and 96a08b1.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (107)
  • docs/adapters/anthropic.md (6 hunks)
  • docs/adapters/gemini.md (5 hunks)
  • docs/adapters/ollama.md (5 hunks)
  • docs/adapters/openai.md (9 hunks)
  • docs/api/ai.md (10 hunks)
  • docs/config.json (3 hunks)
  • docs/getting-started/overview.md (3 hunks)
  • docs/getting-started/quick-start.md (4 hunks)
  • docs/guides/agentic-cycle.md (2 hunks)
  • docs/guides/client-tools.md (3 hunks)
  • docs/guides/multimodal-content.md (6 hunks)
  • docs/guides/per-model-type-safety.md (2 hunks)
  • docs/guides/server-tools.md (2 hunks)
  • docs/guides/streaming.md (2 hunks)
  • docs/guides/tool-approval.md (1 hunks)
  • docs/guides/tool-architecture.md (1 hunks)
  • docs/guides/tools.md (4 hunks)
  • docs/protocol/http-stream-protocol.md (3 hunks)
  • docs/protocol/sse-protocol.md (2 hunks)
  • docs/reference/classes/BaseAdapter.md (1 hunks)
  • docs/reference/functions/text.md (1 hunks)
  • docs/reference/functions/textOptions.md (2 hunks)
  • docs/reference/index.md (3 hunks)
  • docs/reference/interfaces/AIAdapter.md (1 hunks)
  • docs/reference/interfaces/TextCompletionChunk.md (1 hunks)
  • docs/reference/interfaces/TextOptions.md (1 hunks)
  • docs/reference/type-aliases/TextStreamOptionsForModel.md (1 hunks)
  • docs/reference/type-aliases/TextStreamOptionsUnion.md (1 hunks)
  • examples/README.md (2 hunks)
  • examples/ts-group-chat/chat-server/capnweb-rpc.ts (7 hunks)
  • examples/ts-group-chat/chat-server/chat-logic.ts (2 hunks)
  • examples/ts-group-chat/chat-server/claude-service.ts (3 hunks)
  • examples/ts-group-chat/package.json (2 hunks)
  • examples/ts-react-chat/package.json (3 hunks)
  • examples/ts-react-chat/src/routes/api.tanchat.ts (4 hunks)
  • examples/ts-solid-chat/package.json (4 hunks)
  • examples/ts-solid-chat/src/routes/api.chat.ts (2 hunks)
  • examples/ts-svelte-chat/package.json (1 hunks)
  • examples/ts-svelte-chat/src/routes/api/chat/+server.ts (2 hunks)
  • examples/ts-vue-chat/package.json (1 hunks)
  • examples/ts-vue-chat/vite.config.ts (3 hunks)
  • examples/vanilla-chat/package.json (1 hunks)
  • package.json (1 hunks)
  • packages/typescript/ai-anthropic/package.json (1 hunks)
  • packages/typescript/ai-anthropic/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/anthropic-adapter.ts (3 hunks)
  • packages/typescript/ai-anthropic/src/tools/custom-tool.ts (2 hunks)
  • packages/typescript/ai-anthropic/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-anthropic/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts (3 hunks)
  • packages/typescript/ai-client/package.json (1 hunks)
  • packages/typescript/ai-devtools/package.json (1 hunks)
  • packages/typescript/ai-devtools/src/store/ai-context.tsx (3 hunks)
  • packages/typescript/ai-devtools/vite.config.ts (1 hunks)
  • packages/typescript/ai-gemini/package.json (1 hunks)
  • packages/typescript/ai-gemini/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-gemini/src/gemini-adapter.ts (3 hunks)
  • packages/typescript/ai-gemini/src/tools/tool-converter.ts (2 hunks)
  • packages/typescript/ai-gemini/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (6 hunks)
  • packages/typescript/ai-ollama/package.json (1 hunks)
  • packages/typescript/ai-ollama/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-ollama/src/ollama-adapter.ts (5 hunks)
  • packages/typescript/ai-ollama/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-ollama/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-openai/package.json (1 hunks)
  • packages/typescript/ai-openai/src/adapters/text.ts (1 hunks)
  • packages/typescript/ai-openai/src/openai-adapter.ts (5 hunks)
  • packages/typescript/ai-openai/src/tools/function-tool.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/utils/schema-converter.ts (1 hunks)
  • packages/typescript/ai-openai/tests/openai-adapter.test.ts (2 hunks)
  • packages/typescript/ai-react-ui/package.json (1 hunks)
  • packages/typescript/ai-react/package.json (1 hunks)
  • packages/typescript/ai-solid-ui/package.json (1 hunks)
  • packages/typescript/ai-svelte/package.json (1 hunks)
  • packages/typescript/ai-vue-ui/package.json (1 hunks)
  • packages/typescript/ai/package.json (1 hunks)
  • packages/typescript/ai/src/activities/embedding/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/embedding/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/image/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/image/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/summarize/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/summarize/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/agent-loop-strategies.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/index.ts (10 hunks)
  • packages/typescript/ai/src/activities/text/messages.ts (4 hunks)
  • packages/typescript/ai/src/activities/text/stream/message-updaters.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/stream/processor.ts (2 hunks)
  • packages/typescript/ai/src/activities/text/stream/types.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/tools/tool-calls.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/tools/tool-definition.ts (1 hunks)
  • packages/typescript/ai/src/activities/text/tools/zod-converter.ts (1 hunks)
  • packages/typescript/ai/src/ai.ts (1 hunks)
  • packages/typescript/ai/src/base-adapter.ts (2 hunks)
  • packages/typescript/ai/src/core/chat-common-options.ts (0 hunks)
  • packages/typescript/ai/src/core/embedding.ts (0 hunks)
  • packages/typescript/ai/src/core/summarize.ts (0 hunks)
  • packages/typescript/ai/src/event-client.ts (4 hunks)
  • packages/typescript/ai/src/index.ts (4 hunks)
  • packages/typescript/ai/src/stream-to-response.ts (4 hunks)
  • packages/typescript/ai/src/tools/zod-converter.ts (0 hunks)
  • packages/typescript/ai/src/types.ts (11 hunks)
  • packages/typescript/ai/src/utilities/chat-options.ts (0 hunks)
⛔ Files not processed due to max files limit (36)
  • packages/typescript/ai/src/utilities/messages.ts
  • packages/typescript/ai/tests/agent-loop-strategies.test.ts
  • packages/typescript/ai/tests/ai-abort.test.ts
  • packages/typescript/ai/tests/ai-text.test.ts
  • packages/typescript/ai/tests/generate-types.test-d.ts
  • packages/typescript/ai/tests/generate.test.ts
  • packages/typescript/ai/tests/message-updaters.test.ts
  • packages/typescript/ai/tests/strategies.test.ts
  • packages/typescript/ai/tests/stream-processor-edge-cases.test.ts
  • packages/typescript/ai/tests/stream-processor-replay.test.ts
  • packages/typescript/ai/tests/stream-processor.test.ts
  • packages/typescript/ai/tests/stream-to-response.test.ts
  • packages/typescript/ai/tests/tool-call-manager.test.ts
  • packages/typescript/ai/tests/tool-definition.test.ts
  • packages/typescript/ai/tests/zod-converter.test.ts
  • packages/typescript/ai/vite.config.ts
  • packages/typescript/react-ai-devtools/package.json
  • packages/typescript/smoke-tests/adapters/.env.example
  • packages/typescript/smoke-tests/adapters/package.json
  • packages/typescript/smoke-tests/adapters/src/adapters/index.ts
  • packages/typescript/smoke-tests/adapters/src/cli.ts
  • packages/typescript/smoke-tests/adapters/src/harness.ts
  • packages/typescript/smoke-tests/adapters/src/index.ts
  • packages/typescript/smoke-tests/adapters/src/tests/ags-agentic-structured.ts
  • packages/typescript/smoke-tests/adapters/src/tests/apr-approval-flow.ts
  • packages/typescript/smoke-tests/adapters/src/tests/cst-chat-stream.ts
  • packages/typescript/smoke-tests/adapters/src/tests/emb-embedding.ts
  • packages/typescript/smoke-tests/adapters/src/tests/img-image-generation.ts
  • packages/typescript/smoke-tests/adapters/src/tests/index.ts
  • packages/typescript/smoke-tests/adapters/src/tests/ost-one-shot-text.ts
  • packages/typescript/smoke-tests/adapters/src/tests/str-structured-output.ts
  • packages/typescript/smoke-tests/adapters/src/tests/sum-summarize.ts
  • packages/typescript/smoke-tests/adapters/src/tests/tls-tool-server.ts
  • packages/typescript/smoke-tests/e2e/package.json
  • packages/typescript/smoke-tests/e2e/src/routes/api.tanchat.ts
  • packages/typescript/solid-ai-devtools/package.json
💤 Files with no reviewable changes (5)
  • packages/typescript/ai/src/core/chat-common-options.ts
  • packages/typescript/ai/src/utilities/chat-options.ts
  • packages/typescript/ai/src/tools/zod-converter.ts
  • packages/typescript/ai/src/core/summarize.ts
  • packages/typescript/ai/src/core/embedding.ts
✅ Files skipped from review due to trivial changes (5)
  • packages/typescript/ai-react-ui/package.json
  • docs/reference/functions/text.md
  • packages/typescript/ai/src/activities/text/tools/tool-calls.ts
  • docs/reference/interfaces/TextCompletionChunk.md
  • examples/ts-vue-chat/package.json
🚧 Files skipped from review as they are similar to previous changes (4)
  • packages/typescript/ai-anthropic/src/utils/index.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-openai/src/utils/index.ts
  • packages/typescript/ai-gemini/src/utils/index.ts
🧰 Additional context used
🧬 Code graph analysis (19)
packages/typescript/ai/src/activities/image/index.ts (1)
packages/typescript/ai/src/types.ts (1)
  • ImageGenerationResult (786-799)
packages/typescript/ai/src/base-adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TextOptions (557-598)
  • StreamChunk (688-696)
packages/typescript/ai-gemini/src/gemini-adapter.ts (1)
packages/typescript/ai/src/types.ts (1)
  • TextOptions (557-598)
examples/ts-group-chat/chat-server/chat-logic.ts (1)
examples/ts-group-chat/src/hooks/useChatMessages.ts (1)
  • ChatMessage (4-10)
packages/typescript/ai-openai/src/openai-adapter.ts (1)
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
  • validateTextProviderOptions (320-325)
packages/typescript/ai/src/activities/text/messages.ts (1)
packages/typescript/ai/src/types.ts (2)
  • Modality (92-92)
  • ConstrainedModelMessage (300-316)
packages/typescript/ai-devtools/src/store/ai-context.tsx (2)
packages/typescript/ai/src/event-client.ts (1)
  • aiEventClient (309-309)
packages/typescript/ai/src/index.ts (1)
  • aiEventClient (61-61)
packages/typescript/ai-ollama/src/adapters/text.ts (2)
packages/typescript/ai-ollama/src/utils/index.ts (4)
  • createOllamaClient (2-2)
  • convertZodToOllamaSchema (8-8)
  • generateId (4-4)
  • getOllamaHostFromEnv (5-5)
packages/typescript/ai-ollama/src/utils/client.ts (3)
  • createOllamaClient (10-14)
  • generateId (39-41)
  • getOllamaHostFromEnv (20-34)
packages/typescript/ai/src/activities/summarize/index.ts (2)
packages/typescript/ai/src/types.ts (1)
  • StreamChunk (688-696)
packages/typescript/ai/src/activities/text/index.ts (1)
  • createId (952-954)
packages/typescript/ai-anthropic/src/anthropic-adapter.ts (1)
packages/typescript/ai/src/types.ts (1)
  • TextOptions (557-598)
packages/typescript/ai/src/types.ts (1)
packages/typescript/ai/src/ai.ts (3)
  • TextAdapter (239-239)
  • EmbeddingAdapter (240-240)
  • SummarizeAdapter (241-241)
packages/typescript/ai/src/activities/embedding/index.ts (1)
packages/typescript/ai/src/activities/text/index.ts (1)
  • createId (952-954)
packages/typescript/ai/src/activities/text/adapter.ts (1)
packages/typescript/ai/src/types.ts (4)
  • TextOptions (557-598)
  • Modality (92-92)
  • DefaultMessageMetadataByModality (805-811)
  • StreamChunk (688-696)
packages/typescript/ai-openai/tests/openai-adapter.test.ts (2)
packages/typescript/ai/src/ai.ts (1)
  • ai (223-232)
packages/typescript/ai/src/index.ts (1)
  • ai (3-3)
packages/typescript/ai/src/activities/text/tools/zod-converter.ts (1)
packages/typescript/ai/src/types.ts (1)
  • SchemaInput (62-62)
packages/typescript/ai-anthropic/src/adapters/text.ts (4)
packages/typescript/ai-anthropic/src/utils/index.ts (4)
  • AnthropicClientConfig (5-5)
  • createAnthropicClient (2-2)
  • generateId (3-3)
  • getAnthropicApiKeyFromEnv (4-4)
packages/typescript/ai-anthropic/src/utils/client.ts (4)
  • AnthropicClientConfig (3-5)
  • createAnthropicClient (10-16)
  • generateId (43-45)
  • getAnthropicApiKeyFromEnv (22-38)
packages/typescript/ai-anthropic/src/model-meta.ts (1)
  • ANTHROPIC_MODELS (363-373)
packages/typescript/ai/src/types.ts (1)
  • ModelMessage (229-240)
packages/typescript/ai-ollama/src/ollama-adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TextOptions (557-598)
  • StreamChunk (688-696)
packages/typescript/ai-openai/src/adapters/text.ts (3)
packages/typescript/ai-openai/src/utils/index.ts (4)
  • OpenAIClientConfig (5-5)
  • createOpenAIClient (2-2)
  • generateId (4-4)
  • getOpenAIApiKeyFromEnv (3-3)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_CHAT_MODELS (1608-1653)
packages/typescript/ai/src/types.ts (4)
  • TextOptions (557-598)
  • StreamChunk (688-696)
  • ModelMessage (229-240)
  • ContentPart (168-179)
packages/typescript/ai/src/activities/index.ts (1)
packages/typescript/ai/src/types.ts (4)
  • StreamChunk (688-696)
  • ImageGenerationResult (786-799)
  • ConstrainedModelMessage (300-316)
  • TextOptions (557-598)
🪛 Biome (2.1.2)
packages/typescript/ai-gemini/src/tools/tool-converter.ts

[error] 80-82: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🪛 LanguageTool
docs/adapters/ollama.md

[grammar] ~126-~126: Use a hyphen to join words.
Context: ...l, and qwen2` generally have good tool calling support. ## Provider Options O...

(QB_NEW_EN_HYPHEN)

```bash
ollama serve
```
**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.

⚠️ Potential issue | 🟡 Minor

Hyphenate “tool-calling” (LanguageTool)

-**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
+**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
🧰 Tools
🪛 LanguageTool

[grammar] ~126-~126: Use a hyphen to join words.
Context: ...l, and qwen2` generally have good tool calling support. ## Provider Options O...

(QB_NEW_EN_HYPHEN)

🤖 Prompt for AI Agents
In docs/adapters/ollama.md around line 126, the phrase "tool calling support"
should be hyphenated to "tool-calling support" to match recommended compound
adjective style; update the sentence so models like `llama3`, `mistral`, and
`qwen2` generally have good tool-calling support.

Comment on lines 135 to 149
```diff
-const stream = chat({
-  adapter: openai(),
+const stream = ai({
+  adapter: openaiText(),
   messages,
   model: "gpt-4o",
   providerOptions: {
     temperature: 0.7,
-    maxTokens: 1000,
-    topP: 0.9,
-    frequencyPenalty: 0.5,
-    presencePenalty: 0.5,
+    max_tokens: 1000,
+    top_p: 0.9,
+    frequency_penalty: 0.5,
+    presence_penalty: 0.5,
     stop: ["END"],
   },
 });
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for openaiText adapter definition
fd -t f -e ts -e js | xargs rg -l "openaiText" | head -20

Repository: TanStack/ai

Length of output: 564


🏁 Script executed:

#!/bin/bash
# Find adapter files in the codebase
fd -t d "adapter" | head -20

Repository: TanStack/ai

Length of output: 345


🏁 Script executed:

#!/bin/bash
# Search for providerOptions type definitions
rg "providerOptions" -A 5 -B 5 | head -100

Repository: TanStack/ai

Length of output: 7346


🌐 Web query:

OpenAI API parameter names: max_tokens vs max_output_tokens difference between Chat Completions and Responses API

💡 Result:

Short answer

  • Chat Completions API: historically uses max_tokens (now deprecated); newer chat reasoning models use max_completion_tokens to cap generated tokens. [1][3]
  • Responses API: uses max_output_tokens to cap generated tokens (this is the Responses API parameter for GPT-5 and o‑series models). [2][3]

Key practical differences

  • Name and API: max_tokens / max_completion_tokens → Chat Completions; max_output_tokens → Responses API. [1][2]
  • Model compatibility: some newer “o‑series” / GPT‑5 models expect max_output_tokens (Responses API); older chat endpoints accept max_completion_tokens (and legacy max_tokens). [1][2]
  • Behavior: all these parameters limit only the output tokens the model may generate (input + output must still fit model context). Chat Completions historically supports multiple completions (n); Responses API does not. [1][2][4]

Sources

  • OpenAI Chat/Completions reference (max_tokens deprecated, max_completion_tokens present). [1]
  • OpenAI Help Center “Controlling length” (Responses API uses max_output_tokens; notes about chat vs responses differences). [2]
  • OpenAI parameter details / token counting (input+output count toward context). [4]

References
[1] OpenAI API Reference — Chat Completions.
[2] OpenAI Help Center — Controlling the length of OpenAI model responses.
[3] OpenAI API Reference — Completions / Advanced parameter details.


🏁 Script executed:

#!/bin/bash
# Read the openaiText adapter implementation
cat -n packages/typescript/ai-openai/src/adapters/text.ts | head -100

Repository: TanStack/ai

Length of output: 3809


🏁 Script executed:

#!/bin/bash
# Search for provider options type definitions in OpenAI adapter
rg -A 20 "providerOptions" packages/typescript/ai-openai/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 1570


🏁 Script executed:

#!/bin/bash
# Check the OpenAI adapter index to understand exports
cat -n packages/typescript/ai-openai/src/index.ts | head -80

Repository: TanStack/ai

Length of output: 2701


🏁 Script executed:

#!/bin/bash
# Search for max_tokens or max_output_tokens in OpenAI adapter
rg "max_tokens|max_output_tokens|max_completion_tokens" packages/typescript/ai-openai/

Repository: TanStack/ai

Length of output: 4450


🏁 Script executed:

#!/bin/bash
# Read the text provider options file to see all valid parameters
cat -n packages/typescript/ai-openai/src/text/text-provider-options.ts

Repository: TanStack/ai

Length of output: 18555


🏁 Script executed:

#!/bin/bash
# Check the Responses API documentation comment or types in the adapter
rg -B 5 -A 15 "frequency_penalty|presence_penalty" packages/typescript/ai-openai/src/text/text-provider-options.ts

Repository: TanStack/ai

Length of output: 37


Fix providerOptions parameters to match OpenAI Responses API (not Chat Completions).

The example uses Chat Completions API parameters that are invalid for the Responses API:

  • max_tokens → should be max_output_tokens
  • frequency_penalty, presence_penalty, stop → not supported by Responses API

The underlying adapter calls client.responses.create(), which expects Responses API parameter names. Valid parameters include: temperature, top_p, max_output_tokens, tool_choice, parallel_tool_calls, and others, but NOT the Chat Completions parameters shown.
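
A corrected snippet under those constraints (assuming the adapter forwards these keys to client.responses.create() unchanged):

```typescript
import { ai } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

const messages = [{ role: "user" as const, content: "Hello!" }];

// Only Responses-API-valid keys, per the query result above.
const stream = ai({
  adapter: openaiText(),
  messages,
  model: "gpt-4o",
  providerOptions: {
    temperature: 0.7,
    top_p: 0.9,
    max_output_tokens: 1000,
    parallel_tool_calls: true,
  },
});
```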

🤖 Prompt for AI Agents
In docs/adapters/openai.md around lines 135 to 149, the sample providerOptions
uses Chat Completions parameters that are invalid for the OpenAI Responses API;
replace max_tokens with max_output_tokens and remove unsupported keys
(frequency_penalty, presence_penalty, stop), and if needed add valid Responses
API options such as temperature, top_p, max_output_tokens, tool_choice, or
parallel_tool_calls to match client.responses.create() parameter names; update
the example providerOptions to only include supported Responses API fields.

Comment on lines +334 to +346
// --- Structured response with tools
import { toolDefinition } from "@tanstack/ai";
const weatherTool = toolDefinition({
  name: "getWeather",
  description: "Get the current weather for a city",
  parameters: {
    city: { type: "string", description: "City name" },
  },
  async execute({ city }) {
    // Implementation that fetches weather info
    return { temperature: 72, condition: "Sunny" };
  },
});

⚠️ Potential issue | 🟡 Minor

Inconsistent toolDefinition property: parameters vs inputSchema.

This example uses parameters (Line 339), but the toolDefinition example at Lines 109-118 uses inputSchema. Ensure consistency across examples.

 const weatherTool = toolDefinition({
   name: "getWeather",
   description: "Get the current weather for a city",
-  parameters: {
-    city: { type: "string", description: "City name" },
-  },
+  inputSchema: z.object({
+    city: z.string().describe("City name"),
+  }),
   async execute({ city }) {
     // Implementation that fetches weather info
     return { temperature: 72, condition: "Sunny" };
   },
 });

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In docs/api/ai.md around lines 334 to 346, the example uses a toolDefinition
property named "parameters" while earlier examples use "inputSchema"; change
this snippet to use the same property name ("inputSchema") and structure as the
other example: replace the parameters block with an inputSchema that defines the
city string parameter, and update any references in the execute signature to
match the inputSchema shape so examples are consistent across the doc.

Comment on lines +176 to 178
import { ai } from '@tanstack/ai';
import { openaiText } from '@tanstack/ai-openai';


⚠️ Potential issue | 🟠 Major

NDJSON error chunk shape is inconsistent with the document’s own examples.

Earlier examples include id, model, timestamp on chunks, but the server snippets’ errorChunk only includes { type, error }. Either update the snippet to include the same envelope fields, or explicitly state that error chunks are allowed to be minimal.

Suggested doc-snippet adjustment:

-        const errorChunk = {
-          type: 'error',
-          error: {
-            message: error.message || 'Unknown error',
-            code: error.code,
-          },
-        };
+        const errorChunk = {
+          type: 'error',
+          id: 'msg_1',
+          model: 'gpt-4o',
+          timestamp: Date.now(),
+          error: {
+            message: error?.message || 'Unknown error',
+            code: error?.code,
+          },
+        };

Also applies to: 183-187, 225-227, 239-243

🤖 Prompt for AI Agents
In docs/protocol/http-stream-protocol.md around lines 176-178 (and also apply
same change to 183-187, 225-227, 239-243): the NDJSON error chunk examples are
inconsistent with the earlier chunk envelope (which includes id, model,
timestamp). Either change each server-side errorChunk snippet to include the
same envelope fields (e.g., id, model, timestamp plus type and error) so the
shape matches the document’s examples, or add a brief note above the snippets
explicitly stating that error chunks may be minimal and do not include the
envelope fields; make the chosen approach consistent across all listed ranges.

Comment on lines +2 to 16
 id: TextStreamOptionsForModel
 title: TextStreamOptionsForModel
 ---

-# Type Alias: ChatStreamOptionsForModel\<TAdapter, TModel\>
+# Type Alias: TextStreamOptionsForModel\<TAdapter, TModel\>

 ```ts
-type ChatStreamOptionsForModel<TAdapter, TModel> = TAdapter extends AIAdapter<any, any, any, any, infer ModelProviderOptions, infer ModelInputModalities, infer MessageMetadata> ? Omit<ChatOptions, "model" | "providerOptions" | "responseFormat" | "messages"> & object : never;
+type TextStreamOptionsForModel<TAdapter, TModel> = TAdapter extends AIAdapter<any, any, any, any, infer ModelProviderOptions, infer ModelInputModalities, infer MessageMetadata> ? Omit<TextOptions, "model" | "providerOptions" | "responseFormat" | "messages"> & object : never;
 ```

 Defined in: [types.ts:883](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L883)

 Chat options constrained by a specific model's capabilities.
-Unlike ChatStreamOptionsUnion which creates a union over all models,
+Unlike TextStreamOptionsUnion which creates a union over all models,
 this type takes a specific model and constrains messages accordingly.

⚠️ Potential issue | 🟡 Minor

Description still says “Chat options” after rename to TextStreamOptionsForModel.

-Chat options constrained by a specific model's capabilities.
+Text options constrained by a specific model's capabilities.
🤖 Prompt for AI Agents
docs/reference/type-aliases/TextStreamOptionsForModel.md around lines 2 to 16:
the file header and paragraph still refer to "Chat options" after the type was
renamed; update the descriptive text to say "Text stream options" (or "Text
options") and adjust the comparison sentence to refer to TextStreamOptionsUnion
(keep the same intent: this type constrains text stream/messages for a specific
model rather than creating a union over all models). Ensure any leftover “chat”
wording is replaced and the description matches the type name.

Comment on lines +186 to +228
async function runSummarize(
  options: SummarizeActivityOptions<
    SummarizeAdapter<ReadonlyArray<string>, object>,
    string,
    false
  >,
): Promise<SummarizationResult> {
  const { adapter, model, text, maxLength, style, focus } = options
  const requestId = createId('summarize')
  const inputLength = text.length
  const startTime = Date.now()

  aiEventClient.emit('summarize:started', {
    requestId,
    model,
    inputLength,
    timestamp: startTime,
  })

  const summarizeOptions: SummarizationOptions = {
    model,
    text,
    maxLength,
    style,
    focus,
  }

  const result = await adapter.summarize(summarizeOptions)

  const duration = Date.now() - startTime
  const outputLength = result.summary.length

  aiEventClient.emit('summarize:completed', {
    requestId,
    model,
    inputLength,
    outputLength,
    duration,
    timestamp: Date.now(),
  })

  return result
}

⚠️ Potential issue | 🟠 Major

Streaming path skips summarize:* telemetry; add parity with non-streaming
Today, stream: true won’t emit summarize:started / summarize:completed, which makes metrics incomplete.

 async function* runStreamingSummarize(
   options: SummarizeActivityOptions<
     SummarizeAdapter<ReadonlyArray<string>, object>,
     string,
     true
   >,
 ): AsyncIterable<StreamChunk> {
   const { adapter, model, text, maxLength, style, focus } = options
+  const requestId = createId('summarize')
+  const inputLength = text.length
+  const startTime = Date.now()
+
+  aiEventClient.emit('summarize:started', {
+    requestId,
+    model,
+    inputLength,
+    timestamp: startTime,
+  })

   const summarizeOptions: SummarizationOptions = {
     model,
     text,
     maxLength,
     style,
     focus,
   }

   const result = await adapter.summarize(summarizeOptions)
+  const duration = Date.now() - startTime
+  const outputLength = result.summary.length
+
+  aiEventClient.emit('summarize:completed', {
+    requestId,
+    model,
+    inputLength,
+    outputLength,
+    duration,
+    timestamp: Date.now(),
+  })

   // Yield content chunk with the summary
   yield {
     type: 'content',
     id: result.id,
     model: result.model,
     timestamp: Date.now(),
     delta: result.summary,
     content: result.summary,
     role: 'assistant',
   }

Also applies to: 234-273

🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/summarize/index.ts around lines 186-228
(and similarly for the streaming implementation at 234-273), the streaming path
does not emit the summarize:started and summarize:completed telemetry events;
update the streaming branch to mirror the non-streaming flow by: emit
summarize:started before processing (including requestId, model, inputLength,
timestamp), record startTime, call the adapter stream API, capture
outputLength/duration when the stream completes (or on final chunk), then emit
summarize:completed with requestId, model, inputLength, outputLength, duration,
timestamp, and finally return the same SummarizationResult shape; ensure
requestId is created the same way as non-streaming and errors also emit/handle
telemetry consistently.

Comment on lines +109 to +163
export abstract class BaseTextAdapter<
  TModels extends ReadonlyArray<string> = ReadonlyArray<string>,
  TProviderOptions extends object = Record<string, unknown>,
  TModelProviderOptionsByName extends Record<string, unknown> = Record<
    string,
    unknown
  >,
  TModelInputModalitiesByName extends Record<
    string,
    ReadonlyArray<Modality>
  > = Record<string, ReadonlyArray<Modality>>,
  TMessageMetadataByModality extends {
    text: unknown
    image: unknown
    audio: unknown
    video: unknown
    document: unknown
  } = DefaultMessageMetadataByModality,
> implements TextAdapter<
  TModels,
  TProviderOptions,
  TModelProviderOptionsByName,
  TModelInputModalitiesByName,
  TMessageMetadataByModality
> {
  readonly kind = 'text' as const
  abstract readonly name: string
  abstract readonly models: TModels

  // Type-only properties - never assigned at runtime
  declare _providerOptions?: TProviderOptions
  declare _modelProviderOptionsByName?: TModelProviderOptionsByName
  declare _modelInputModalitiesByName?: TModelInputModalitiesByName
  declare _messageMetadataByModality?: TMessageMetadataByModality

  protected config: TextAdapterConfig

  constructor(config: TextAdapterConfig = {}) {
    this.config = config
  }

  abstract chatStream(
    options: TextOptions<string, TProviderOptions>,
  ): AsyncIterable<StreamChunk>

  /**
   * Generate structured output using the provider's native structured output API.
   * Concrete implementations should override this to use provider-specific structured output.
   */
  abstract structuredOutput(
    options: StructuredOutputOptions<TProviderOptions>,
  ): Promise<StructuredOutputResult<unknown>>

  protected generateId(): string {
    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
  }
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine TextAdapterConfig
fd -e ts -e js | xargs rg "TextAdapterConfig" -A 5 -B 2

Repository: TanStack/ai

Length of output: 2577


🏁 Script executed:

# Let's also search for concrete implementations of BaseTextAdapter
rg "extends BaseTextAdapter|implements TextAdapter" -A 10 -B 2

Repository: TanStack/ai

Length of output: 9430


🏁 Script executed:

# Check the file structure to understand the context better
git ls-files | grep -E "(adapter|config)" | head -20

Repository: TanStack/ai

Length of output: 713


🏁 Script executed:

# Look at OpenAITextAdapter implementation
cat -n packages/typescript/ai-openai/src/adapters/text.ts | head -100

Repository: TanStack/ai

Length of output: 3809


🏁 Script executed:

# Look at GeminiTextAdapter implementation
cat -n packages/typescript/ai-gemini/src/adapters/text.ts | head -100

Repository: TanStack/ai

Length of output: 3603


🏁 Script executed:

# Check the chatStream method in OpenAITextAdapter
rg "chatStream" packages/typescript/ai-openai/src/adapters/text.ts -A 30

Repository: TanStack/ai

Length of output: 2275


🏁 Script executed:

# Check the chatStream method in GeminiTextAdapter
rg "chatStream" packages/typescript/ai-gemini/src/adapters/text.ts -A 30

Repository: TanStack/ai

Length of output: 1029


🏁 Script executed:

# Find createOpenAIClient function
rg "export.*createOpenAIClient|function createOpenAIClient" packages/typescript/ai-openai -A 20

Repository: TanStack/ai

Length of output: 1709


🏁 Script executed:

# Find createGeminiClient function
rg "export.*createGeminiClient|function createGeminiClient" packages/typescript/ai-gemini -A 20

Repository: TanStack/ai

Length of output: 1719


🏁 Script executed:

# Check OpenAIClientConfig interface
rg "interface OpenAIClientConfig|type OpenAIClientConfig" packages/typescript/ai-openai -B 2 -A 10

Repository: TanStack/ai

Length of output: 1564


🏁 Script executed:

# Check GeminiClientConfig interface
rg "interface GeminiClientConfig|type GeminiClientConfig" packages/typescript/ai-gemini -B 2 -A 10

Repository: TanStack/ai

Length of output: 1397


🌐 Web query:

OpenAI Node.js SDK configuration timeout retries headers per-request

💡 Result:

Short answer — how to configure these in the official OpenAI Node.js SDK:

  • Client-level:

    • timeout (ms): pass to the client constructor. Default = 10 minutes (600000 ms). Example: new OpenAI({ timeout: 20 * 1000, maxRetries: 0 }). [1][2]
    • maxRetries: pass to the client constructor. Default = 2. Example above. [1][2]
    • fetchOptions: RequestInit to customize fetch behavior (proxy/agent, headers, etc.) for all requests. [1]
  • Per-request overrides:

    • Pass a second options object to the API call (same keys as client-level). Examples:
      • timeout override: await client.chat.completions.create(params, { timeout: 5 * 1000 });
      • maxRetries override: await client.chat.completions.create(params, { maxRetries: 5 });
      • per-request headers: use fetchOptions.headers in the second arg: await client.chat.completions.create(params, { fetchOptions: { headers: { 'X-My-Header': 'value' } } });. [1][2]

Notes:

  • Requests that time out are retried by default (2 attempts) unless you change maxRetries. [1][2]
  • You can also pass a custom fetch implementation via new OpenAI({ fetch }) or tweak lower-level fetch options via fetchOptions. [1]

Sources: OpenAI Node.js README (official repo) and package README. [1][2]

References
[1] openai/openai-node README (GitHub).
[2] openai package README (unpkg/npm).


🌐 Web query:

Google GenAI JavaScript SDK configuration timeout retries headers per-request

💡 Result:

Short answer — yes. In the Google GenAI JavaScript/TypeScript SDK you can configure timeout, retries and per-request headers either globally when you create the client or per-request via the request/config/http options.

How to set them (concise):

  • Client-level (constructor) and per-request fields

    • Use the SDK’s http/httpOptions (or per-call config) to set timeout and headers. timeout is in milliseconds; headers override/merge per-request. Example pattern: create client with default http options, then pass per-call options to override. (googleapis.github.io)
  • Timeout

    • Set timeout (ms) in the http/httpOptions (or RequestOptions.timeout for Vertex AI). Per-request timeout overrides the client default. (googleapis.github.io)
  • Retries

    • The SDK exposes retry configuration (retry / retry_options) you can set in http options or RequestOptions (Vertex AI uses google api_core retry objects in some implementations). Configure attempts, initial delay, backoff, jitter and which status codes to retry. Per-request retry options override constructor defaults. (googleapis.github.io)
  • Headers (per-request)

    • Use headers / customHeaders (or httpOptions.headers / RequestOptions.customHeaders) to pass per-request HTTP headers; they merge/override client defaults. There’s also apiClient / x-goog-api-client support for SDK identifiers. (cloud.google.com)

Minimal example (illustrative):

  • Client default:
    const ai = new GoogleGenAI({ /.../ , httpOptions: { timeout: 10000, headers: { 'X-App': 'me' }, retry_options: { attempts: 4, initial_delay:1 } } });
  • Per-call override:
    await ai.models.generateContent({ model:'gemini-2.5', contents:'...', httpOptions: { timeout: 2000, headers: { Authorization: 'Bearer ...' } } });

References:

  1. Google Gen AI JS SDK docs (client & examples). (googleapis.github.io)
  2. Vertex AI RequestOptions (customHeaders, timeout). (cloud.google.com)
  3. GenAI SDK http/retry options (httpOptions / retry_options, merging behavior). (googleapis.github.io)

If you want, tell me which API (Gemini Developer API vs Vertex AI), which SDK version, and I’ll give an exact code snippet for your case.

Citations:


TextAdapterConfig properties (timeout/maxRetries/headers) are defined but never applied; either remove them or pass them through to the SDK clients. Both the OpenAI and Google GenAI SDKs support timeout, retries, and headers at client-level initialization and as per-request overrides. Currently, OpenAIClientConfig and GeminiClientConfig only apply apiKey and baseURL, ignoring timeout/maxRetries/headers from TextAdapterConfig. Either remove these unused properties or apply them: at client creation in createOpenAIClient/createGeminiClient, and per-request in chatStream/structuredOutput.
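A minimal sketch of the "apply them" option on the OpenAI side; the TextAdapterConfig shape is restated from this review, and the SDK field names (timeout, maxRetries, defaultHeaders, per-request options) are from the official openai Node client:

```typescript
import OpenAI from 'openai'

interface TextAdapterConfig {
  apiKey?: string
  baseURL?: string
  timeout?: number
  maxRetries?: number
  headers?: Record<string, string>
}

// Client-level: forward the config instead of silently dropping it.
function createOpenAIClient(config: TextAdapterConfig = {}): OpenAI {
  return new OpenAI({
    apiKey: config.apiKey,
    baseURL: config.baseURL,
    timeout: config.timeout, // ms; SDK default is 10 minutes
    maxRetries: config.maxRetries, // SDK default is 2
    defaultHeaders: config.headers,
  })
}

// Per-request: the same knobs can be overridden via the second argument, e.g.
// await client.responses.create(params, { timeout: 5_000, maxRetries: 0 })
```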

Comment on lines +14 to +69
/**
 * Type-safe helper to create a messages array constrained by a model's supported modalities.
 *
 * This function provides compile-time checking that your messages only contain
 * content types supported by the specified model. It's particularly useful when
 * combining typed messages with untyped data (like from request.json()).
 *
 * @example
 * ```typescript
 * import { messages, chat } from '@tanstack/ai'
 * import { openai } from '@tanstack/ai-openai'
 *
 * const adapter = openai()
 *
 * // This will error at compile time because gpt-4o only supports text+image
 * const msgs = messages({ adapter, model: 'gpt-4o' }, [
 *   {
 *     role: 'user',
 *     content: [
 *       { type: 'video', source: { type: 'url', value: '...' } } // Error!
 *     ]
 *   }
 * ])
 * ```
 */
export function messages<
  TAdapter extends AIAdapter<any, any, any, any, any, any>,
  const TModel extends TAdapter extends AIAdapter<
    infer Models,
    any,
    any,
    any,
    any,
    any
  >
    ? Models[number]
    : string,
>(
  _options: { adapter: TAdapter; model: TModel },
  msgs: TAdapter extends AIAdapter<
    any,
    any,
    any,
    any,
    any,
    infer ModelInputModalities
  >
    ? TModel extends keyof ModelInputModalities
      ? ModelInputModalities[TModel] extends ReadonlyArray<Modality>
        ? Array<ConstrainedModelMessage<ModelInputModalities[TModel]>>
        : Array<ModelMessage>
      : Array<ModelMessage>
    : Array<ModelMessage>,
): typeof msgs {
  return msgs
}

⚠️ Potential issue | 🟡 Minor

TSDoc example likely out-of-date + consider ReadonlyArray + silence unused _options.

  • The example references chat and openai(); PR context suggests new entry points (e.g. ai() and split adapters). Update the snippet to match the actual exported API to avoid misleading public docs.
  • msgs should accept readonly arrays (common with as const) since the helper doesn’t mutate.
  • If noUnusedParameters/ESLint is enabled, _options may still error; void _options keeps runtime effectively unchanged.
 export function messages<
   TAdapter extends AIAdapter<any, any, any, any, any, any>,
   const TModel extends TAdapter extends AIAdapter<
     infer Models,
     any,
     any,
     any,
     any,
     any
   >
     ? Models[number]
     : string,
 >(
   _options: { adapter: TAdapter; model: TModel },
   msgs: TAdapter extends AIAdapter<
     any,
     any,
     any,
     any,
     any,
     infer ModelInputModalities
   >
     ? TModel extends keyof ModelInputModalities
       ? ModelInputModalities[TModel] extends ReadonlyArray<Modality>
-        ? Array<ConstrainedModelMessage<ModelInputModalities[TModel]>>
-        : Array<ModelMessage>
-      : Array<ModelMessage>
-    : Array<ModelMessage>,
+        ? ReadonlyArray<ConstrainedModelMessage<ModelInputModalities[TModel]>>
+        : ReadonlyArray<ModelMessage>
+      : ReadonlyArray<ModelMessage>
+    : ReadonlyArray<ModelMessage>,
 ): typeof msgs {
+  void _options
   return msgs
 }
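For the TSDoc example itself, a hedged update matching the new entry points could read (assuming openaiText() is the split text adapter and gpt-4o remains text+image only):

```typescript
import { messages } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const adapter = openaiText()

// Errors at compile time: gpt-4o accepts text + image input only
const msgs = messages({ adapter, model: 'gpt-4o' }, [
  {
    role: 'user',
    content: [
      { type: 'video', source: { type: 'url', value: '...' } }, // Error!
    ],
  },
])
```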
🤖 Prompt for AI Agents
packages/typescript/ai/src/activities/text/messages.ts lines 14-69: the TSDoc
example is outdated (references chat and openai()), the msgs parameter should
accept ReadonlyArray so callers can pass as const, and the unused _options
parameter should be silenced; update the doc example to use the current public
API (e.g., ai() and the new adapter entry points) to match exports, change the
msgs type to also accept ReadonlyArray<...> variants wherever arrays are
returned so readonly inputs compile, and add a runtime noop reference like void
_options at the top of the function body to avoid noUnusedParameters/ESLint
errors while preserving behavior.

@@ -0,0 +1,235 @@
import { toJSONSchema } from 'zod'

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find and inspect package.json files to check zod version
find . -name "package.json" -type f | head -20

Repository: TanStack/ai

Length of output: 924


🏁 Script executed:

# Check the specific file's line 181 to see the full comment
sed -n '175,190p' packages/typescript/ai/src/activities/text/tools/zod-converter.ts

Repository: TanStack/ai

Length of output: 477


🏁 Script executed:

# Check package.json in the project root and relevant workspace directories
if [ -f "package.json" ]; then
  echo "=== Root package.json zod dependency ===" 
  jq '.dependencies.zod // .devDependencies.zod // "not found"' package.json
fi

# Check in the typescript/ai package
if [ -f "packages/typescript/ai/package.json" ]; then
  echo "=== packages/typescript/ai/package.json zod dependency ==="
  jq '.dependencies.zod // .devDependencies.zod // "not found"' packages/typescript/ai/package.json
fi

Repository: TanStack/ai

Length of output: 176


🌐 Web query:

zod v4 toJSONSchema function API documentation

💡 Result:

z.toJSONSchema(schema, params?) — converts a Zod v4 schema to JSON Schema. Key details:

  • Signature: z.toJSONSchema(schema, ToJSONSchemaParams?). (zod.dev)
  • Important ToJSONSchemaParams:
    • target?: "draft-4" | "draft-7" | "draft-2020-12" | "openapi-3.0" (default: draft-2020-12). (zod.dev)
    • metadata?: $ZodRegistry<Record<string, any>> (use z.globalRegistry / .meta()). (zod.dev)
    • unrepresentable?: "throw" | "any" (default "throw") — controls handling of Zod types that have no JSON Schema equivalent. (zod.dev)
    • cycles?: "ref" | "throw" (default "ref") — how to handle cyclic schemas. (zod.dev)
    • reused?: "ref" | "inline" (default "inline") — how to handle reused schemas. (zod.dev)
    • uri?: (id: string) => string — mapper for external $ref URIs. (zod.dev)
  • Notes:
    • Some Zod types are unrepresentable (bigint, date, map, set, transform, etc.); default behavior is to throw unless unrepresentable: "any" is set. (zod.dev)
    • Converts many Zod checks to formats, patterns, numeric constraints, object additionalProperties behavior, etc.; see the JSON Schema docs for detailed mapping and examples. (zod.dev)

Official docs (full reference and examples): Zod JSON Schema docs. (zod.dev)

Citations:


🏁 Script executed:

# Check if zod package is installed and examine its exports
if [ -d "node_modules/zod" ]; then
  echo "=== Checking zod package exports ==="
  cat node_modules/zod/package.json | jq '.exports' | head -30
  
  echo -e "\n=== Checking zod lib/types.d.ts for toJSONSchema ==="
  grep -n "toJSONSchema" node_modules/zod/lib/types.d.ts | head -5
fi

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Search the codebase for any other references to Alcyone Labs or toJSONSchema to understand usage pattern
rg -i "alcyone|toJSONSchema" --type ts --type js -B 2 -A 2

Repository: TanStack/ai

Length of output: 22211


Update comment on line 181 to remove outdated fork reference.

The import `import { toJSONSchema } from 'zod'` is correct. However, the comment "Use Alcyone Labs fork which is compatible with Zod v4" is outdated: toJSONSchema is now part of Zod v4's standard API. Update the comment to reflect this, e.g. `// Use Zod's built-in toJSONSchema for OpenAI 3.0 compatibility`.
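For reference, a small usage sketch of the Zod v4 built-in (the schema and target choice are illustrative):

```typescript
import { z, toJSONSchema } from 'zod'

const Weather = z.object({
  city: z.string().describe('City name'),
  temperature: z.number().optional(),
})

// target defaults to draft-2020-12; draft-4, draft-7 and openapi-3.0 are also accepted
const jsonSchema = toJSONSchema(Weather, { target: 'draft-7' })
```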

🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/text/tools/zod-converter.ts around line
181, update the outdated comment that references the Alcyone Labs fork; replace
it with a current comment noting that toJSONSchema is provided by Zod v4 (e.g.
"// Use Zod's built-in toJSONSchema for OpenAI 3.0 compatibility") so the
comment accurately reflects that toJSONSchema is part of Zod v4's standard API.

Comment on lines +59 to +72
} else if (wasOptional) {
  // Make optional fields nullable by adding null to the type
  if (prop.type && !Array.isArray(prop.type)) {
    properties[propName] = {
      ...prop,
      type: [prop.type, 'null'],
    }
  } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
    properties[propName] = {
      ...prop,
      type: [...prop.type, 'null'],
    }
  }
}

⚠️ Potential issue | 🟡 Minor

Handle edge case: optional fields without type property.

The logic for making optional fields nullable assumes prop.type exists. However, JSON Schema properties can omit type when using oneOf, anyOf, or other combinators. If an optional field lacks a type property, it won't be made nullable, which could violate OpenAI's structured output requirements.

Consider adding a fallback for properties without type:

       } else if (wasOptional) {
         // Make optional fields nullable by adding null to the type
         if (prop.type && !Array.isArray(prop.type)) {
           properties[propName] = {
             ...prop,
             type: [prop.type, 'null'],
           }
         } else if (Array.isArray(prop.type) && !prop.type.includes('null')) {
           properties[propName] = {
             ...prop,
             type: [...prop.type, 'null'],
           }
+        } else if (!prop.type && (prop.oneOf || prop.anyOf || prop.allOf)) {
+          // For schema composition, wrap in anyOf with null
+          properties[propName] = {
+            ...prop,
+            anyOf: [...(prop.anyOf || [prop]), { type: 'null' }],
+          }
         }
       }
🤖 Prompt for AI Agents
In packages/typescript/ai/src/activities/text/tools/zod-converter.ts around
lines 59 to 72, the optional-to-nullable logic assumes prop.type exists; add a
fallback branch for when prop.type is undefined so optional fields without an
explicit type become nullable. Specifically, inside the wasOptional branch,
detect if prop.type is missing and set properties[propName] = { ...prop, type:
['null'] } (so the schema allows null); keep the existing branches for
single-type and array-type cases unchanged.

tannerlinsley
tannerlinsley previously approved these changes Dec 12, 2025
@tannerlinsley tannerlinsley dismissed their stale review December 12, 2025 21:15

I guess I hit "approve" instead of "start review" and now all of my queued comments are gone from the window...

* video generation

* text to speech and speech to text

* adding some cool audio UI to the dev panel

* small fixups

* ci: apply automated fixes

* client fixes on tool calls

* few more client fixups

* one last test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

♻️ Duplicate comments (3)
packages/typescript/ai/src/types.ts (2)

572-578: Past concern: Clarify relationship between outputSchema and output fields.

This issue was previously flagged. The ambiguity between output?: TOutput (line 571) and outputSchema?: z.ZodType (line 578) still exists, with no clear precedence or validation when both are provided.


794-798: Past concern: Standardize token usage field naming.

This issue was previously flagged. The usage object in ImageGenerationResult uses inputTokens and outputTokens, while other result types use promptTokens and completionTokens. This naming inconsistency should be addressed.

docs/adapters/openai.md (1)

147-161: Verify providerOptions parameters match the underlying API.

This issue was flagged in a previous review. The example uses max_tokens, frequency_penalty, presence_penalty, and stop which are Chat Completions API parameters. If the underlying adapter uses the Responses API (client.responses.create()), these should be updated to valid Responses API parameters like max_output_tokens.

#!/bin/bash
# Check which OpenAI API endpoint the text adapter uses
rg -n "responses\.create|chat\.completions\.create" packages/typescript/ai-openai/src/adapters/text.ts
🧹 Nitpick comments (15)
packages/typescript/ai/src/activities/transcription/adapter.ts (2)

61-74: Make ID generation robust (avoid Math.random() / collisions).

generateId() is collision-prone and not ideal for tracing. Prefer crypto.randomUUID() (with an isomorphic fallback) and consider keeping it stable-format for logs/telemetry.

 export abstract class BaseTranscriptionAdapter<
   TModels extends ReadonlyArray<string> = ReadonlyArray<string>,
   TProviderOptions extends object = Record<string, unknown>,
 > implements TranscriptionAdapter<TModels, TProviderOptions> {
@@
-  protected generateId(): string {
-    return `${this.name}-${Date.now()}-${Math.random().toString(36).substring(7)}`
-  }
+  protected generateId(): string {
+    const uuid =
+      typeof crypto !== 'undefined' && 'randomUUID' in crypto
+        ? crypto.randomUUID()
+        : `${Date.now()}-${Math.random().toString(16).slice(2)}`
+    return `${this.name}-${uuid}`
+  }
 }

61-65: Consider readonly config to prevent accidental mutation.

-  protected config: TranscriptionAdapterConfig
+  protected readonly config: TranscriptionAdapterConfig
packages/typescript/ai/src/activities/transcription/index.ts (2)

44-62: Avoid duplicating TranscriptionOptions shape (drift risk).

TranscriptionActivityOptions re-specifies fields that already exist in TranscriptionOptions (audio, language, prompt, responseFormat, providerOptions). Reuse the shared type so future changes don’t silently diverge.

-import type { TranscriptionResult } from '../../types'
+import type { TranscriptionOptions, TranscriptionResult } from '../../types'
@@
 export interface TranscriptionActivityOptions<
   TAdapter extends TranscriptionAdapter<ReadonlyArray<string>, object>,
   TModel extends TranscriptionModels<TAdapter>,
-> {
+> extends Omit<
+    TranscriptionOptions<TranscriptionProviderOptions<TAdapter>>,
+    'model'
+  > {
   /** The transcription adapter to use */
   adapter: TAdapter & { kind: typeof kind }
   /** The model name (autocompletes based on adapter) */
   model: TModel
-  /** The audio data to transcribe - can be base64 string, File, Blob, or Buffer */
-  audio: string | File | Blob | ArrayBuffer
-  /** The language of the audio in ISO-639-1 format (e.g., 'en') */
-  language?: string
-  /** An optional prompt to guide the transcription */
-  prompt?: string
-  /** The format of the transcription output */
-  responseFormat?: 'json' | 'text' | 'srt' | 'verbose_json' | 'vtt'
-  /** Provider-specific options for transcription */
-  providerOptions?: TranscriptionProviderOptions<TAdapter>
 }

109-118: Optional: add a runtime kind assert for better DX on mis-wired adapters.

Type checks can be bypassed with any; a simple runtime guard would produce a clearer error than “transcribe is not a function”.
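A hedged sketch of such a guard (the error wording and placement are illustrative):

```typescript
function assertTranscriptionAdapter(adapter: {
  kind?: string
  name?: string
}): void {
  if (adapter.kind !== 'transcription') {
    throw new Error(
      `transcriptionActivity expected an adapter with kind 'transcription', ` +
        `got '${String(adapter.kind)}' from '${adapter.name ?? 'unknown adapter'}'`,
    )
  }
}
```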

packages/typescript/ai-client/src/chat-client.ts (1)

472-476: Consider adding safeguards for continuation chains.

The current design allows indefinite continuation: stream → drain → checkForContinuation → new stream → repeat. While this enables multi-turn tool calling, it could theoretically loop indefinitely if tool execution patterns create cycles.

Consider adding:

  • A maximum continuation depth counter
  • A circuit breaker if the same tool fails repeatedly
  • Logging/telemetry to track continuation chain length

This is not an immediate issue but may help prevent edge-case runaway scenarios in production.
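A minimal sketch of a depth cap around the stream → drain → continuation loop (names are illustrative, not the client's actual internals):

```typescript
const MAX_CONTINUATIONS = 8 // illustrative ceiling

// runTurn resolves true when tool results require another model turn
async function runWithContinuations(
  runTurn: () => Promise<boolean>,
): Promise<void> {
  for (let depth = 0; depth < MAX_CONTINUATIONS; depth++) {
    const needsAnotherTurn = await runTurn()
    if (!needsAnotherTurn) return
  }
  throw new Error(`Continuation chain exceeded ${MAX_CONTINUATIONS} turns`)
}
```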

packages/typescript/ai/src/types.ts (2)

1054-1056: Consider renaming chatStream to align with Text-based terminology.

The AIAdapter.chatStream method retains "chat" terminology while the rest of the codebase has been renamed to use "Text" (e.g., TextOptions, TextCompletionChunk, TextStreamOptionsUnion).

If AIAdapter is the legacy interface being maintained for backwards compatibility, this is acceptable. However, if it's part of the new API surface, consider renaming to textStream for consistency.


1240-1283: Consider refactoring to reduce code duplication.

The TextOptionsForTextAdapter type (lines 1240-1283) duplicates the logic from TextStreamOptionsForModel (lines 1128-1183). Both types follow the same pattern:

  • Omit the same fields from TextOptions
  • Apply identical providerOptions narrowing
  • Use the same messages constraining logic based on ModelInputModalities

Consider extracting this pattern into a shared utility type to improve maintainability:

type BaseTextOptionsForAdapter<
  TAdapter,
  TModel extends string,
  ModelProviderOptions,
  ModelInputModalities,
  MessageMetadata
> = Omit<TextOptions, 'model' | 'providerOptions' | 'responseFormat' | 'messages'> & {
  adapter: TAdapter
  model: TModel
  providerOptions?: TModel extends keyof ModelProviderOptions
    ? ModelProviderOptions[TModel]
    : never
  messages: // ... shared logic
}

Then both types could be defined more concisely by extracting the appropriate generics from their respective adapter interfaces.

packages/typescript/ai-gemini/src/model-meta.ts (1)

782-793: Consider reordering type alias after the const declaration for consistency.

GeminiImageModels type alias (line 784) is defined before GEMINI_IMAGE_MODELS const (line 786), while GeminiModels (line 782) follows the pattern of being defined after GEMINI_MODELS (line 771). While TypeScript allows forward references in type positions, consistent ordering improves readability.

 export type GeminiModels = (typeof GEMINI_MODELS)[number]
 
-export type GeminiImageModels = (typeof GEMINI_IMAGE_MODELS)[number]
-
 export const GEMINI_IMAGE_MODELS = [
   GEMINI_2_5_FLASH_IMAGE.name,
   GEMINI_2_FLASH_IMAGE.name,
   IMAGEN_3.name,
   IMAGEN_4_GENERATE.name,
   IMAGEN_4_GENERATE_FAST.name,
   IMAGEN_4_GENERATE_ULTRA.name,
 ] as const
+
+export type GeminiImageModels = (typeof GEMINI_IMAGE_MODELS)[number]
packages/typescript/ai/src/activities/tts/index.ts (1)

55-57: Consider making format type more flexible for provider extensibility.

The format option is hardcoded to OpenAI-specific formats ('mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm'). Gemini TTS may support different formats. Consider extracting this from the adapter type or using a more generic string type with provider-specific constraints.

   /** The output audio format */
-  format?: 'mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm'
+  format?: string

Alternatively, extract format types from the adapter similar to how models are extracted, allowing each provider to define their supported formats.

packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (2)

12-17: Naming convention: timestamp_granularities uses snake_case.

Per coding guidelines, camelCase should be used for function and variable names. However, timestamp_granularities uses snake_case to match OpenAI's API. This is a reasonable trade-off for API consistency, but consider documenting this decision or using camelCase (timestampGranularities) with internal conversion.

If maintaining snake_case for API parity is intentional (common pattern when matching external APIs), consider adding a brief comment explaining this choice.
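If the camelCase route were taken instead, the conversion layer could be as small as this (the option shape is restated from this file; the mapper itself is hypothetical):

```typescript
interface TranscriptionProviderOptions {
  temperature?: number
  timestampGranularities?: Array<'word' | 'segment'>
}

// Map the public camelCase option onto OpenAI's snake_case wire parameter.
function toWireParams(options: TranscriptionProviderOptions) {
  return {
    temperature: options.temperature,
    timestamp_granularities: options.timestampGranularities,
  }
}
```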


1-18: Consider expanding provider options for parity with OpenAI API.

The interface only exposes temperature and timestamp_granularities. The OpenAI transcription API supports additional useful options like language, prompt, and response_format (referenced in the relevant snippet from transcribe-provider-options.ts). Consider whether these should be exposed for full API coverage, or document that this is an intentionally minimal interface.

Would you like me to help expand this interface with additional OpenAI transcription options?

packages/typescript/ai-gemini/src/index.ts (1)

5-71: Export parity: consider adding a GeminiTTSModel type re-export (missing today).
You export GeminiTTSModels (value) but don’t appear to export the corresponding model-name type (parity with GeminiTextModel / GeminiImageModel). This is a small DX gap for consumers wanting model autocomplete for TTS.

Proposed patch:

 export { GEMINI_TTS_MODELS as GeminiTTSModels } from './model-meta'
 export type { GeminiModels as GeminiTextModel } from './model-meta'
 export type { GeminiImageModels as GeminiImageModel } from './model-meta'
+export type { GeminiTTSModels as GeminiTTSModel } from './model-meta'
packages/typescript/ai/src/activities/video/index.ts (1)

114-226: Return type isn’t wired to request (you define VideoActivityResult but don’t use it).
Right now videoActivity always returns Promise<VideoJobResult | VideoStatusResult | VideoUrlResult>, so callers don’t get a type-safe result keyed by request.

One way to connect the pieces:

-export async function videoActivity<
+export async function videoActivity<
   TAdapter extends VideoAdapter<ReadonlyArray<string>, object>,
   TModel extends VideoModels<TAdapter>,
+  TRequest extends 'create' | 'status' | 'url' = 'create',
 >(
-  options:
-    | VideoCreateOptions<TAdapter, TModel>
-    | VideoStatusOptions<TAdapter, TModel>
-    | VideoUrlOptions<TAdapter, TModel>,
-): Promise<VideoJobResult | VideoStatusResult | VideoUrlResult> {
+  options: VideoActivityOptions<TAdapter, TModel, TRequest>,
+): VideoActivityResult<TRequest> {
   const { adapter, request = 'create' } = options
   switch (request) {
     case 'status': {
-      const statusOptions = options as VideoStatusOptions<TAdapter, TModel>
-      return adapter.getVideoStatus(statusOptions.jobId)
+      return adapter.getVideoStatus((options as VideoStatusOptions<TAdapter, TModel>).jobId) as any
     }
     case 'url': {
-      const urlOptions = options as VideoUrlOptions<TAdapter, TModel>
-      return adapter.getVideoUrl(urlOptions.jobId)
+      return adapter.getVideoUrl((options as VideoUrlOptions<TAdapter, TModel>).jobId) as any
     }
     case 'create':
     default: {
-      const createOptions = options as VideoCreateOptions<TAdapter, TModel>
+      const createOptions = options as VideoCreateOptions<TAdapter, TModel>
       return adapter.createVideoJob({
         model: createOptions.model,
         prompt: createOptions.prompt,
         size: createOptions.size,
         duration: createOptions.duration,
         providerOptions: createOptions.providerOptions,
-      })
+      }) as any
     }
   }
 }
packages/typescript/ai-openai/src/adapters/tts.ts (1)

55-79: Avoid as OpenAITTSVoice/OpenAITTSFormat casts; validate or type the public API.
Today voice/format can be any string at the @tanstack/ai layer, and the cast defeats type safety. If you want OpenAI-specific typing, consider exposing OpenAI-specific option types (or runtime guards) so invalid values fail predictably before the API call.
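A hedged runtime-guard sketch; the voice list mirrors OpenAI's commonly documented TTS voices and should be treated as an assumption, not the adapter's source of truth:

```typescript
const OPENAI_TTS_VOICES = [
  'alloy',
  'echo',
  'fable',
  'onyx',
  'nova',
  'shimmer',
] as const
type OpenAITTSVoice = (typeof OPENAI_TTS_VOICES)[number]

// Fail predictably before the API call instead of casting an arbitrary string.
function assertOpenAIVoice(voice: string): asserts voice is OpenAITTSVoice {
  if (!(OPENAI_TTS_VOICES as ReadonlyArray<string>).includes(voice)) {
    throw new Error(
      `Unsupported OpenAI TTS voice '${voice}'. ` +
        `Expected one of: ${OPENAI_TTS_VOICES.join(', ')}`,
    )
  }
}
```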

packages/typescript/ai-openai/src/video/video-provider-options.ts (1)

88-118: toApiSeconds can produce invalid OpenAIVideoSeconds unless callers validate first.
Right now toApiSeconds(5) returns '5' as OpenAIVideoSeconds. Consider either (a) folding validation into conversion, or (b) making the validation+conversion helper the “one true” path.

Example option:

-export function toApiSeconds(
-  seconds: number | string | undefined,
-): OpenAIVideoSeconds | undefined {
-  if (seconds === undefined) return undefined
-  return String(seconds) as OpenAIVideoSeconds
-}
+export function toApiSeconds(
+  model: string,
+  seconds: number | string | undefined,
+): OpenAIVideoSeconds | undefined {
+  validateVideoSeconds(model, seconds)
+  if (seconds === undefined) return undefined
+  return String(seconds) as OpenAIVideoSeconds
+}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96a08b1 and 2b56f66.

⛔ Files ignored due to path filters (1)
  • packages/typescript/smoke-tests/adapters/fixtures/test-audio.mp3 is excluded by !**/*.mp3
📒 Files selected for processing (31)
  • docs/adapters/gemini.md (4 hunks)
  • docs/adapters/openai.md (9 hunks)
  • docs/config.json (4 hunks)
  • docs/guides/text-to-speech.md (1 hunks)
  • docs/guides/transcription.md (1 hunks)
  • docs/guides/video-generation.md (1 hunks)
  • packages/typescript/ai-client/src/chat-client.ts (4 hunks)
  • packages/typescript/ai-gemini/src/adapters/tts.ts (1 hunks)
  • packages/typescript/ai-gemini/src/index.ts (1 hunks)
  • packages/typescript/ai-gemini/src/model-meta.ts (10 hunks)
  • packages/typescript/ai-openai/src/adapters/transcription.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/tts.ts (1 hunks)
  • packages/typescript/ai-openai/src/adapters/video.ts (1 hunks)
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (1 hunks)
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts (1 hunks)
  • packages/typescript/ai-openai/src/index.ts (1 hunks)
  • packages/typescript/ai-openai/src/model-meta.ts (7 hunks)
  • packages/typescript/ai-openai/src/video/video-provider-options.ts (1 hunks)
  • packages/typescript/ai-react/src/use-chat.ts (1 hunks)
  • packages/typescript/ai-solid/src/use-chat.ts (1 hunks)
  • packages/typescript/ai-svelte/src/create-chat.svelte.ts (1 hunks)
  • packages/typescript/ai-vue/src/use-chat.ts (1 hunks)
  • packages/typescript/ai/src/activities/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/transcription/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/transcription/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/tts/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/tts/index.ts (1 hunks)
  • packages/typescript/ai/src/activities/video/adapter.ts (1 hunks)
  • packages/typescript/ai/src/activities/video/index.ts (1 hunks)
  • packages/typescript/ai/src/ai.ts (1 hunks)
  • packages/typescript/ai/src/types.ts (11 hunks)
✅ Files skipped from review due to trivial changes (3)
  • docs/guides/transcription.md
  • docs/guides/video-generation.md
  • packages/typescript/ai-svelte/src/create-chat.svelte.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/typescript/ai-openai/src/model-meta.ts
  • packages/typescript/ai/src/activities/index.ts
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-vue/src/use-chat.ts
  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai-react/src/use-chat.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai-solid/src/use-chat.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-vue/src/use-chat.ts
  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai-react/src/use-chat.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai-solid/src/use-chat.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
packages/typescript/*/src/adapters/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Files:

  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
packages/typescript/*/src/model-meta.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-gemini/src/model-meta.ts
packages/typescript/*/src/index.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Export tree-shakeable adapters with clear subpath exports in package.json (e.g., @tanstack/ai/adapters, @tanstack/ai-openai/adapters) to minimize bundle size

Files:

  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
🧠 Learnings (10)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.784Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.784Z
Learning: Ensure provider-agnostic design where the core AI SDK remains independent of any specific AI provider implementation
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • docs/guides/text-to-speech.md
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • docs/adapters/openai.md
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • docs/adapters/gemini.md
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai-react/src/use-chat.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • docs/adapters/openai.md
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • docs/adapters/gemini.md
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • packages/typescript/ai/src/activities/video/index.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-gemini/src/adapters/tts.ts
  • docs/adapters/openai.md
  • packages/typescript/ai-openai/src/adapters/transcription.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/activities/transcription/index.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-openai/src/adapters/tts.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
  • packages/typescript/ai-openai/src/adapters/video.ts
  • packages/typescript/ai/src/ai.ts
  • packages/typescript/ai-gemini/src/index.ts
  • docs/adapters/gemini.md
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-openai/src/audio/tts-provider-options.ts
  • packages/typescript/ai-openai/src/audio/transcription-provider-options.ts
  • packages/typescript/ai/src/activities/tts/index.ts
  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai-gemini/src/index.ts
  • packages/typescript/ai-openai/src/index.ts
📚 Learning: 2025-12-13T17:09:09.784Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.784Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top

Applied to files:

  • packages/typescript/ai-react/src/use-chat.ts
  • docs/adapters/openai.md
  • docs/adapters/gemini.md
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai/src/activities/tts/adapter.ts
  • packages/typescript/ai-openai/src/video/video-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai/src/activities/transcription/adapter.ts
  • packages/typescript/ai/src/types.ts
  • packages/typescript/ai/src/activities/video/adapter.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Applied to files:

  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai/src/types.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution

Applied to files:

  • docs/adapters/openai.md
  • packages/typescript/ai/src/types.ts
  • docs/adapters/gemini.md
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference

Applied to files:

  • docs/adapters/openai.md
  • packages/typescript/ai/src/types.ts
  • docs/adapters/gemini.md
🧬 Code graph analysis (9)
packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (2)
packages/typescript/ai-openai/src/index.ts (1)
  • OpenAITranscriptionProviderOptions (81-81)
packages/typescript/ai-openai/src/audio/transcribe-provider-options.ts (1)
  • TranscribeProviderOptions (1-77)
packages/typescript/ai/src/activities/tts/adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TTSOptions (876-889)
  • TTSResult (894-907)
packages/typescript/ai-openai/src/video/video-provider-options.ts (1)
packages/typescript/ai-openai/src/index.ts (4)
  • OpenAIVideoSize (57-57)
  • OpenAIVideoProviderOptions (55-55)
  • OpenAIVideoModelProviderOptionsByName (56-56)
  • OpenAIVideoDuration (58-58)
packages/typescript/ai-gemini/src/model-meta.ts (2)
packages/typescript/ai-gemini/src/text/text-provider-options.ts (3)
  • GeminiSafetyOptions (16-23)
  • GeminiGenerationConfigOptions (25-168)
  • GeminiCachedContentOptions (170-175)
packages/typescript/ai-gemini/src/index.ts (5)
  • GeminiModels (69-69)
  • GEMINI_MODELS (66-66)
  • GeminiImageModels (70-70)
  • GEMINI_IMAGE_MODELS (67-67)
  • GEMINI_TTS_MODELS (68-68)
packages/typescript/ai-openai/src/adapters/transcription.ts (6)
packages/typescript/ai/src/activities/index.ts (1)
  • BaseTranscriptionAdapter (247-247)
packages/typescript/ai/src/activities/transcription/index.ts (1)
  • BaseTranscriptionAdapter (125-125)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_TRANSCRIPTION_MODELS (1720-1725)
packages/typescript/ai-openai/src/audio/transcription-provider-options.ts (1)
  • OpenAITranscriptionProviderOptions (4-18)
packages/typescript/ai/src/types.ts (3)
  • TranscriptionOptions (917-932)
  • TranscriptionResult (967-982)
  • TranscriptionSegment (937-950)
packages/typescript/ai/src/activities/transcription/adapter.ts (1)
  • generateId (71-73)
packages/typescript/ai/src/activities/transcription/adapter.ts (1)
packages/typescript/ai/src/types.ts (2)
  • TranscriptionOptions (917-932)
  • TranscriptionResult (967-982)
packages/typescript/ai-openai/src/adapters/tts.ts (4)
packages/typescript/ai-openai/src/model-meta.ts (1)
  • OPENAI_TTS_MODELS (1711-1715)
packages/typescript/ai-openai/src/audio/tts-provider-options.ts (2)
  • OpenAITTSVoice (4-15)
  • OpenAITTSFormat (20-20)
packages/typescript/ai/src/types.ts (2)
  • TTSOptions (876-889)
  • TTSResult (894-907)
packages/typescript/ai-openai/src/audio/audio-provider-options.ts (3)
  • validateAudioInput (71-75)
  • validateSpeed (56-62)
  • validateInstructions (64-69)
packages/typescript/ai/src/activities/video/adapter.ts (4)
packages/typescript/ai/src/activities/index.ts (3)
  • VideoAdapterConfig (211-211)
  • VideoAdapter (210-210)
  • BaseVideoAdapter (209-209)
packages/typescript/ai/src/activities/video/index.ts (3)
  • VideoAdapterConfig (229-229)
  • VideoAdapter (229-229)
  • BaseVideoAdapter (230-230)
packages/typescript/ai/src/ai.ts (1)
  • VideoAdapter (313-313)
packages/typescript/ai/src/types.ts (4)
  • VideoGenerationOptions (811-824)
  • VideoJobResult (831-836)
  • VideoStatusResult (843-852)
  • VideoUrlResult (859-866)
packages/typescript/ai/src/ai.ts (4)
packages/typescript/ai/src/activities/index.ts (13)
  • GenerateAdapter (292-292)
  • TextAdapter (128-128)
  • EmbeddingAdapter (149-149)
  • SummarizeAdapter (168-168)
  • ImageAdapter (188-188)
  • VideoAdapter (210-210)
  • AnyAdapter (295-302)
  • TextModels (120-120)
  • EmbeddingModels (143-143)
  • SummarizeModels (162-162)
  • ImageModels (181-181)
  • VideoModels (201-201)
  • AnyAIAdapter (319-330)
packages/typescript/ai/src/index.ts (3)
  • GenerateAdapter (6-6)
  • AnyAdapter (5-5)
  • ai (3-3)
packages/typescript/ai/src/types.ts (7)
  • TextAdapter (1286-1286)
  • EmbeddingAdapter (1286-1286)
  • SummarizeAdapter (1286-1286)
  • AnyAdapter (1286-1286)
  • VideoStatusResult (843-852)
  • VideoUrlResult (859-866)
  • VideoJobResult (831-836)
packages/typescript/ai/src/activities/video/adapter.ts (1)
  • VideoAdapter (32-65)
🔇 Additional comments (34)
packages/typescript/ai-react/src/use-chat.ts (1)

83-91: LGTM! Behavioral improvement prevents premature request abortion.

Removing isLoading from the effect dependencies is correct and prevents the cleanup from running whenever loading state toggles. This ensures that continuation requests and streaming responses aren't aborted mid-flight. The cleanup now properly runs only on unmount or when the client instance changes, which aligns with React best practices for resource lifecycle management.

The comments clearly document the reasoning and warn against reintroducing the dependency.
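For readers skimming, a minimal sketch of the pattern this comment describes, assuming a `client` with an idempotent `stop()` (names taken from the surrounding review, not verified against the source):

```ts
import { useEffect } from 'react'

function useStopOnUnmount(client: { stop: () => void }) {
  useEffect(() => {
    // isLoading is deliberately NOT a dependency: the cleanup runs only on
    // unmount or when the client instance changes, so in-flight streams are
    // never aborted just because the loading flag toggled.
    return () => {
      client.stop()
    }
  }, [client])
}
```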


packages/typescript/ai-client/src/chat-client.ts (3)

29-29: LGTM! Clean queue declaration.

The post-stream action queue is properly typed and initialized.


401-407: LGTM! Deferred continuation prevents nested streams.

The conditional queueing correctly prevents checkForContinuation from triggering a new stream while one is in progress.
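A rough sketch of the deferral being described, assuming a `postStreamActions` queue and a streaming flag (field names are illustrative, not copied from chat-client.ts):

```ts
class SketchClient {
  private postStreamActions: Array<() => Promise<void>> = []
  private isStreaming = false

  private async checkForContinuation(): Promise<void> {
    /* decide whether another request is needed */
  }

  async addToolResult(): Promise<void> {
    if (this.isStreaming) {
      // Defer: run after the current stream settles instead of nesting streams
      this.postStreamActions.push(() => this.checkForContinuation())
    } else {
      await this.checkForContinuation()
    }
  }
}
```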


443-449: LGTM! Consistent deferred continuation pattern.

The queueing logic mirrors the approach in addToolResult, maintaining consistency across tool-related operations.

packages/typescript/ai-vue/src/use-chat.ts (1)

42-44: LGTM! Unconditional cleanup is more robust.

Removing the isLoading guard and unconditionally calling client.stop() on disposal is a good practice. This prevents potential resource leaks from edge cases where streaming activity exists but isLoading might not accurately reflect it. The stop() method is idempotent—it checks if an abort controller exists before aborting and safely handles multiple calls.

Vue, React, and Solid all follow this same unconditional cleanup pattern. Note that Svelte intentionally handles cleanup differently (manual stop() calls only) due to Svelte 5 runes limitations, as documented in the implementation.
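The Vue variant of the same idea, sketched with `onScopeDispose` (the `client` shape is assumed):

```ts
import { onScopeDispose } from 'vue'

function useStopOnDispose(client: { stop: () => void }) {
  // Runs when the effect scope is disposed (e.g. component unmount).
  // stop() is idempotent, so calling it without an active stream is safe.
  onScopeDispose(() => {
    client.stop()
  })
}
```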

packages/typescript/ai/src/types.ts (3)

748-982: LGTM - Well-structured multimodal types.

The new Image Generation (lines 748-799), Video Generation (lines 801-867), TTS (lines 868-907), and Transcription (lines 909-982) types are well-structured with comprehensive JSDoc documentation. The experimental markers on Video types appropriately signal the feature's status.


1209-1286: LGTM - Consistent adapter extraction types.

The new adapter extraction types (ExtractModelsFromTextAdapter, ExtractModelsFromEmbeddingAdapter, ExtractModelsFromSummarizeAdapter, ExtractModelsFromAnyAdapter) follow consistent patterns and properly support the tree-shakeable adapter architecture outlined in the coding guidelines. The type-only re-exports are appropriately structured.

Based on learnings, these extraction types enable tree-shakeable adapter architecture by allowing model extraction from specialized adapters.
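As a hedged illustration of the extraction pattern (the real generics in types.ts carry more parameters):

```ts
// A text adapter carries its supported model names as a tuple type...
interface TextAdapter<TModels extends ReadonlyArray<string>> {
  kind: 'text'
  models: TModels
}

// ...and the extraction type infers that tuple and widens it to a union.
type ExtractModelsFromTextAdapter<TAdapter> =
  TAdapter extends TextAdapter<infer TModels> ? TModels[number] : never

declare const adapter: TextAdapter<['gpt-4o', 'gpt-4o-mini']>
type Models = ExtractModelsFromTextAdapter<typeof adapter> // 'gpt-4o' | 'gpt-4o-mini'
```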


7-12: The adapter types are properly implemented and exported. All four adapter types (TextAdapter, EmbeddingAdapter, SummarizeAdapter, AnyAdapter) are correctly defined in their respective adapter modules and exported from ./activities/index.ts, matching the import statement in types.ts. The tree-shakeable adapter architecture is correctly structured.

packages/typescript/ai-solid/src/use-chat.ts (1)

65-73: Cleanup improvement looks good—cleanup behavior is effectively consistent across all framework adapters.

Removing the isLoading guard ensures that stop() is always called on component unmount, which is the correct behavior since the stop() method should be idempotent. Any in-flight requests should be cancelled when the component disposes, regardless of the current loading state. The added comments accurately describe the SolidJS disposal lifecycle.

Cleanup patterns across frameworks are functionally equivalent:

  • React: useEffect with [client] dependency (cleanup on unmount or when client changes, though client only changes when clientId changes, which is rare)
  • SolidJS: createEffect without dependencies (cleanup on unmount only)
  • Vue: onScopeDispose (cleanup on scope disposal)
  • Svelte: Manual cleanup required due to framework constraints

The slight differences in dependency arrays reflect framework-specific lifecycle patterns but result in the same practical behavior: cleanup runs on unmount, and stop() is reliably called to cancel in-flight requests.

packages/typescript/ai-openai/src/audio/tts-provider-options.ts (1)

1-31: LGTM!

The TTS provider options type definitions are well-structured with appropriate JSDoc documentation. The types align with OpenAI's TTS API and follow the tree-shakeable adapter pattern established in the codebase.

docs/guides/text-to-speech.md (1)

1-248: LGTM!

Comprehensive and well-structured TTS documentation that covers both OpenAI and Gemini providers. The code examples are consistent with the new tree-shakeable adapter architecture, and the best practices section provides valuable guidance.

packages/typescript/ai-gemini/src/model-meta.ts (1)

797-801: Experimental TTS models appropriately documented.

The hardcoded model string is acceptable here given that the full model metadata constant (GEMINI_2_5_FLASH_TTS) is commented out as experimental. The @experimental JSDoc tag clearly communicates this status.

packages/typescript/ai/src/activities/tts/index.ts (1)

107-118: LGTM!

Clean activity implementation with proper delegation to the adapter and appropriate type re-exports for the tree-shakeable architecture.

docs/config.json (2)

72-80: LGTM!

Navigation entries for Text-to-Speech and Transcription guides are correctly added and follow the existing structure.


174-179: The reference targets exist: docs/reference/functions/text.md and docs/reference/functions/textOptions.md are both present in the repository.

packages/typescript/ai/src/activities/video/adapter.ts (2)

32-65: LGTM!

Well-designed video adapter interface with proper support for async job-based workflows. The generic parameters provide good type safety, and the @experimental tags appropriately communicate the feature's status.


73-101: LGTM!

The BaseVideoAdapter abstract class follows established patterns with proper use of declare for type-only properties and a useful generateId utility. The implementation provides a solid foundation for provider-specific adapters.

packages/typescript/ai-gemini/src/index.ts (1)

76-83: Deprecation messaging looks clear + keeps back-compat.
The @deprecated block points users to the new split adapters while preserving GeminiAdapter exports for existing users. As per coding guidelines, this aligns with the tree-shakeable adapter architecture.

packages/typescript/ai-openai/src/adapters/tts.ts (1)

132-169: Factories match the established pattern.
createOpenaiTTS + env-based openaiTTS are consistent with other adapters and keep the tree-shakeable surface clean.
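For context, a sketch of the explicit-key + env-key factory pair (signatures assumed from the pattern, not copied from adapters/tts.ts):

```ts
// Stand-in for the real adapter class exported by the package.
declare class OpenAITTSAdapter {
  constructor(config: { apiKey: string })
}

// Explicit-key factory: the caller supplies configuration directly.
export function createOpenaiTTS(config: { apiKey: string }) {
  return new OpenAITTSAdapter(config)
}

// Env-based convenience factory layered on top.
export function openaiTTS() {
  const apiKey = process.env.OPENAI_API_KEY
  if (!apiKey) throw new Error('OPENAI_API_KEY is not set')
  return createOpenaiTTS({ apiKey })
}
```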

packages/typescript/ai/src/activities/tts/adapter.ts (1)

1-72: Nice, minimal base contract for TTS adapters.
Clear discriminator (kind: 'tts') + typed provider options hook.
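Roughly the contract being praised, sketched from the details visible in this review (member names are assumptions where not shown):

```ts
interface TTSOptions<TProviderOptions = object> {
  model: string
  text: string
  providerOptions?: TProviderOptions
}

interface TTSResult {
  id: string
  model: string
  audio: string // base64-encoded audio
  format: string
  contentType: string
}

interface TTSAdapter<TProviderOptions = object> {
  kind: 'tts' // discriminator used by ai() routing
  generateSpeech(options: TTSOptions<TProviderOptions>): Promise<TTSResult>
}
```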

packages/typescript/ai-gemini/src/adapters/tts.ts (2)

155-192: Factory helpers look consistent with other Gemini adapters.
Explicit-key + env-key factory pair is clean and tree-shakeable. As per coding guidelines, this matches the split adapter architecture.


86-101: TTS wiring is correct and matches current @google/genai SDK behavior.

The responseModalities: ['AUDIO'] and speechConfig.voiceConfig.prebuiltVoiceConfig.voiceName structure conform to the official SDK documentation. Response handling correctly extracts audio via candidates[0].content.parts and processes inlineData for audio content.

packages/typescript/ai/src/activities/video/index.ts (1)

194-226: No action required—field names match the VideoAdapter contract.

The videoActivity function correctly passes duration to adapter.createVideoJob(), which aligns with the VideoGenerationOptions interface definition. The field is properly documented as "Video duration in seconds" in the type definition, so there is no naming inconsistency or risk of drifting from provider quirks.

packages/typescript/ai-openai/src/adapters/transcription.ts (1)

49-125: The verbose_json cast is necessary and correct — the issue is incomplete OpenAI SDK typings, not incompatibility.

The OpenAI SDK v6.9.1 typings only define Transcription { text: string }, but the actual verbose_json response includes duration, language, segments (with id, start, end, text, avg_logprob), and words (with word, start, end) per the official OpenAI API specification. The inline cast is a reasonable workaround for this known typings gap. The field mappings (avg_logprob → confidence via Math.exp()) align with the API schema. The real brittleness risk is if OpenAI's API response shape changes, not SDK type drift; consider extracting the cast to a dedicated type file or adding a comment explaining why the cast exists.
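For context, the mapping in question: `avg_logprob` is an average log-probability, so exponentiating maps it back onto (0, 1] as a confidence score. A sketch:

```ts
interface VerboseSegment {
  avg_logprob: number // average log-probability across the segment's tokens
}

// exp of a log-probability recovers the probability itself
const confidence = (segment: VerboseSegment) => Math.exp(segment.avg_logprob)
```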

packages/typescript/ai-openai/src/index.ts (1)

1-112: Well-structured tree-shakeable exports.

The organization with clear section comments and consistent export patterns aligns well with the coding guidelines for tree-shakeable adapter architecture. The @experimental JSDoc tag on Video adapter appropriately signals API instability.

docs/adapters/openai.md (2)

59-91: Comprehensive model documentation.

The categorized model lists provide clear guidance for users selecting models for each capability.


178-245: Well-documented new adapter capabilities.

The Embeddings and Summarization sections provide clear examples including batch operations and provider options. The API patterns are consistent with the text adapter examples.

packages/typescript/ai/src/ai.ts (3)

72-157: Type mappings would need updates if TTS/Transcription are added.

If the GenerateAdapter union is extended to include TTS and Transcription adapters, the AIOptionsFor and AIReturnFor conditional types will also need corresponding branches to handle these adapter kinds, otherwise they'll resolve to never.
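A hedged sketch of the branches that would need adding (discriminators follow the adapter kinds named in this review; the real conditional types in ai.ts are more involved):

```ts
// Placeholder result types for the sketch:
type TTSResult = { audio: string }
type TranscriptionResult = { text: string }

type AIReturnFor<TAdapter> = TAdapter extends { kind: 'tts' }
  ? Promise<TTSResult>
  : TAdapter extends { kind: 'transcription' }
    ? Promise<TranscriptionResult>
    : never // ...plus the existing text/embed/summarize/image/video branches
```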


293-302: Clean routing implementation.

The delegation pattern using activityMap is simple and extensible. The error message for unknown adapter kinds is helpful for debugging.


308-339: Comprehensive re-exports for public API surface.

The re-exports provide a clean public API, including both current adapter types and legacy aliases for backwards compatibility.

packages/typescript/ai-openai/src/adapters/video.ts (3)

144-153: Graceful 404 handling in status check.

Returning a failed status instead of throwing on 404 is appropriate for polling scenarios where the job may have been cleaned up.
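Sketched, with the result shape assumed to follow `VideoStatusResult` as used elsewhere in this review:

```ts
function mapStatusResponse(jobId: string, httpStatus: number) {
  if (httpStatus === 404) {
    // A vanished job becomes a terminal 'failed' result instead of a thrown
    // error, so polling loops can exit cleanly.
    return {
      jobId,
      status: 'failed' as const,
      error: 'Video job not found; it may have expired or been cleaned up',
    }
  }
  return { jobId, status: 'processing' as const }
}
```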


303-323: Safe status mapping with sensible default.

Defaulting unknown statuses to 'processing' is a reasonable choice for forward compatibility when OpenAI adds new status values.
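A sketch of such a mapping (the provider-side status strings are assumptions; the fallback is the point):

```ts
type VideoStatus = 'processing' | 'completed' | 'failed'

const STATUS_MAP: Record<string, VideoStatus> = {
  queued: 'processing',
  in_progress: 'processing',
  completed: 'completed',
  failed: 'failed',
}

// Unknown provider statuses default to 'processing' so newly introduced
// intermediate states don't break polling.
const toVideoStatus = (providerStatus: string): VideoStatus =>
  STATUS_MAP[providerStatus] ?? 'processing'
```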


346-400: Consistent factory function pattern.

The createOpenaiVideo and openaiVideo factory functions follow the same pattern established by other adapters (text, embed, image), maintaining API consistency across the package.

Comment on lines +62 to +66
- `gemini-2.0-flash-exp` - Gemini 2.0 Flash (fast, efficient)
- `gemini-2.0-flash-lite` - Gemini 2.0 Flash Lite (fastest)
- `gemini-2.5-pro` - Gemini 2.5 Pro (most capable)
- `gemini-2.5-flash` - Gemini 2.5 Flash
- `gemini-exp-1206` - Experimental Pro model

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check Gemini model names in model-meta.ts
echo "=== Model definitions in model-meta.ts ==="
rg -n "name:" packages/typescript/ai-gemini/src/model-meta.ts | grep -i "gemini-2"

echo ""
echo "=== Full context for gemini-2.0-flash models ==="
rg -B2 -A2 "gemini-2\.0-flash" packages/typescript/ai-gemini/src/model-meta.ts

Repository: TanStack/ai

Length of output: 1214


🏁 Script executed:

#!/bin/bash
# Check model names used in the documentation
echo "=== Model references in gemini.md ==="
rg -n "gemini-2\.[0-9]" docs/adapters/gemini.md | head -30

Repository: TanStack/ai

Length of output: 712


Update model name from gemini-2.0-flash-exp to gemini-2.0-flash.

The documented model gemini-2.0-flash-exp does not exist in model-meta.ts. The actual model name is gemini-2.0-flash. This incorrect name appears in the model list (line 62) and throughout code examples (lines 25, 42, 96, 128, 141, 232), which will cause failures when users run these examples.

🤖 Prompt for AI Agents
In docs/adapters/gemini.md around lines 62 to 66 and at the other mentioned
locations (lines 25, 42, 96, 128, 141, 232), the model name
`gemini-2.0-flash-exp` is incorrect; replace every occurrence with the correct
model name `gemini-2.0-flash` so the examples and list match model-meta.ts and
will run successfully; update the model list entry and all code
examples/snippets to use `gemini-2.0-flash` consistently.

Comment on lines +68 to +71
### Embedding Models

- `gemini-embedding-001` - Text embedding model
- `text-embedding-004` - Latest embedding model

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg -n "GEMINI_EMBEDDING" packages/typescript/ai-gemini/src/model-meta.ts -A 10

Repository: TanStack/ai

Length of output: 1017


🏁 Script executed:

rg "text-embedding-004" packages/typescript/ai-gemini/

Repository: TanStack/ai

Length of output: 338


🏁 Script executed:

rg -B 5 -A 5 "text-embedding-004" packages/typescript/ai-gemini/src/adapters/embed.ts

Repository: TanStack/ai

Length of output: 620


🏁 Script executed:

cat -n packages/typescript/ai-gemini/src/model-meta.ts | sed -n '720,800p'

Repository: TanStack/ai

Length of output: 2989


GEMINI_EMBEDDING_MODELS registry is incomplete.

The embedding model text-embedding-004 is documented, used as the default model in the embed adapter, and listed in GeminiEmbeddingModels, but it's missing from GEMINI_EMBEDDING_MODELS in model-meta.ts. Additionally, the model list in embed.ts (['text-embedding-004', 'embedding-001']) diverges from model-meta.ts (['gemini-embedding-001']), indicating an inconsistency between the two model registries. Both lists should be consolidated and kept in sync.
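One way to enforce that, sketched (constant names from this review; the registry's element shape is an assumption):

```ts
// model-meta.ts - the single source of truth:
export const GEMINI_EMBEDDING_MODELS = [
  'gemini-embedding-001',
  'text-embedding-004',
] as const

export type GeminiEmbeddingModel = (typeof GEMINI_EMBEDDING_MODELS)[number]

// embed.ts would then derive its supported-model list (and default) from the
// registry rather than re-listing names:
//   import { GEMINI_EMBEDDING_MODELS } from '../model-meta'
```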

🤖 Prompt for AI Agents
In docs/adapters/gemini.md around lines 68 to 71, the embedding model registry
is inconsistent: the docs and embed adapter reference `text-embedding-004` while
model-meta.ts only lists `gemini-embedding-001`. Update model-meta.ts to include
`text-embedding-004` in the GEMINI_EMBEDDING_MODELS array (and ensure
GeminiEmbeddingModels enum/type includes it), and modify embed.ts to derive its
supported models from that centralized registry (or import the same constants)
so both places reference the single source of truth and remain in sync.

Comment on lines +327 to +329

// Drain any actions that were queued while the stream was in progress
await this.drainPostStreamActions()

⚠️ Potential issue | 🟡 Minor

Consider clearing the queue when stop() is called.

Currently, if a user calls stop() while tool results were queued, those actions will still execute when the finally block drains the queue. This means checkForContinuation could trigger a new stream immediately after the user explicitly stopped the current one.

Consider this addition to the stop() method (line 359):

stop(): void {
  if (this.abortController) {
    this.abortController.abort()
    this.abortController = null
  }
  this.postStreamActions = [] // Clear queued actions
  this.setIsLoading(false)
  this.events.stopped()
}

Similarly, consider clearing the queue in clear() (line 371) to prevent stale actions from executing after messages are cleared.

🤖 Prompt for AI Agents
In packages/typescript/ai-client/src/chat-client.ts around lines 327-329 (and
specifically in the stop() implementation at ~line 359 and clear() at ~line
371), queued postStreamActions are not cleared when a stream is aborted or
messages are cleared, allowing those actions (and checkForContinuation) to run
after the user stopped the stream; update stop() to reset this.postStreamActions
= [] immediately after aborting the controller (and before firing
stopped/setIsLoading), and likewise clear this.postStreamActions = [] in
clear(), ensuring no stale queued actions execute after stop() or clear().

Comment on lines +462 to +467
private async drainPostStreamActions(): Promise<void> {
  while (this.postStreamActions.length > 0) {
    const action = this.postStreamActions.shift()!
    await action()
  }
}

⚠️ Potential issue | 🟠 Major

Add error handling in the drain loop.

If a queued action throws an error, the exception will propagate and prevent subsequent actions from executing. This could leave the system in an inconsistent state where some continuations run but others don't.

Apply this diff to handle errors gracefully:

 private async drainPostStreamActions(): Promise<void> {
   while (this.postStreamActions.length > 0) {
     const action = this.postStreamActions.shift()!
-    await action()
+    try {
+      await action()
+    } catch (error) {
+      // Log but don't block remaining actions
+      console.error('Post-stream action failed:', error)
+      this.setError(error instanceof Error ? error : new Error(String(error)))
+    }
   }
 }
📝 Committable suggestion


Suggested change
private async drainPostStreamActions(): Promise<void> {
  while (this.postStreamActions.length > 0) {
    const action = this.postStreamActions.shift()!
    await action()
  }
}

private async drainPostStreamActions(): Promise<void> {
  while (this.postStreamActions.length > 0) {
    const action = this.postStreamActions.shift()!
    try {
      await action()
    } catch (error) {
      console.error('Post-stream action failed:', error)
    }
  }
}
🤖 Prompt for AI Agents
In packages/typescript/ai-client/src/chat-client.ts around lines 462 to 467, the
drainPostStreamActions loop currently awaits each queued action directly which
lets an exception stop the loop and skip remaining actions; wrap the await
action() call in a try/catch so errors from a single action are caught, logged
(use this.logger.error if available, otherwise console.error) and the loop
continues to the next action; optionally collect failures into an array if
callers need a summary, but do not let one action throw and abort the rest.

Comment on lines +78 to +132
    // Use Gemini's multimodal content generation with audio output
    // Note: This requires the model to support audio output
    const voiceConfig = providerOptions?.voiceConfig || {
      prebuiltVoiceConfig: {
        voiceName: 'Kore', // Default Gemini voice
      },
    }

    const response = await this.client.models.generateContent({
      model,
      contents: [
        {
          role: 'user',
          parts: [{ text: `Please speak the following text: ${text}` }],
        },
      ],
      config: {
        // Configure for audio output
        responseModalities: ['AUDIO'],
        speechConfig: {
          voiceConfig,
        },
      },
    })

    // Extract audio data from response
    const candidate = response.candidates?.[0]
    const parts = candidate?.content?.parts

    if (!parts || parts.length === 0) {
      throw new Error('No audio output received from Gemini TTS')
    }

    // Look for inline data (audio)
    const audioPart = parts.find((part: any) =>
      part.inlineData?.mimeType?.startsWith('audio/'),
    )

    if (!audioPart || !('inlineData' in audioPart)) {
      throw new Error('No audio data in Gemini TTS response')
    }

    const inlineData = (audioPart as any).inlineData
    const audioBase64 = inlineData.data
    const mimeType = inlineData.mimeType || 'audio/wav'
    const format = mimeType.split('/')[1] || 'wav'

    return {
      id: generateId(this.name),
      model,
      audio: audioBase64,
      format,
      contentType: mimeType,
    }
  }

⚠️ Potential issue | 🟡 Minor

Default voice can be skipped when voiceConfig is provided without voiceName.
Right now voiceName: 'Kore' is only set when providerOptions?.voiceConfig is falsy. If callers pass { voiceConfig: { prebuiltVoiceConfig: {} } }, voiceName stays undefined.

Possible adjustment:

-    const voiceConfig = providerOptions?.voiceConfig || {
-      prebuiltVoiceConfig: {
-        voiceName: 'Kore', // Default Gemini voice
-      },
-    }
+    const voiceConfig = {
+      ...providerOptions?.voiceConfig,
+      prebuiltVoiceConfig: {
+        ...providerOptions?.voiceConfig?.prebuiltVoiceConfig,
+        voiceName:
+          providerOptions?.voiceConfig?.prebuiltVoiceConfig?.voiceName ?? 'Kore',
+      },
+    }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/typescript/ai-gemini/src/adapters/tts.ts around lines 78 to 132, the
default voiceName 'Kore' is only applied when providerOptions?.voiceConfig is
falsy, so if callers pass providerOptions.voiceConfig.prebuiltVoiceConfig but
omit voiceName it remains undefined; update the voiceConfig construction to
merge providerOptions.voiceConfig with a default prebuiltVoiceConfig.voiceName =
'Kore' (i.e., if prebuiltVoiceConfig exists but has no voiceName, set it to
'Kore'), ensuring you preserve any other provided fields.

Comment on lines +52 to +110
  async generateSpeech(
    options: TTSOptions<OpenAITTSProviderOptions>,
  ): Promise<TTSResult> {
    const { model, text, voice, format, speed, providerOptions } = options

    // Validate inputs using existing validators
    const audioOptions = {
      input: text,
      model,
      voice: voice as OpenAITTSVoice,
      speed,
      response_format: format as OpenAITTSFormat,
      ...providerOptions,
    }

    validateAudioInput(audioOptions)
    validateSpeed(audioOptions)
    validateInstructions(audioOptions)

    // Build request
    const request: OpenAI_SDK.Audio.SpeechCreateParams = {
      model,
      input: text,
      voice: voice || 'alloy',
      response_format: format,
      speed,
      ...providerOptions,
    }

    // Call OpenAI API
    const response = await this.client.audio.speech.create(request)

    // Convert response to base64
    const arrayBuffer = await response.arrayBuffer()
    const base64 = Buffer.from(arrayBuffer).toString('base64')

    const outputFormat = format || 'mp3'
    const contentType = this.getContentType(outputFormat)

    return {
      id: generateId(this.name),
      model,
      audio: base64,
      format: outputFormat,
      contentType,
    }
  }

  private getContentType(format: string): string {
    const contentTypes: Record<string, string> = {
      mp3: 'audio/mpeg',
      opus: 'audio/opus',
      aac: 'audio/aac',
      flac: 'audio/flac',
      wav: 'audio/wav',
      pcm: 'audio/pcm',
    }
    return contentTypes[format] || 'audio/mpeg'
  }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check package.json to see if this is an isomorphic package
cat packages/typescript/ai-openai/package.json | head -50

Repository: TanStack/ai

Length of output: 1163


🏁 Script executed:

# Search for existing isomorphic base64 utilities in the repo
rg "arrayBuffer.*[Bb]ase64|[Bb]ase64.*arrayBuffer" --type ts --type tsx

Repository: TanStack/ai

Length of output: 82


🏁 Script executed:

# Check if there are other Buffer usages in adapters that might indicate the pattern already established
rg "Buffer\.from" --type ts --type tsx

Repository: TanStack/ai

Length of output: 82


🏁 Script executed:

# Check for any isomorphic utility files or helpers
fd -e ts -e tsx | xargs grep -l "typeof Buffer\|btoa\|atob" | head -20

Repository: TanStack/ai

Length of output: 181


🏁 Script executed:

# Look at the actual file to see context
cat -n packages/typescript/ai-openai/src/adapters/tts.ts | sed -n '45,120p'

Repository: TanStack/ai

Length of output: 2486


🏁 Script executed:

# Check transcription adapter for Buffer usage
cat -n packages/typescript/ai-openai/src/adapters/transcription.ts | grep -A 3 -B 3 "Buffer"

Repository: TanStack/ai

Length of output: 509


🏁 Script executed:

# Look for utilities or helpers in the core @tanstack/ai package
find packages/typescript/ai/src -type f -name "*util*" -o -name "*helper*" | head -20

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Search for any existing base64 or arrayBuffer handling in core package
rg "base64|arrayBuffer|Buffer" packages/typescript/ai/src --type ts -A 2 -B 2 | head -50

Repository: TanStack/ai

Length of output: 3500


🏁 Script executed:

# Check if there's a shared utilities folder or isomorphic utils
fd -e ts "util|helper|iso" packages/typescript/ai/src | head -20

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check the build/vite config to see if there are any isomorphic/browser targets
cat packages/typescript/ai-openai/vite.config.ts 2>/dev/null || cat packages/typescript/ai-openai/vite.config.js 2>/dev/null || echo "No vite config found"

Repository: TanStack/ai

Length of output: 827


🏁 Script executed:

# Check testing/panel useTTS hook
cat -n testing/panel/src/hooks/useTTS.ts

Repository: TanStack/ai

Length of output: 2571


🏁 Script executed:

# Check the tts route component
cat -n testing/panel/src/routes/tts.tsx | head -100

Repository: TanStack/ai

Length of output: 3505


Use an isomorphic base64 conversion instead of Buffer.from().

The adapter calls Buffer.from(arrayBuffer).toString('base64') at line 86, which will fail in browser environments. The testing panel's useTTS hook (line 39) expects base64-encoded audio from the TTS API endpoint, and while the current code runs server-side, this pattern prevents future isomorphic usage or client-side execution (e.g., in RSC or edge runtimes). Replace with an isomorphic helper:

-    const arrayBuffer = await response.arrayBuffer()
-    const base64 = Buffer.from(arrayBuffer).toString('base64')
+    const arrayBuffer = await response.arrayBuffer()
+    const base64 = arrayBufferToBase64(arrayBuffer)

Add the helper function:

+function arrayBufferToBase64(arrayBuffer: ArrayBuffer): string {
+  if (typeof Buffer !== 'undefined') {
+    return Buffer.from(arrayBuffer).toString('base64')
+  }
+  let binary = ''
+  const bytes = new Uint8Array(arrayBuffer)
+  for (let i = 0; i < bytes.length; i++) binary += String.fromCharCode(bytes[i]!)
+  return btoa(binary)
+}
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/tts.ts around lines 52 to 110, the
code uses Buffer.from(arrayBuffer).toString('base64') which breaks in
browser/edge runtimes; replace that call with an isomorphic helper that converts
an ArrayBuffer/Uint8Array to base64 (e.g., a small function that checks for
globalThis.btoa and TextDecoder/Blob usage or falls back to Buffer in Node), add
the helper in this file (or import from a shared utils) and call it like const
base64 = arrayBufferToBase64(arrayBuffer); update the generateSpeech return to
use the helper and ensure typing accepts ArrayBuffer | Uint8Array for robust
conversion.

Comment on lines +213 to +214
const baseUrl = this.config.baseUrl || 'https://api.openai.com/v1'
const apiKey = this.config.apiKey

⚠️ Potential issue | 🟡 Minor

Verify config property name: baseUrl vs baseURL.

Line 213 accesses this.config.baseUrl but OpenAI SDK configuration typically uses baseURL (uppercase URL). This could result in the fallback URL always defaulting to 'https://api.openai.com/v1' even when a custom base URL is configured.

#!/bin/bash
# Check the OpenAIClientConfig interface for the correct property name
rg -n "baseUrl|baseURL" packages/typescript/ai-openai/src/utils.ts
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 213 to 214,
the code reads this.config.baseUrl which likely mismatches the
OpenAIClientConfig property name (baseURL); update the access to use
this.config.baseURL (or check the config interface and normalize both names) so
a provided custom base URL is respected instead of always falling back to the
hardcoded 'https://api.openai.com/v1'; ensure the change aligns with the
OpenAIClientConfig definition in utils.ts and add a safe fallback if neither
property is present.

Comment on lines +241 to +252
        // The response is the raw video file - convert to base64 data URL
        const videoBlob = await contentResponse.blob()
        const buffer = await videoBlob.arrayBuffer()
        const base64 = Buffer.from(buffer).toString('base64')
        const mimeType =
          contentResponse.headers.get('content-type') || 'video/mp4'

        return {
          jobId,
          url: `data:${mimeType};base64,${base64}`,
          expiresAt: undefined, // Data URLs don't expire
        }

⚠️ Potential issue | 🟠 Major

Buffer is not available in browser environments.

Buffer.from() is a Node.js API and will throw ReferenceError: Buffer is not defined in browser environments. For isomorphic code, consider using a cross-platform approach.

-        const base64 = Buffer.from(buffer).toString('base64')
+        // Cross-platform base64 encoding (byte-wise join avoids spread
+        // call-stack limits on large video buffers)
+        const base64 = typeof Buffer !== 'undefined'
+          ? Buffer.from(buffer).toString('base64')
+          : btoa(Array.from(new Uint8Array(buffer), (b) => String.fromCharCode(b)).join(''))

Alternatively, consider using a utility like uint8array-extras or checking if video URL retrieval should be server-only.

📝 Committable suggestion


Suggested change
        // The response is the raw video file - convert to base64 data URL
        const videoBlob = await contentResponse.blob()
        const buffer = await videoBlob.arrayBuffer()
        const base64 = Buffer.from(buffer).toString('base64')
        const mimeType =
          contentResponse.headers.get('content-type') || 'video/mp4'

        return {
          jobId,
          url: `data:${mimeType};base64,${base64}`,
          expiresAt: undefined, // Data URLs don't expire
        }

        // The response is the raw video file - convert to base64 data URL
        const videoBlob = await contentResponse.blob()
        const buffer = await videoBlob.arrayBuffer()
        // Cross-platform base64 encoding (byte-wise join avoids spread
        // call-stack limits on large video buffers)
        const base64 = typeof Buffer !== 'undefined'
          ? Buffer.from(buffer).toString('base64')
          : btoa(Array.from(new Uint8Array(buffer), (b) => String.fromCharCode(b)).join(''))
        const mimeType =
          contentResponse.headers.get('content-type') || 'video/mp4'

        return {
          jobId,
          url: `data:${mimeType};base64,${base64}`,
          expiresAt: undefined, // Data URLs don't expire
        }
🤖 Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 241 to 252,
the code uses Buffer.from(...) which breaks in browser environments; replace
that Node-only call with an isomorphic conversion: after getting the
ArrayBuffer, branch on whether globalThis.Buffer exists and use
Buffer.from(buffer).toString('base64') for Node, otherwise convert the
Uint8Array to a binary string (e.g. by iterating or chunking to avoid stack
issues) and call btoa(binaryString) to produce the base64 string (or use
FileReader.readAsDataURL to get a data URL in browsers), keep the existing
mimeType handling and return shape, and ensure no Buffer references remain when
running in the browser.

Comment on lines +52 to +60
/** The audio data to transcribe - can be base64 string, File, Blob, or Buffer */
audio: string | File | Blob | ArrayBuffer
/** The language of the audio in ISO-639-1 format (e.g., 'en') */
language?: string
/** An optional prompt to guide the transcription */
prompt?: string
/** The format of the transcription output */
responseFormat?: 'json' | 'text' | 'srt' | 'verbose_json' | 'vtt'
/** Provider-specific options for transcription */

⚠️ Potential issue | 🟡 Minor

Fix JSDoc/type mismatch for audio (“Buffer” isn’t in the type).

Either add Buffer to the actual audio type (if supported), or remove “Buffer” from JSDoc/examples to match string | File | Blob | ArrayBuffer.

Also applies to: 80-90
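If Buffer support is the intent, the widened member would look like this (sketch; `Buffer` requires Node typings, and dropping "Buffer" from the JSDoc is the other valid fix):

```ts
interface TranscriptionOptions {
  /** The audio data to transcribe - base64 string, File, Blob, ArrayBuffer, or Buffer */
  audio: string | File | Blob | ArrayBuffer | Buffer
}
```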

🤖 Prompt for AI Agents
packages/typescript/ai/src/activities/transcription/index.ts around lines 52-60
(and similarly lines 80-90): the JSDoc mentions "Buffer" but the TypeScript type
is string | File | Blob | ArrayBuffer, causing a mismatch; update the TypeScript
signature to include Node's Buffer (e.g., string | File | Blob | ArrayBuffer |
Buffer) if Buffer is intended to be supported, or remove "Buffer" from the
JSDoc/examples to match the existing type — ensure both JSDoc and the TS type
are consistent across the file (and update any related overloads or tests
accordingly).

Comment on lines +47 to +52
export type GenerateAdapter =
  | TextAdapter<ReadonlyArray<string>, object, any, any, any>
  | EmbeddingAdapter<ReadonlyArray<string>, object>
  | SummarizeAdapter<ReadonlyArray<string>, object>
  | ImageAdapter<ReadonlyArray<string>, object, any, any>
  | VideoAdapter<ReadonlyArray<string>, object>

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, find all adapter definitions in the activities directory
find packages/typescript/ai/src/activities -name "adapter.ts" -o -name "adapter.d.ts" | head -20

Repository: TanStack/ai

Length of output: 452


🏁 Script executed:

# Check the structure of activities directory
fd -t d "activities" packages/typescript/ai/src/ --exec find {} -type f -name "*.ts" \;

Repository: TanStack/ai

Length of output: 1583


🏁 Script executed:

# Search for TTS and Transcription adapter definitions
rg "export.*(?:TTS|Transcription)Adapter" packages/typescript/ai/src/ -A 2

Repository: TanStack/ai

Length of output: 2254


🏁 Script executed:

# Check the current GenerateAdapter definition and imports
cat -n packages/typescript/ai/src/ai.ts | head -60

Repository: TanStack/ai

Length of output: 2173


🏁 Script executed:

# Look for AnyAIAdapter definition to see what it includes
rg "type AnyAIAdapter|export type.*AnyAIAdapter" packages/typescript/ai/src/ -A 5

Repository: TanStack/ai

Length of output: 612


🏁 Script executed:

# Get full AnyAIAdapter definition
rg "export type AnyAIAdapter" packages/typescript/ai/src/activities/index.ts -A 20

Repository: TanStack/ai

Length of output: 988


🏁 Script executed:

# Check the activityMap in activities/index.ts
rg "const activityMap|export.*activityMap" packages/typescript/ai/src/activities/index.ts -A 10

Repository: TanStack/ai

Length of output: 422


🏁 Script executed:

# Check what activities are defined and exported
cat packages/typescript/ai/src/activities/index.ts | head -100

Repository: TanStack/ai

Length of output: 2948


🏁 Script executed:

# Check the ai() function signature
rg "export.*function ai|export const ai" packages/typescript/ai/src/ai.ts -A 15

Repository: TanStack/ai

Length of output: 833


🏁 Script executed:

# Check what imports are in ai.ts from activities
rg "^import.*from.*activities" packages/typescript/ai/src/ai.ts

Repository: TanStack/ai

Length of output: 426


Add missing TTS and Transcription adapters to GenerateAdapter union type.

The GenerateAdapter union (lines 47-52) is missing TTSAdapter and TranscriptionAdapter, even though the ai() function is typed to accept AnyAIAdapter which includes these adapters, and the activityMap supports routing to both ttsActivity and transcriptionActivity. This creates a type consistency gap where the public GenerateAdapter export doesn't reflect all adapter types the function actually handles.

+import type { TTSAdapter } from './activities/tts/adapter'
+import type { TranscriptionAdapter } from './activities/transcription/adapter'

 /** Union of all adapter types that can be passed to ai() */
 export type GenerateAdapter =
   | TextAdapter<ReadonlyArray<string>, object, any, any, any>
   | EmbeddingAdapter<ReadonlyArray<string>, object>
   | SummarizeAdapter<ReadonlyArray<string>, object>
   | ImageAdapter<ReadonlyArray<string>, object, any, any>
   | VideoAdapter<ReadonlyArray<string>, object>
+  | TTSAdapter<ReadonlyArray<string>, object>
+  | TranscriptionAdapter<ReadonlyArray<string>, object>
📝 Committable suggestion


Suggested change
export type GenerateAdapter =
  | TextAdapter<ReadonlyArray<string>, object, any, any, any>
  | EmbeddingAdapter<ReadonlyArray<string>, object>
  | SummarizeAdapter<ReadonlyArray<string>, object>
  | ImageAdapter<ReadonlyArray<string>, object, any, any>
  | VideoAdapter<ReadonlyArray<string>, object>

import type { TextAdapter } from './activities/text/adapter'
import type { EmbeddingAdapter } from './activities/embedding/adapter'
import type { SummarizeAdapter } from './activities/summarize/adapter'
import type { ImageAdapter } from './activities/image/adapter'
import type { VideoAdapter } from './activities/video/adapter'
import type { TTSAdapter } from './activities/tts/adapter'
import type { TranscriptionAdapter } from './activities/transcription/adapter'

/** Union of all adapter types that can be passed to ai() */
export type GenerateAdapter =
  | TextAdapter<ReadonlyArray<string>, object, any, any, any>
  | EmbeddingAdapter<ReadonlyArray<string>, object>
  | SummarizeAdapter<ReadonlyArray<string>, object>
  | ImageAdapter<ReadonlyArray<string>, object, any, any>
  | VideoAdapter<ReadonlyArray<string>, object>
  | TTSAdapter<ReadonlyArray<string>, object>
  | TranscriptionAdapter<ReadonlyArray<string>, object>
🤖 Prompt for AI Agents
In packages/typescript/ai/src/ai.ts around lines 47 to 52, the GenerateAdapter
union is missing TTSAdapter and TranscriptionAdapter which causes a type gap;
update the exported GenerateAdapter union to include the TTSAdapter and
TranscriptionAdapter variants with the correct generic parameter shapes used
elsewhere (match the existing TTSAdapter<TInput, TOptions, TStream?> and
TranscriptionAdapter<TInput, TOptions> signatures in the codebase), and add or
adjust imports if necessary so those adapter types are referenced; ensure the
final union mirrors AnyAIAdapter’s supported adapter kinds so ai() consumers see
consistent types.
