
Conversation


@harlley harlley commented Dec 13, 2025

🎯 Changes

The conversation becomes unresponsive after a client-side tool execution because the LLM never receives the tool result.

  • Updated StreamProcessor.areAllToolsComplete to check for tool-result parts (server tools) in addition to output fields (client tools).
  • Updated ChatClient.streamResponse to check if the stream ended with a tool-result part. If so, it automatically triggers continueFlow() to send the result back to the LLM. This handles cases where addToolResult was called while the stream was still active (and thus blocked from continuing immediately).
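The completion check described above can be sketched as follows. The part shapes and field names here are simplified assumptions for illustration, not the actual @tanstack/ai types: a tool call is treated as complete when it has a client-side output, its approval has been responded to, or a server-side tool-result part carries the same toolCallId.

```typescript
// Simplified part shapes for illustration only (assumed, not the real types).
type Part =
  | { type: 'tool-call'; toolCallId: string; output?: unknown; approval?: { state: string } }
  | { type: 'tool-result'; toolCallId: string }
  | { type: 'text'; content: string }

// A tool call is complete when it has a client-side output (and no pending
// approval), when its approval was responded to, or when a server-side
// tool-result part carries the same toolCallId.
function areAllToolsComplete(parts: Array<Part>): boolean {
  const completedIds = new Set(
    parts
      .filter((p): p is Extract<Part, { type: 'tool-result' }> => p.type === 'tool-result')
      .map((p) => p.toolCallId),
  )
  return parts
    .filter((p): p is Extract<Part, { type: 'tool-call' }> => p.type === 'tool-call')
    .every(
      (p) =>
        (p.output !== undefined && !p.approval) ||
        p.approval?.state === 'approval-responded' ||
        completedIds.has(p.toolCallId),
    )
}
```

Note that a call with a pending approval and no matching tool-result stays incomplete in this sketch, which is what keeps the client from continuing prematurely.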

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • Bug Fixes

    • Better detection of tool execution completion so server-side tool results mark calls as finished, reducing stalled actions.
    • After a successful stream, streaming responses now automatically continue the conversation when the last streamed item is a tool result and auto-send is enabled; continuation errors are logged but do not interrupt the flow.
  • Chores

    • Patch version metadata updated for two packages.



coderabbitai bot commented Dec 13, 2025

Walkthrough

Streaming now tracks successful completion; after a successful stream, if the final assistant part is a tool-result and auto-send is enabled, the client attempts to continue the conversation. The stream processor also treats server-side tool-result parts as signals that corresponding tool calls are complete.
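A minimal sketch of that finalization flow follows; the ClientLike interface and function names are assumptions for illustration, not the real ChatClient API.

```typescript
// Sketch of the post-stream continuation; ClientLike and finalizeStream are
// illustrative names, not the actual ChatClient API.
interface StreamPart {
  type: string
}

interface ClientLike {
  lastAssistantParts: () => Array<StreamPart>
  shouldAutoSend: () => boolean
  continueFlow: () => Promise<void>
}

async function finalizeStream(
  client: ClientLike,
  streamCompletedSuccessfully: boolean,
): Promise<void> {
  // Only continue after a clean finish, never after an abort or error.
  if (!streamCompletedSuccessfully) return
  const parts = client.lastAssistantParts()
  const lastPart = parts[parts.length - 1]
  if (lastPart?.type === 'tool-result' && client.shouldAutoSend()) {
    try {
      await client.continueFlow()
    } catch (err) {
      // Continuation failures are logged but never rethrown from the finalizer.
      console.error('auto-continue failed:', err)
    }
  }
}
```

Gating on the success flag is what distinguishes a completed stream from an aborted one, so a user cancellation does not trigger a surprise continuation.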

Changes

  • Chat client: stream finalization — packages/typescript/ai-client/src/chat-client.ts
    Add a local streamCompletedSuccessfully flag in streamResponse. In the finally block, if the flag is true, the last assistant part is a tool-result, and auto-send is enabled, call continueFlow() and log any errors.
  • Stream processor: tool completion detection — packages/typescript/ai/src/stream/processor.ts
    Update areAllToolsComplete to inspect tool-result parts in the last assistant message, collect toolCallIds from those parts, and consider tool calls complete if their IDs are present in that set (in addition to the existing completion checks).
  • Versioning metadata — .changeset/fix-silent-chat-continuation.md
    Add a changeset marking patch bumps for @tanstack/ai and @tanstack/ai-client with the message: "fix: Continue conversation after client tool execution."

Sequence Diagram

sequenceDiagram
    actor User
    participant ChatClient
    participant StreamResponse
    participant ContinueFlow
    participant Processor

    User->>ChatClient: send message / trigger response
    ChatClient->>StreamResponse: start streaming response
    StreamResponse-->>ChatClient: emit chunks / final message (may include tool-result)

    Note over StreamResponse: finally block runs after stream ends
    StreamResponse->>StreamResponse: set/check `streamCompletedSuccessfully` and inspect last message parts

    alt last part is tool-result and auto-send enabled
        StreamResponse->>ContinueFlow: call continueFlow()
        ContinueFlow->>Processor: evaluate areAllToolsComplete()
        Processor->>Processor: collect `tool-result` parts, build completed `toolCallId` set
        Processor-->>ContinueFlow: return completion status
        ContinueFlow-->>ChatClient: initiate continuation
        ChatClient->>User: send continued request/response
    else
        StreamResponse-->>ChatClient: end without continuation
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–25 minutes

  • Inspect stream finalizer for race conditions or duplicate continuation triggers.
  • Verify logic that selects the "last" message/part and handling when toolCallId or part arrays are missing.
  • Confirm error handling/logging around continueFlow() is appropriate.

Poem

🐰 I hopped through streams with watchful eye,
Found tool-results gleaming by and by,
Kept a flag to know the end was true,
Whispered "continue" — the chat carried through,
Nibbled a carrot, then bounced on anew. 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title check — ✅ Passed: The title clearly summarizes the main issue being fixed: an unresponsive chat after client tool execution, which aligns with the primary changes in the changeset.
  • Description check — ✅ Passed: The description includes all required template sections with substantive content: detailed changes explaining the problem and solution, completed checklist items, and confirmed changeset generation.
  • Docstring coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the check.
📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1392d00 and 891fe8a.

📒 Files selected for processing (1)
  • packages/typescript/ai-client/src/chat-client.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai-client/src/chat-client.ts



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
packages/typescript/ai/src/stream/processor.ts (1)

358-364: Improve type safety for tool-result part filtering.

The type assertion on line 363 bypasses TypeScript's type checking. If the tool-result part structure changes or toolCallId is undefined, this could cause runtime errors.

Consider using a type predicate or optional chaining:

 // Check for server tool completions via tool-result parts
 const toolResultParts = lastAssistant.parts.filter(
   (p) => p.type === 'tool-result',
 )
 const completedToolCallIds = new Set(
-  toolResultParts.map((p) => (p as { toolCallId: string }).toolCallId),
+  toolResultParts
+    .map((p) => ('toolCallId' in p ? p.toolCallId : null))
+    .filter((id): id is string => id !== null),
 )

Alternatively, if a ToolResultPart type exists, use a proper type predicate:

const toolResultParts = lastAssistant.parts.filter(
  (p): p is ToolResultPart => p.type === 'tool-result',
)
const completedToolCallIds = new Set(
  toolResultParts.map((p) => p.toolCallId),
)
packages/typescript/ai-client/src/chat-client.ts (1)

334-334: Consider explicit error handling for clarity.

While errors from continueFlow() should be caught by the inner streamResponse call's try-catch, adding explicit error handling in the finally block would make the error handling more obvious and prevent potential unhandled promise rejections.

Apply this diff to add explicit error handling:

       if (lastPart?.type === 'tool-result' && this.shouldAutoSend()) {
-        await this.continueFlow()
+        try {
+          await this.continueFlow()
+        } catch (err) {
+          // Error already handled by streamResponse, but log for safety
+          console.error('Unexpected error during auto-continue:', err)
+        }
       }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e011bf8 and 89643f9.

📒 Files selected for processing (2)
  • packages/typescript/ai-client/src/chat-client.ts (2 hunks)
  • packages/typescript/ai/src/stream/processor.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai/src/stream/processor.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai/src/stream/processor.ts
🧠 Learnings (2)
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Applied to files:

  • packages/typescript/ai-client/src/chat-client.ts
  • packages/typescript/ai/src/stream/processor.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution

Applied to files:

  • packages/typescript/ai/src/stream/processor.ts
🔇 Additional comments (5)
packages/typescript/ai/src/stream/processor.ts (2)

370-370: Correctly handles approval flow for client tools.

The added !part.approval check ensures that client tools requiring approval are not considered complete until they reach the 'approval-responded' state. This prevents premature completion detection and aligns with the intended approval flow.


371-371: Enables auto-continuation for server-side tool completions.

This condition correctly detects when a server-side tool has completed by checking for corresponding tool-result parts. Combined with the other conditions, this ensures all tool completion scenarios are covered (client tools with/without approval, and server tools).

packages/typescript/ai-client/src/chat-client.ts (3)

31-41: LGTM! Clean callback management pattern.

The ref-style structure for callbacks enables dynamic updates through updateOptions without requiring ChatClient recreation. This is particularly useful for framework integrations where callbacks might change frequently.


57-67: LGTM! Proper initialization with safe fallbacks.

The callback initialization correctly provides no-op fallbacks, ensuring all callbacks are always safe to invoke throughout the class lifecycle.


327-336: Tool-result detection and auto-continue flow is correct.

The implementation properly handles the case where addToolResult is called during streaming. When each tool result arrives, it's appended to the last message's parts array via addToolResult(). Since shouldAutoSend() verifies that all tool calls have corresponding results (via areAllToolsComplete()), checking only the last part type in combination with this completeness check is sufficient—if the last part is tool-result and all tools are complete, the conversation can safely continue.

@harlley changed the title from "fix: Continue conversation after client tool execution" to "fix: unresponsive chat after client tool execution" on Dec 13, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 081b263 and 8e8283e.

📒 Files selected for processing (2)
  • packages/typescript/ai-client/src/chat-client.ts (2 hunks)
  • packages/typescript/ai/src/stream/processor.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai/src/stream/processor.ts
🔇 Additional comments (2)
packages/typescript/ai-client/src/chat-client.ts (2)

31-41: LGTM! Clean callback centralization.

The ref-based callback structure allows for dynamic updates without recreating the processor, which aligns well with the updateOptions method at lines 523-534.


57-67: LGTM! Proper initialization with safe defaults.

All callbacks are properly initialized with no-op fallbacks, preventing potential undefined errors when callbacks are invoked.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8e8283e and e0142c2.

📒 Files selected for processing (1)
  • packages/typescript/ai-client/src/chat-client.ts (3 hunks)
🔇 Additional comments (2)
packages/typescript/ai-client/src/chat-client.ts (2)

293-293: LGTM: Previous issue resolved.

The streamCompletedSuccessfully flag correctly addresses the previous review concern, ensuring auto-continue only runs after successful stream completion and not after abort or error.

Also applies to: 316-316


330-342: The processor implementation correctly handles all three scenarios. StreamProcessor.areAllToolsComplete() (lines 345-374) properly validates tool-result parts: it filters tool-result parts from the last assistant message (lines 359-362), collects completed tool IDs, and ensures all tool-call parts are in terminal state—either approval-responded, having output without approval, or matching a completed tool-result ID (lines 368-373). Test coverage at lines 717-724 confirms client-side tool results work correctly, and edge cases including mixed scenarios are tested. The integration with ChatClient.shouldAutoSend() at line 471 is properly aligned with the processor's completion logic.
