fix: unresponsive chat after client tool execution #147
Conversation
Walkthrough

Streaming now tracks successful completion; after a successful stream, if the final assistant part is a `tool-result` and auto-send is enabled, the client automatically continues the flow so the tool result reaches the LLM.
Sequence Diagram

```mermaid
sequenceDiagram
actor User
participant ChatClient
participant StreamResponse
participant ContinueFlow
participant Processor
User->>ChatClient: send message / trigger response
ChatClient->>StreamResponse: start streaming response
StreamResponse-->>ChatClient: emit chunks / final message (may include tool-result)
Note over StreamResponse: finally block runs after stream ends
StreamResponse->>StreamResponse: set/check `streamCompletedSuccessfully` and inspect last message parts
alt last part is tool-result and auto-send enabled
StreamResponse->>ContinueFlow: call continueFlow()
ContinueFlow->>Processor: evaluate areAllToolsComplete()
Processor->>Processor: collect `tool-result` parts, build completed `toolCallId` set
Processor-->>ContinueFlow: return completion status
ContinueFlow-->>ChatClient: initiate continuation
ChatClient->>User: send continued request/response
else
StreamResponse-->>ChatClient: end without continuation
end
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–25 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/typescript/ai/src/stream/processor.ts (1)
358-364: Improve type safety for tool-result part filtering.

The type assertion on line 363 bypasses TypeScript's type checking. If the `tool-result` part structure changes or `toolCallId` is undefined, this could cause runtime errors. Consider using a type predicate or optional chaining:
```diff
  // Check for server tool completions via tool-result parts
  const toolResultParts = lastAssistant.parts.filter(
    (p) => p.type === 'tool-result',
  )
  const completedToolCallIds = new Set(
-   toolResultParts.map((p) => (p as { toolCallId: string }).toolCallId),
+   toolResultParts
+     .map((p) => ('toolCallId' in p ? p.toolCallId : null))
+     .filter((id): id is string => id !== null),
  )
```

Alternatively, if a `ToolResultPart` type exists, use a proper type predicate:

```ts
const toolResultParts = lastAssistant.parts.filter(
  (p): p is ToolResultPart => p.type === 'tool-result',
)
const completedToolCallIds = new Set(
  toolResultParts.map((p) => p.toolCallId),
)
```

packages/typescript/ai-client/src/chat-client.ts (1)
334-334: Consider explicit error handling for clarity.

While errors from `continueFlow()` should be caught by the inner `streamResponse` call's try-catch, adding explicit error handling in the finally block would make the error handling more obvious and prevent potential unhandled promise rejections. Apply this diff to add explicit error handling:
```diff
  if (lastPart?.type === 'tool-result' && this.shouldAutoSend()) {
-   await this.continueFlow()
+   try {
+     await this.continueFlow()
+   } catch (err) {
+     // Error already handled by streamResponse, but log for safety
+     console.error('Unexpected error during auto-continue:', err)
+   }
  }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/typescript/ai-client/src/chat-client.ts (2 hunks)
- packages/typescript/ai/src/stream/processor.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the `/adapters` subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference
Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution (see the sketch after this list)
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
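A minimal sketch of the isomorphic tool pattern these guidelines describe. The real `toolDefinition()` API is not shown in this review, so the builder below is a hypothetical stand-in; it only illustrates pairing one Zod schema with `.server()` and `.client()` implementations.

```ts
import { z } from 'zod'

// Hypothetical stand-in for toolDefinition(); not the real TanStack AI API.
function defineTool<S extends z.ZodTypeAny>(name: string, inputSchema: S) {
  type Input = z.infer<S>
  return {
    name,
    // Runtime validation and static type inference share one schema.
    parseInput: (raw: unknown): Input => inputSchema.parse(raw),
    // Pair the same definition with environment-specific implementations:
    server: <R>(run: (args: Input) => Promise<R>) => ({
      name,
      environment: 'server' as const,
      run,
    }),
    client: <R>(run: (args: Input) => Promise<R>) => ({
      name,
      environment: 'client' as const,
      run,
    }),
  }
}

// One definition, two environments:
const getWeather = defineTool('getWeather', z.object({ city: z.string() }))
const onServer = getWeather.server(async ({ city }) => ({ city, tempC: 12 }))
const onClient = getWeather.client(async ({ city }) => ({ city, tempC: 12 }))
```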
Files:
packages/typescript/ai-client/src/chat-client.ts
packages/typescript/ai/src/stream/processor.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
packages/typescript/ai-client/src/chat-client.ts
packages/typescript/ai/src/stream/processor.ts
🧠 Learnings (2)
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Applied to files:
packages/typescript/ai-client/src/chat-client.ts
packages/typescript/ai/src/stream/processor.ts
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Applied to files:
packages/typescript/ai/src/stream/processor.ts
🔇 Additional comments (5)
packages/typescript/ai/src/stream/processor.ts (2)
370-370: Correctly handles approval flow for client tools.

The added `!part.approval` check ensures that client tools requiring approval are not considered complete until they reach the `'approval-responded'` state. This prevents premature completion detection and aligns with the intended approval flow.
371-371: Enables auto-continuation for server-side tool completions.

This condition correctly detects when a server-side tool has completed by checking for corresponding `tool-result` parts. Combined with the other conditions, this ensures all tool completion scenarios are covered (client tools with/without approval, and server tools).

packages/typescript/ai-client/src/chat-client.ts (3)
31-41: LGTM! Clean callback management pattern.

The ref-style structure for callbacks enables dynamic updates through `updateOptions` without requiring ChatClient recreation. This is particularly useful for framework integrations where callbacks might change frequently.
57-67: LGTM! Proper initialization with safe fallbacks.

The callback initialization correctly provides no-op fallbacks, ensuring all callbacks are always safe to invoke throughout the class lifecycle.
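A minimal sketch of the pattern these two comments describe; the callback names (`onChunk`, `onFinish`) are assumptions for illustration, since the real option shape is not shown in this review.

```ts
// Hypothetical callback shape; names are assumptions for illustration.
type ChatCallbacks = {
  onChunk: (chunk: unknown) => void
  onFinish: (message: unknown) => void
}

class CallbacksRef {
  // No-op fallbacks make every callback safe to invoke unconditionally.
  current: ChatCallbacks = {
    onChunk: () => {},
    onFinish: () => {},
  }

  // Swapping the ref contents updates behavior everywhere the ref was
  // captured, without recreating the client or its stream processor.
  update(options: Partial<ChatCallbacks>): void {
    this.current = { ...this.current, ...options }
  }
}
```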
327-336: Tool-result detection and auto-continue flow is correct.

The implementation properly handles the case where `addToolResult` is called during streaming. When each tool result arrives, it's appended to the last message's parts array via `addToolResult()`. Since `shouldAutoSend()` verifies that all tool calls have corresponding results (via `areAllToolsComplete()`), checking only the last part type in combination with this completeness check is sufficient: if the last part is `tool-result` and all tools are complete, the conversation can safely continue.
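A hedged sketch of the check this comment describes, as it might sit at the end of `streamResponse`; the member names follow the review text, but the surrounding shape is assumed.

```ts
// Shape assumed for illustration; only the member names come from the review.
interface AutoContinueHost {
  lastPart(): { type: string } | undefined
  shouldAutoSend(): boolean // true when all tool calls have results
  continueFlow(): Promise<void>
}

async function maybeAutoContinue(
  host: AutoContinueHost,
  streamCompletedSuccessfully: boolean,
): Promise<void> {
  // Runs after the stream ends (the review places this in a finally block).
  if (
    streamCompletedSuccessfully && // skip after abort or error
    host.lastPart()?.type === 'tool-result' && // stream ended on a tool result
    host.shouldAutoSend() // completeness gate via areAllToolsComplete()
  ) {
    await host.continueFlow() // send the results back to the LLM
  }
}
```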
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/typescript/ai-client/src/chat-client.ts (2 hunks)
- packages/typescript/ai/src/stream/processor.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/typescript/ai/src/stream/processor.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the `/adapters` subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference
Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
packages/typescript/ai-client/src/chat-client.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
packages/typescript/ai-client/src/chat-client.ts
🧠 Learnings (1)
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Applied to files:
packages/typescript/ai-client/src/chat-client.ts
🔇 Additional comments (2)
packages/typescript/ai-client/src/chat-client.ts (2)
31-41: LGTM! Clean callback centralization.

The ref-based callback structure allows for dynamic updates without recreating the processor, which aligns well with the `updateOptions` method at lines 523-534.
57-67: LGTM! Proper initialization with safe defaults.

All callbacks are properly initialized with no-op fallbacks, preventing potential undefined errors when callbacks are invoked.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
packages/typescript/ai-client/src/chat-client.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the `/adapters` subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference
Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
packages/typescript/ai-client/src/chat-client.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
packages/typescript/ai-client/src/chat-client.ts
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
📚 Learning: 2025-12-13T17:09:09.783Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.783Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Applied to files:
packages/typescript/ai-client/src/chat-client.ts
🔇 Additional comments (2)
packages/typescript/ai-client/src/chat-client.ts (2)
293-293: LGTM: Previous issue resolved.

The `streamCompletedSuccessfully` flag correctly addresses the previous review concern, ensuring auto-continue only runs after successful stream completion and not after abort or error.

Also applies to: 316-316
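A minimal sketch of the flag pattern this comment approves, under assumed surroundings: the stream loop and continuation hook below are illustrative, not the real code.

```ts
// Illustrative surroundings; only streamCompletedSuccessfully is from the PR.
async function runStream(
  stream: AsyncIterable<unknown>,
  processChunk: (chunk: unknown) => void,
  autoContinue: () => Promise<void>,
): Promise<void> {
  let streamCompletedSuccessfully = false
  try {
    for await (const chunk of stream) processChunk(chunk)
    streamCompletedSuccessfully = true // reached only on a clean finish
  } finally {
    // Abort and error paths skip the assignment above, so the finally
    // block can tell a clean completion apart from an interrupted one.
    if (streamCompletedSuccessfully) await autoContinue()
  }
}
```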
330-342: The processor implementation correctly handles all three scenarios.

`StreamProcessor.areAllToolsComplete()` (lines 345-374) properly validates tool-result parts: it filters tool-result parts from the last assistant message (lines 359-362), collects completed tool IDs, and ensures all tool-call parts are in a terminal state, meaning each is approval-responded, has output without approval, or matches a completed tool-result ID (lines 368-373). Test coverage at lines 717-724 confirms client-side tool results work correctly, and edge cases including mixed scenarios are tested. The integration with `ChatClient.shouldAutoSend()` at line 471 is properly aligned with the processor's completion logic.
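A hedged reconstruction of the completion check this comment walks through; the part shapes below are assumptions inferred from the review text, not the actual source.

```ts
// Assumed part shape; field names are inferred from the review comments.
type Part = {
  type: string
  toolCallId?: string
  state?: string
  output?: unknown
  approval?: unknown
}

function areAllToolsComplete(lastAssistantParts: Array<Part>): boolean {
  // Server tools signal completion through tool-result parts:
  // collect their toolCallIds into a completed set.
  const completedToolCallIds = new Set(
    lastAssistantParts
      .filter((p) => p.type === 'tool-result')
      .map((p) => p.toolCallId),
  )
  // Every tool call must have reached a terminal state.
  return lastAssistantParts
    .filter((p) => p.type === 'tool-call')
    .every(
      (p) =>
        p.state === 'approval-responded' || // approval flow finished
        (p.output !== undefined && !p.approval) || // client tool, no approval
        completedToolCallIds.has(p.toolCallId), // matching server tool-result
    )
}
```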
🎯 Changes
The conversation becomes unresponsive after a client-side tool execution because the tool result never makes it back to the LLM.
- Updated `StreamProcessor.areAllToolsComplete` to check for `tool-result` parts (server tools) in addition to `output` fields (client tools).
- Updated `ChatClient.streamResponse` to check if the stream ended with a `tool-result` part. If so, it automatically triggers `continueFlow()` to send the result back to the LLM. This handles cases where `addToolResult` was called while the stream was still active (and thus blocked from continuing immediately). See the sketch below.
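A hypothetical end-to-end sketch of the fixed behavior; the interface below only mirrors the members this PR names (`streamResponse`, `addToolResult`, `continueFlow`) and is not the real ChatClient API.

```ts
// Hypothetical shape, for illustration only.
interface ChatClientLike {
  streamResponse(): Promise<void>
  addToolResult(result: { toolCallId: string; output: unknown }): void
}

async function demo(client: ChatClientLike): Promise<void> {
  const streaming = client.streamResponse()
  // A client tool finishes while the stream is still active, so the result
  // cannot be sent back to the LLM immediately:
  client.addToolResult({ toolCallId: 'call_1', output: { tempC: 12 } })
  await streaming
  // Before this fix the chat stalled here. Now streamResponse's finally
  // block sees the trailing tool-result part and calls continueFlow()
  // itself, so the LLM receives the tool output without manual plumbing.
}
```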
✅ Checklist

- Ran `pnpm run test:pr`.

🚀 Release Impact
Summary by CodeRabbit
Bug Fixes
Chores