[Bugfix] Drop empty tool_calls lists to keep assistant replies in chat template #30648
Conversation
…plates Signed-off-by: Seokhyun An <iamseokhyun@gmail.com>
Code Review
This pull request introduces a bugfix to correctly handle empty tool_calls lists in assistant messages. The change in _postprocess_messages prevents chat templates from misinterpreting these messages as tool calls, which previously caused the assistant's text content to be dropped. The implementation is correct and robust, safely removing the empty tool_calls list while leaving non-empty ones unaffected. The provided test plan thoroughly demonstrates the issue and validates the fix. The change is well-targeted and improves the reliability of chat processing.
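A minimal sketch of the behavior the review describes (using a hypothetical standalone helper for illustration; the actual change lives in `_postprocess_messages` in `vllm/entrypoints/chat_utils.py`):

```python
def drop_empty_tool_calls(messages):
    """Remove `tool_calls` keys whose value is an empty list from
    assistant messages; non-empty tool_calls are left untouched."""
    for msg in messages:
        if msg.get("role") == "assistant" and msg.get("tool_calls") == []:
            del msg["tool_calls"]
    return messages

history = [
    # Serialized with exclude_none=True, a plain text reply can still
    # carry tool_calls=[] (an empty list is not None, so it survives).
    {"role": "assistant", "content": "Hi!", "tool_calls": []},
    {"role": "assistant", "content": "", "tool_calls": [{"id": "call_1"}]},
]
drop_empty_tool_calls(history)
print("tool_calls" in history[0])  # False: empty list dropped
print("tool_calls" in history[1])  # True: real tool call kept
```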
chaunceyjiang
left a comment
Thanks~
…t template (vllm-project#30648) Signed-off-by: Seokhyun An <iamseokhyun@gmail.com> Signed-off-by: Joachim Studnia <joachim@mistral.ai>
Purpose
Summary
Drop empty assistant `tool_calls` lists.

Problem
Using gpt-oss via the vllm serve Chat API, `model_response.choices[0].message.model_dump(exclude_none=True)` includes `tool_calls=[]`.

Fix

Update `_postprocess_messages` in `vllm/entrypoints/chat_utils.py` to remove empty assistant `tool_calls` before argument normalization.

Test Plan
Test code
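The actual test code is not reproduced here; as an illustration only, the failure mode can be pictured with a hypothetical toy template that, like the affected chat templates, treats any message carrying a `tool_calls` key as a tool-call turn:

```python
def render(messages):
    # Toy stand-in for a chat template: any message that carries a
    # tool_calls key is rendered as a tool-call turn and its text
    # content is dropped, mirroring the reported bug.
    parts = []
    for m in messages:
        if "tool_calls" in m:
            parts.append(f"<tool_call_turn n={len(m['tool_calls'])}>")
        else:
            parts.append(m.get("content", ""))
    return "\n".join(parts)

def drop_empty_tool_calls(messages):
    # Same idea as the fix: strip tool_calls when it is an empty list.
    return [
        {k: v for k, v in m.items() if not (k == "tool_calls" and v == [])}
        for m in messages
    ]

history = [{"role": "assistant", "content": "Hello!", "tool_calls": []}]
print(render(history))                         # before fix: text is lost
print(render(drop_empty_tool_calls(history)))  # after fix: "Hello!"
```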
vllm serve command
Test Result
Before fix
After fix
Essential Elements of an Effective PR Description Checklist
`supported_models.md` and `examples` for a new model.