
Conversation

roomote bot (Contributor) commented on Jan 29, 2026

This PR attempts to address Issue #11071 by adding diagnostic logging to help users verify GLM model detection.

Problem

Users testing PR #11077 cannot verify whether:

  • Their model ID is being correctly identified as a GLM model
  • The GLM-specific optimizations are being applied
  • They are running the custom build rather than the marketplace version

Solution

Added console logging at key points:

  1. Model ID logging: Logs the model ID being used when LM Studio or OpenAI-compatible providers initialize
  2. GLM detection results: Logs when a GLM model is detected with specific optimization settings
  3. Applied optimization values: Logs the parallel_tool_calls value being sent to the API
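
For illustration, here is a minimal sketch of what these call sites might look like, using the helper names described later in this PR (the exact wording and placement live in the actual diff):

```typescript
// Illustrative sketch only; the real logging lives in lm-studio.ts and
// base-openai-compatible-provider.ts. Helper names come from model-detection.ts.
import { getGlmModelOptions } from "./model-detection";

function logGlmDiagnostics(providerName: string, modelId: string): void {
  // 1. Model ID logging
  console.log(`[${providerName}] Using model ID: "${modelId}"`);

  // 2. GLM detection results (only printed when a GLM model is detected)
  const opts = getGlmModelOptions(modelId);
  if (opts) {
    console.log(`[GLM Detection] ✓ GLM model detected: "${modelId}"`);
    console.log(`[GLM Detection]   - mergeToolResultText: ${opts.mergeToolResultText}`);
    console.log(`[GLM Detection]   - disableParallelToolCalls: ${opts.disableParallelToolCalls}`);
  }

  // 3. Applied optimization value sent to the API
  console.log(`[${providerName}] parallel_tool_calls set to: ${!opts?.disableParallelToolCalls}`);
}
```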

Example Output

When using a GLM model like glm-4.5:

[LM Studio] Using model ID: "glm-4.5"
[GLM Detection] ✓ GLM model detected: "glm-4.5"
[GLM Detection]   - mergeToolResultText: true
[GLM Detection]   - disableParallelToolCalls: true
[LM Studio] parallel_tool_calls set to: false

When using a non-GLM model:

[LM Studio] Using model ID: "llama-3-8b"
[LM Studio] parallel_tool_calls set to: true

Testing

  • ✅ All existing tests pass
  • ✅ Type checking passes
  • ✅ Linting passes

Notes

Feedback and guidance are welcome!


Important

Adds diagnostic logging for GLM model detection and optimizations, with utility functions and tests for model identification and option retrieval.

  • Behavior:
    • Adds diagnostic logging for GLM model detection in base-openai-compatible-provider.ts and lm-studio.ts.
    • Logs model ID, GLM detection results, and parallel_tool_calls value.
  • Utilities:
    • Adds isGlmModel() and getGlmModelOptions() in model-detection.ts for GLM model detection and option retrieval.
  • Testing:
    • Adds tests in model-detection.spec.ts for GLM model detection and option retrieval.
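
For reference, a minimal sketch of the two utilities, inferred from the detection patterns and option values described elsewhere in this PR (the actual model-detection.ts may differ):

```typescript
// Sketch of model-detection.ts, reconstructed from this PR's description.
// The patterns (glm-, glm4, chatglm) and option values are taken from the
// review comment and example output in this thread; treat details as approximate.
export interface GlmModelOptions {
  mergeToolResultText: boolean;
  disableParallelToolCalls: boolean;
}

const GLM_PATTERNS = ["glm-", "glm4", "chatglm"];

// True if the model ID matches any known GLM naming pattern (case-insensitive).
export function isGlmModel(modelId: string): boolean {
  const id = modelId.toLowerCase();
  return GLM_PATTERNS.some((pattern) => id.includes(pattern));
}

// Returns GLM-specific optimizations, or undefined for non-GLM models.
export function getGlmModelOptions(modelId: string): GlmModelOptions | undefined {
  if (!isGlmModel(modelId)) {
    return undefined;
  }
  return {
    mergeToolResultText: true,
    disableParallelToolCalls: true,
  };
}
```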

This description was created by Ellipsis for 2c5d905.

…viders

This adds automatic GLM model detection for third-party providers, enabling
the same optimizations that Z.ai uses for GLM models:

1. Created isGlmModel() utility function that detects GLM model IDs
2. Created getGlmModelOptions() to get model-specific configuration
3. Modified LM Studio provider to detect GLM models and apply:
   - mergeToolResultText option to prevent dropping reasoning_content
   - disabled parallel_tool_calls by default for GLM models
4. Modified BaseOpenAiCompatibleProvider with the same GLM handling

This addresses issue #11071 questions about GLM model detection and
ensuring Z.ai improvements are available to LM Studio and OpenAI-compatible
endpoints running GLM models.
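
As a rough sketch of how a provider might apply these options when building an OpenAI-style request (the function name and surrounding scaffolding here are assumptions, not the actual provider code):

```typescript
// Hypothetical sketch: folding the GLM options into a chat-completion request.
// Only the parallel_tool_calls behavior is grounded in this PR; the rest is scaffolding.
import OpenAI from "openai";
import { getGlmModelOptions } from "./model-detection";

function buildCompletionParams(
  modelId: string,
  messages: OpenAI.ChatCompletionMessageParam[],
): OpenAI.ChatCompletionCreateParams {
  const glmOptions = getGlmModelOptions(modelId);
  // GLM models get parallel tool calls disabled by default; others keep them on.
  const parallelToolCalls = !glmOptions?.disableParallelToolCalls;
  console.log(`[LM Studio] parallel_tool_calls set to: ${parallelToolCalls}`);

  return {
    model: modelId,
    messages,
    parallel_tool_calls: parallelToolCalls,
  };
}
```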
- Add console logging in getGlmModelOptions() to show when a GLM model is detected
- Log model ID being used in LM Studio and OpenAI-compatible providers
- Log parallel_tool_calls value being applied
- Helps users verify GLM detection is working correctly

Addresses issue #11071 where users cannot verify if GLM detection is functioning
roomote bot (Contributor, Author) commented on Jan 29, 2026

Rooviewer: See task on Roo Cloud

Review complete. No issues found.

The implementation correctly:

  • Detects GLM models using pattern matching (glm-, glm4, chatglm)
  • Applies appropriate options (mergeToolResultText, disableParallelToolCalls)
  • Integrates with both base-openai-compatible-provider.ts and lm-studio.ts
  • Has comprehensive test coverage for model detection edge cases
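
That edge-case coverage might look roughly like this (vitest is assumed from the .spec.ts naming, and the case names are illustrative; the concrete tests in model-detection.spec.ts may differ):

```typescript
// Illustrative test sketch, consistent with the patterns and options
// described in this PR. Not a copy of the actual model-detection.spec.ts.
import { describe, expect, it } from "vitest";
import { getGlmModelOptions, isGlmModel } from "./model-detection";

describe("isGlmModel", () => {
  it("matches the documented GLM patterns", () => {
    expect(isGlmModel("glm-4.5")).toBe(true);
    expect(isGlmModel("GLM-4-Plus")).toBe(true); // case-insensitive
    expect(isGlmModel("chatglm3-6b")).toBe(true);
  });

  it("rejects non-GLM model IDs", () => {
    expect(isGlmModel("llama-3-8b")).toBe(false);
  });
});

describe("getGlmModelOptions", () => {
  it("returns the GLM optimizations only for GLM models", () => {
    expect(getGlmModelOptions("glm-4.5")).toEqual({
      mergeToolResultText: true,
      disableParallelToolCalls: true,
    });
    expect(getGlmModelOptions("llama-3-8b")).toBeUndefined();
  });
});
```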

Diagnostic logging output matches the expected format from the PR description.

Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.

