GPT Commander is a security-first MCP server that gives LLMs a safe, structured way to operate on a developer environment. It provides filesystem, process, and search tools; opinionated build/test/lint primitives; optional IDE-native context via VS Code; and integrations for CLI runners and memory stores.
This repo is intentionally "LLM-native": the tool surface is constrained, named, and predictable so models can operate reliably without improvising shell commands.
- LLMs are best when the action space is small, stable, and explicit.
- Most existing "terminal helpers" are too permissive or too brittle.
- The difference between "helper" and "pair programmer" is IDE context and safe primitives.
- Security guardrails: path allowlist, command blocklist, read/write limits, binary file sniffing.
- Process management: start, interact, read output, kill; TTL auto-prunes long-running sessions.
- Search + edits: ripgrep-backed search + targeted block replacement.
- Opinionated primitives: `run_tests`, `lint`, `build`, `typecheck`, `dev_server_start`, etc.
- IDE bridge (VS Code): diagnostics, symbols, references, workspace edits, and unsaved buffers.
- CLI wrappers: safe runners for Codex, Claude, and Gemini CLIs.
- Memory MCP: a `memory_note` tool that stores ad hoc notes externally.
- Telemetry: optional tool/command metrics + JSONL event log.
- Onboarding gate: mandatory obstacle course to ensure the model is actually calling tools correctly.
- HTTP/SSE or stdio: run as a local stdio MCP or as a URL-based SSE MCP.
- Node.js >= 18
- ripgrep (`rg`) on PATH (recommended for `start_search`)
```sh
npm install
npm run build
```

Add to your MCP config (e.g. GPT Desktop, Claude Desktop, Codex, etc.):
```json
{
  "mcpServers": {
    "gpt-commander": {
      "command": "node",
      "args": ["/path/to/dist/index.js"],
      "env": {
        "GPT_DC_CONFIG": "/optional/custom/config.json"
      }
    }
  }
}
```

To run as a URL-based SSE MCP instead:

```sh
npm run build
node dist/sse.js
```

Then point your MCP client at:
http://127.0.0.1:3333/mcp
If you are using a reverse proxy or tailnet URL, set an absolute endpoint:
```sh
GPT_DC_HOST=127.0.0.1 GPT_DC_PORT=3333 \
GPT_DC_ABSOLUTE_ENDPOINT="https://your-host.example/desktop" \
node dist/sse.js
```

This repo includes a standalone MCP server that exposes local Ollama via tools.
```sh
npm run build
OLLAMA_HOST="http://127.0.0.1:11434" node dist/ollama-mcp.js
```

Tools:

- `ollama_list_models`
- `ollama_show_model`
- `ollama_generate`
- `ollama_chat`

Optional env:

- `OLLAMA_HOST` (default `http://127.0.0.1:11434`)
- `OLLAMA_TIMEOUT_MS` (default `120000`)
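For reference, an MCP config entry for the Ollama server can mirror the Commander entry above (the server name and path here are illustrative):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/dist/ollama-mcp.js"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434"
      }
    }
  }
}
```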
An OpenWebUI Function plugin that bridges to Commander MCP with a minimal tool list plus CLI wrappers.
- File: `openwebui/commander_mcp_bridge.py`
- Setup: `openwebui/README.md`
Default config is created at `~/.gpt-desktop-commander/config.json`.
```json
{
  "allowedDirectories": ["$HOME"],
  "blockedCommands": ["rm", "shutdown", "reboot", "mkfs", "mount", "umount"],
  "fileReadLimitBytes": 10485760,
  "fileWriteLineLimit": 10000,
  "defaultShell": "$SHELL",
  "enableShell": true,
  "telemetryEnabled": false,
  "telemetryLogFile": "$HOME/.gpt-desktop-commander/telemetry.jsonl",
  "telemetryMaxEvents": 200,
  "processTtlMinutes": 480,
  "onboarding": {
    "enabled": true
  },
  "memoryMcp": {
    "enabled": false,
    "url": "",
    "defaultType": "note",
    "defaultScope": "",
    "defaultTags": [],
    "defaultTtlDays": 0
  },
  "ideBridge": {
    "enabled": false,
    "host": "127.0.0.1",
    "port": 7311,
    "token": "<generated>",
    "requestTimeoutMs": 15000,
    "originAllowlist": []
  }
}
```

Set `GPT_DC_CONFIG` to point at a specific config file. The config is loaded at server start, so restart the server to apply changes.
When `onboarding.enabled` is true, all tool calls are gated until onboarding completes. The tool list remains visible so the model can discover `commander_onboard`.
Onboarding steps:
1. `commander_onboard`
2. `tools/list`
3. `list_directory`
4. `start_search`
5. `commander_finish`
This ensures the model has proven it can execute tools correctly before getting full capabilities.
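The first step is a plain MCP `tools/call` request. The empty-arguments shape below is an assumption; check the tool's declared input schema:

```json
{
  "method": "tools/call",
  "params": { "name": "commander_onboard", "arguments": {} }
}
```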
The IDE bridge exposes LSP-grade context: diagnostics, symbols, references, workspace edits, and unsaved buffers.
- Enable in config:

```json
{
  "ideBridge": {
    "enabled": true,
    "host": "127.0.0.1",
    "port": 7311,
    "token": "YOUR_TOKEN"
  }
}
```

- Install the VS Code extension from the `vscode-extension/` folder.
- Set VS Code settings:
```
gptDesktopCommander.bridgeUrl = "ws://127.0.0.1:7311"
gptDesktopCommander.bridgeToken = "YOUR_TOKEN"
```
- Use `ide_list_workspaces()` to get the `workspaceId`, then call:
  - `ide_get_diagnostics`
  - `ide_symbol_search`
  - `ide_find_references`
  - `ide_read_buffer`
  - `ide_apply_workspace_edit`
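A diagnostics call might look like this; the exact argument names are an assumption based on the workflow above, so verify against the tool schema:

```json
{
  "method": "tools/call",
  "params": {
    "name": "ide_get_diagnostics",
    "arguments": { "workspaceId": "..." }
  }
}
```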
The `project_info` tool detects the project's language and tooling. Primitives use the best available commands:
Detection:
- `package.json`, lockfiles for npm/yarn/pnpm
- `tsconfig*.json` for typecheck

Commands:

- `run_tests`: `npm run test` or local `vitest`/`jest`/`mocha`/`ava`
- `lint`: `npm run lint` or local `eslint`/`biome`
- `format`: `npm run format` or local `prettier`/`biome`
- `typecheck`: local `tsc`/`vue-tsc --noEmit`
- `dev_server_start`: `npm run dev` or `npm run start`
Detection:
- `pyproject.toml`, `requirements.txt`, `setup.py`
- `uv`, `poetry`, or `pip`

Commands:

- `run_tests`: `pytest`
- `lint`: `ruff check .`
- `format`: `ruff format .` or `black .`
- `typecheck`: `mypy .`
Set the paths and the CLI wrappers become tools:
```sh
CODEX_PATH=/opt/homebrew/bin/codex
CLAUDE_PATH=$HOME/.local/bin/claude
GEMINI_PATH=/opt/homebrew/bin/gemini
GEMINI_WORKDIR=$HOME/gemini-work
```

Tools:

- `codex_run(args?, cwd?, env?, timeoutMs?, maxBufferBytes?, input?)`
- `claude_run(args?, cwd?, env?, timeoutMs?, maxBufferBytes?, input?)`
- `gemini_run(args?, cwd?, env?, timeoutMs?, maxBufferBytes?, input?)`
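A wrapper call follows the same `tools/call` shape as every other tool; the `args` payload below is purely illustrative (pass whatever flags your Codex CLI actually accepts):

```json
{
  "method": "tools/call",
  "params": {
    "name": "codex_run",
    "arguments": { "args": ["--help"], "timeoutMs": 60000 }
  }
}
```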
`memory_note` writes to a separate Memory MCP (e.g. `memory.write`).
Config:
```json
{
  "memoryMcp": {
    "enabled": true,
    "url": "https://memory.example/mcp",
    "defaultType": "note"
  }
}
```

Tool:

```
memory_note { "content": "...", "type": "note", "tags": ["ops"] }
```
When enabled, `get_metrics` returns totals and recent events for:
- tool calls
- command exits
- durations
- error counts
Optional JSONL logging is written to `telemetryLogFile`.
Core tools:
- `commander_onboard`, `commander_finish`
- `get_metrics`
- `memory_note`
- `list_directory`, `read_file`, `write_file`, `get_file_info`
- `start_process`, `interact_with_process`, `read_process_output`, `list_sessions`, `kill_process`
- `start_search` (ripgrep)
- `edit_block`
IDE tools:
- `ide_list_workspaces`, `ide_get_diagnostics`, `ide_symbol_search`, `ide_find_references`
- `ide_read_buffer`, `ide_apply_workspace_edit`
Primitives:
- `project_info`
- `run_tests`, `run_unit_tests`, `run_integration_tests`
- `lint`, `format`, `typecheck`, `build`
- `dev_server_start`, `dev_server_stop`
Dev helpers:
- `git_status`, `git_diff`, `git_commit_prepare`
- `search_code`, `find_symbol`, `open_file_at_line`
Server:
- `GPT_DC_CONFIG` (config path override)
- `GPT_DC_HOST`, `GPT_DC_PORT` (SSE bind)
- `GPT_DC_TAILNET_HOST` (allowed host list)
- `GPT_DC_ABSOLUTE_ENDPOINT` (absolute base for SSE rewrite)
- `GPT_DC_TLS_KEY`, `GPT_DC_TLS_CERT` (optional HTTPS)
CLI wrappers:
- `CODEX_PATH`, `CLAUDE_PATH`, `GEMINI_PATH`, `GEMINI_WORKDIR`
- Path allowlist: every FS op is resolved inside `allowedDirectories` using realpath.
- Command blocklist: hard-fails dangerous commands before spawn.
- Read/write limits: prevents giant file operations.
- Binary detection: rejects non-text reads by default.
- Process TTL: prevents zombie processes from lingering.
- Tool not found: ensure you are calling tools by name via `tools/call`, not `/Commander/link_*`.
- Tool list stale: start a new chat to refresh the tool registry.
- CLI tool missing: verify the `*_PATH` env var is set before server start.
- IDE bridge offline: confirm the VS Code extension is running and the token matches.
- SSE origin mismatch: set `GPT_DC_ABSOLUTE_ENDPOINT` for proxy/tailnet URLs.
- Append-only audit log
- Rich search (glob + ignore support)
- Safer structured edits (range-based)
- Per-tool rate limits
PolyForm Noncommercial 1.0.0. See LICENSE.md.