Fix Ollama Verbose Output in OpenCode GPT-OSS TUI
Troubleshoot and fix OpenCode Ollama provider issues that cause verbose, incorrect file listings with the gpt-oss:20b model. Update your opencode.json config, enable tools, and test commands to get concise output like hosted providers.
Why does OpenCode produce incorrect and verbose output when using the Ollama provider to list files in a project directory, unlike the concise output from the big-pickle provider? I have a project folder with a Python file and AGENTS.md. When running the ‘list all files in the current folder’ command in the OpenCode TUI (version 1.0.169) installed via NPM on Windows, the Ollama provider (with model gpt-oss:20b) outputs garbled file contents and attempts to open non-existent or corrupted files like world_clock.py and start.py, including irrelevant problem descriptions about internet research. In contrast, the big-pickle provider gives a short file list. My opencode.json configuration is:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://somehost:11434/v1"
      },
      "models": {
        "gpt-oss:20b": {}
      }
    }
  }
}
I’m using the latest Ollama version. How can I troubleshoot and fix this issue to get proper file listing with Ollama?
OpenCode’s Ollama provider generates verbose, incorrect output, such as garbled file contents or references to non-existent files (e.g., world_clock.py), because local models like gpt-oss:20b often struggle with precise tool calling for commands such as listing project files, falling back to hallucinations or lengthy explanations instead. Hosted providers like big-pickle deliver concise lists because their APIs enforce tool adherence. You can fix this by updating your opencode.json to enable tools and reasoning explicitly, verifying Ollama’s tool support, and testing model-specific prompts in the TUI.
Contents
- Why Ollama Causes Verbose Output in OpenCode
- Key Differences from GPT-OSS and Hosted Providers
- Troubleshoot Your Opencode.json Config
- Fix Ollama Tool Calling Issues
- Test File Listing Commands Step-by-Step
- Best Models and Workarounds for Windows
- When to Switch Providers
Why Ollama Causes Verbose Output in OpenCode
You’ve nailed the symptoms: running “list all files in the current folder” in OpenCode’s TUI (v1.0.169) spits out irrelevant details about internet research or phantom files like start.py, while your actual project—just a Python file and AGENTS.md—gets ignored. This happens because Ollama models, even capable ones like gpt-oss:20b, don’t always trigger OpenCode’s built-in tools reliably. Tools like list or repo_browser.read_file are available, but the model hallucinates verbose narratives instead of calling them cleanly.
The OpenCode GitHub issue on Ollama tool failures mirrors your setup exactly—users report aborted executions where models “attempt” tools but bail, outputting advice or garbage. On Windows via NPM, permissions and network quirks with http://somehost:11434/v1 exacerbate this, unlike Linux where file access is smoother.
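Before touching the config, confirm the machine running OpenCode can reach the Ollama host at all. A quick check, assuming somehost resolves from your Windows box (Ollama’s root endpoint normally replies with “Ollama is running”):
# Basic reachability test from the machine running OpenCode
curl http://somehost:11434/
# Expected reply: "Ollama is running"; a timeout or connection refused
# points to a network or firewall issue, not a model problem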
Key Differences from GPT-OSS and Hosted Providers
GPT-OSS shines in concise outputs because it’s tuned for coding and agentic tasks, but when run locally via Ollama it loses that edge: tool calling drops off, leading to far more verbose responses. Hosted providers like big-pickle (your working example) use cloud APIs with enforced tool adherence, so “list files” just returns:
AGENTS.md
your_python_file.py
No fluff. Local Ollama setups, per Reddit discussions on OpenCode + Ollama, fail on Windows due to incomplete tool schemas or context limits. Your config lacks explicit tools: true and reasoning: true, which gpt-oss:20b needs to prioritize actions over chit-chat.
Troubleshoot Your Opencode.json Config
Start here—your config is close but missing model flags. Update it to:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://somehost:11434/v1"
      },
      "models": {
        "gpt-oss:20b": {
          "tools": true,
          "reasoning": true,
          "options": {
            "num_ctx": 65536
          }
        }
      }
    }
  }
}
Save in ~/.config/opencode/opencode.json (or %APPDATA%\opencode on Windows). Restart OpenCode TUI with npx opencode@latest. This matches fixes from GitHub issue #1068, forcing gpt-oss:20b to use tools.
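If the edited JSON is malformed, OpenCode may silently ignore your overrides, so validate it before restarting. A minimal check, assuming python is on your PATH (any JSON linter works):
# Validate that the config parses as JSON
# (use %APPDATA%\opencode\opencode.json on Windows)
python -m json.tool ~/.config/opencode/opencode.json
# Prints formatted JSON on success, or a parse error with a line number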
Ping Ollama: curl http://somehost:11434/v1/models to confirm gpt-oss:20b appears in the model list. If not, pull a fresh quant: ollama pull gpt-oss:20b.
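A populated models list only proves the endpoint is up; it doesn’t prove tool calling works. A smoke test against the OpenAI-compatible chat endpoint, in Unix shell syntax and assuming the host and model from your config (get_files is a made-up example tool):
# One chat request with a tool definition: a healthy model should emit
# a tool_calls entry instead of prose
curl http://somehost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss:20b",
    "messages": [{"role": "user", "content": "List all files in the current folder."}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_files",
        "description": "List files in a directory",
        "parameters": {
          "type": "object",
          "properties": {"path": {"type": "string"}},
          "required": ["path"]
        }
      }
    }]
  }'
# Look for "tool_calls" in choices[0].message; a wall of prose instead
# reproduces the verbose behavior you see in the TUI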
Fix Ollama Tool Calling Issues
Ollama tool problems stem from incomplete OpenAI-compatible endpoints. Users in issue #1034 fixed similar “unavailable tool” errors with the steps below (a launch sketch follows the list):
- Ensuring Ollama runs with the --api flag if needed (though the latest versions default to it).
- Setting the OLLAMA_HOST=somehost:11434 environment variable before launching OpenCode.
- For Windows NPM installs, running as admin; file permissions can block the list tool, causing fallbacks to “descriptions.”
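A minimal launch sequence tying those together, assuming a Unix-style shell (on Windows cmd, use set OLLAMA_HOST=somehost:11434 instead of export):
# Point the Ollama client environment at the right host before launching
export OLLAMA_HOST=somehost:11434

# Start the TUI from the project directory so the list tool sees your files
cd /path/to/project
npx opencode@latest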
Test tool access: in the TUI, type /debug tools to see the available ones (expect list, read, glob). If gpt-oss:20b ignores them, switch to a model with stronger tool support like qwen3:32b: add it to your config and run ollama pull qwen3:32b (a quick capability check follows below). Issue #3029 shows this resolves “invalid arguments” errors for repo tools.
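A quick way to vet a candidate model before wiring it into the config; a sketch assuming a recent Ollama build, where ollama show lists model capabilities:
# Pull the alternative model
ollama pull qwen3:32b

# Inspect it; recent Ollama builds list "tools" under Capabilities
# for models that support tool calling
ollama show qwen3:32b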
Update Ollama: check ollama --version and grab the latest from ollama.com if you’re below 0.3.x.
Test File Listing Commands Step-by-Step
Boot the TUI in your project dir:
- cd /path/to/project
- npx opencode@latest
- Select Ollama > gpt-oss:20b
- Type exactly: @list . or /list (TUI autocomplete helps).
- If verbose, prepend “Use only the list tool. Output format: file1\nfile2”:
@list . Use only the list tool. Output format: file1\nfile2
Watch logs (/logs in TUI) for tool calls. Success looks like:
Tool: list {"path": "."}
Response: AGENTS.md\nscript.py
If it still rambles, your Ollama instance may lack GPU acceleration; gpt-oss:20b needs around 16GB of VRAM for sharp reasoning. Check ollama ps.
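What to look for in that output; a sketch assuming the model is currently loaded (the PROCESSOR column shows where the model’s layers landed):
# Show loaded models and where they run
ollama ps
# Illustrative output (ID and timing are made up):
# NAME          ID              SIZE     PROCESSOR    UNTIL
# gpt-oss:20b   abc123def456    14 GB    100% GPU     4 minutes from now
# "100% CPU" or a CPU/GPU split usually means slow, degraded responses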
Best Models and Workarounds for Windows
Windows NPM + Ollama hits snags like issue #729 (no file access). Workarounds:
- Use WSL2: install Ubuntu, run Ollama there, and point baseURL to localhost:11434.
- Models that rock: devstral (per issue #154) or qwen3:30b with MoE for tools.
- NPM global install: npm i -g @opencode-ai/cli@latest for stability.
- Disable safeguards if output stays verbose: some gpt-oss quants have them baked in.
Reddit tips suggest manual JSON edits per model; a multi-model setup will need scripting, as in the sketch below.
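A hedged sketch of that scripting, assuming jq is installed and the config lives at the Linux/macOS path (adjust for %APPDATA% on Windows); it flips on tools and reasoning for every model under the ollama provider:
CONFIG=~/.config/opencode/opencode.json

# Add "tools": true and "reasoning": true to every ollama model entry
jq '.provider.ollama.models |= with_entries(.value += {"tools": true, "reasoning": true})' \
  "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"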
When to Switch Providers
If tweaks fail, mirror big-pickle: add a hosted fallback inside the "provider" block of your config:
"big-pickle": {
"npm": "@ai-sdk/anthropic",
"models": { "claude-3-5-sonnet": {} }
}
But for a local OpenCode + Ollama setup, persistence pays: most users get concise lists after the config changes. Monitor OpenCode issues for Ollama Turbo support.
Sources
- https://github.com/sst/opencode/issues/1068
- https://www.reddit.com/r/ollama/comments/1o9w6zv/opencode_ollama_doesnt_work_with_local_llms_on/
- https://github.com/sst/opencode/issues/1034
- https://github.com/sst/opencode/issues/3029
- https://github.com/sst/opencode/issues/729
- https://github.com/sst/opencode/issues/154
- https://www.reddit.com/r/opencodeCLI/comments/1p9s69v/tips_for_opencode_with_ollama_and_any_model/
- https://github.com/sst/opencode/issues/2467
- https://apidog.com/blog/opencode/
Conclusion
Enable tools: true and reasoning: true in your opencode.json for gpt-oss:20b, verify Ollama tool calling via curl, and test @list . prompts: these steps turn verbose Ollama chaos into crisp file lists matching hosted providers. If Windows issues persist, WSL2 or qwen3 models deliver reliably. Keep an eye on the GitHub issues above for ongoing OpenCode + Ollama fixes.