# ollama
Ollama is a local LLM runner for embeddings and inference.
## Programming: Why Ollama Embeddings Are Slow on a Linux VM (ChromaDB, Posthog)
Fix slow Ollama embeddings in ChromaDB on a Linux VM without CUDA: batch requests, disable Posthog telemetry, and ignore benign warnings. Run fully local for privacy and to speed up email embedding performance.
1 answer • 1 view
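The batching fix from the summary above can be sketched roughly as follows: instead of one HTTP request per text, group texts and send each group to Ollama's `/api/embed` endpoint, which accepts a list under `input`. The model name `nomic-embed-text` and batch size are illustrative assumptions, not taken from the linked answer.

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/embed"

def chunk(items, size):
    """Split a list into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def embed_batched(texts, model="nomic-embed-text", batch_size=32):
    """Embed texts with one request per batch instead of per text."""
    vectors = []
    for batch in chunk(texts, batch_size):
        payload = json.dumps({"model": model, "input": batch}).encode()
        req = urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            # Ollama returns {"embeddings": [[...], ...]} for batched input.
            vectors.extend(json.load(resp)["embeddings"])
    return vectors
```

For the telemetry part of the fix, ChromaDB's Posthog reporting can be disabled via `chromadb.config.Settings(anonymized_telemetry=False)` when creating the client.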
## Programming: Fix Verbose Ollama Output in the OpenCode GPT-OSS TUI
Troubleshoot and fix OpenCode Ollama provider issues that cause verbose, incorrect file listings with the gpt-oss:20b model. Update the opencode.json config, enable tools, and test commands until output is as concise as that of hosted providers.
1 answer • 1 view
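One plausible shape for the `opencode.json` change described above is to register Ollama as an OpenAI-compatible provider pointing at the local server. The exact keys here follow OpenCode's documented provider conventions but are an assumption, not copied from the linked answer; check OpenCode's config reference before using.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "gpt-oss:20b": {}
      }
    }
  }
}
```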