Ollama MCP
Local LLM inference via Ollama. Run Llama, Mistral, Gemma, and other models locally — no API keys, no data leaving the machine.
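For context on the "no API keys, no data leaving the machine" claim: Ollama itself serves a local HTTP API on its default port. The sketch below is a minimal illustration of a local, key-free generation request; it assumes Ollama is already running and that the `llama3.2` model (an illustrative name) has been pulled.

```typescript
// Sketch: call the local Ollama HTTP API directly (no API key required).
// Assumes Ollama is listening on its default port 11434 and that
// `ollama pull llama3.2` has already been run; the model name is illustrative.
async function generateLocally(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.2", prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // non-streaming responses carry the text in `response`
}

generateLocally("Summarize what MCP is in one sentence.").then(console.log);
```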
MCP verified
Integration
| Field     | Value            |
|-----------|------------------|
| Transport | stdio            |
| Auth      | none             |
| Endpoint  | `npx ollama-mcp` |
| Install   | `npx ollama-mcp` |
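A minimal sketch of wiring the server into an MCP client over stdio, using the command from the Install row above. It assumes the official `@modelcontextprotocol/sdk` TypeScript client is installed; the client name and version are placeholders, and the tools returned depend on what `ollama-mcp` actually exposes.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Sketch: launch the server via `npx ollama-mcp` and connect over stdio.
// Client name/version below are placeholders, not required values.
async function main() {
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["ollama-mcp"],
  });

  const client = new Client({ name: "example-client", version: "0.1.0" });
  await client.connect(transport);

  // List whatever tools this server exposes (names vary by server version).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  await client.close();
}

main().catch(console.error);
```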
Use Cases
1. Run LLMs locally without sending data to external APIs
2. Prototype agent workflows using open-source models
3. Switch between models for cost vs. quality tradeoffs (see the sketch below)
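As an illustration of the third use case, the sketch below picks a smaller or larger local model per request and calls Ollama's local chat endpoint directly. The model names are illustrative and must already be pulled; in practice the same switch would be made through whichever model parameter the `ollama-mcp` tools accept.

```typescript
// Sketch: trade cost (latency, VRAM) against quality by switching local models.
// Model names are illustrative; both must be pulled with `ollama pull` first.
type Tier = "fast" | "quality";

const MODELS: Record<Tier, string> = {
  fast: "llama3.2:3b",     // small model: quick drafts, classification
  quality: "llama3.1:70b", // large model: final answers, harder reasoning
};

async function chat(tier: Tier, content: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODELS[tier],
      messages: [{ role: "user", content }],
      stream: false,
    }),
  });
  const data = await res.json();
  return data.message.content; // non-streaming chat responses nest text here
}

// Draft with the cheap model; escalate to the large one only when needed.
chat("fast", "Give three candidate titles for a post on MCP.").then(console.log);
```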
Tags
ollama local-llm privacy llama open-source
Machine-readable: /api/servers.json · JSON-LD schema embedded in &lt;head&gt;