Ollama MCP
Local LLM inference via Ollama. Run Llama, Mistral, Gemma, and other models locally — no API keys, no data leaving the machine.
Status: MCP verified
Transport: stdio
Auth: none
Endpoint: npx ollama-mcp (a stdio launch command, not a URL)
Install: npx ollama-mcp
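Because the transport is stdio, an MCP client launches the server as a child process and speaks the protocol over stdin/stdout rather than connecting to a network endpoint. Below is a minimal sketch of wiring this up with the official TypeScript SDK (@modelcontextprotocol/sdk); the client name and version strings are arbitrary placeholders, not values the server requires.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch ollama-mcp as a child process; MCP messages flow over stdin/stdout.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["ollama-mcp"],
});

// Name/version identify this client to the server; the values here are placeholders.
const client = new Client({ name: "example-client", version: "1.0.0" });

await client.connect(transport);

// Ask the server what tools it exposes.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();
```

The same command/args pair goes into an MCP client's server configuration (for example, a desktop client's server registry) when you are not wiring the transport up by hand.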
01 Run LLMs locally without sending data to external APIs (see the sketch after this list)
02 Prototype agent workflows using open-source models
03 Switch between models to trade cost against quality
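For context on what "local" means here: Ollama itself serves a REST API on localhost (port 11434 by default), and this MCP server wraps it. The sketch below calls that API directly, assuming Ollama is running and the named models have been pulled (e.g. `ollama pull llama3.2`); swapping the `model` field is exactly the cost-versus-quality tradeoff in use case 03.

```typescript
// Call the local Ollama REST API directly; no request leaves the machine.
// Assumes Ollama is running on its default port and the models are pulled.
const OLLAMA_URL = "http://localhost:11434/api/generate";

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Use case 03: same prompt, two models; a larger model costs more compute,
// a smaller one answers faster at lower quality.
console.log(await generate("llama3.2", "Summarize MCP in one sentence."));
console.log(await generate("gemma2:2b", "Summarize MCP in one sentence."));
```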
Tags: ollama · local-llm · privacy · llama · open-source