Together AI MCP
Access the Together AI inference API via MCP. Run open-source LLMs — Llama, Mistral, DBRX, and more — with fast parallel inference at scale.
MCP verified
Integration
| Field | Value |
| --- | --- |
| Transport | stdio |
| Auth | api-key |
| Endpoint | `npx @togetherai/mcp-server` |
| Install | `npx @togetherai/mcp-server` |
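Because the transport is stdio, an MCP client launches the server as a local subprocess rather than calling a network endpoint. A minimal client configuration sketch in the JSON format many MCP clients use (the `together-ai` entry name, the `TOGETHER_API_KEY` variable name, and the placeholder key value are assumptions; check the server's own documentation for the exact environment variable):

```json
{
  "mcpServers": {
    "together-ai": {
      "command": "npx",
      "args": ["@togetherai/mcp-server"],
      "env": {
        "TOGETHER_API_KEY": "<your-api-key>"
      }
    }
  }
}
```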
Use Cases
1. Run Llama, Mistral, or DBRX inference in agent pipelines
2. Fine-tune open-source models on custom datasets
3. Parallel batch inference for high-throughput workflows
Tags
together-ai llm inference open-source llama