Model Context Protocol (MCP) is an open standard that defines how AI agents communicate with external tools and data sources. Instead of building custom integrations for every service an AI needs to access, developers implement MCP once — and their agent can connect to any compliant server.

The short version: MCP is to AI agents what HTTP is to the web. A shared protocol that lets different systems talk to each other without custom glue code for every connection.

Why MCP Exists

Before MCP, connecting an AI agent to an external service meant writing a custom integration: a specific function, a hardcoded API call, a wrapper that translated between the model’s output and the service’s input. Multiply that by every tool your agent needs — databases, APIs, file systems, third-party services — and the integration surface becomes unmanageable.

MCP solves this by defining a standard interface. An MCP server exposes capabilities (tools, resources, prompts) through a consistent protocol. An MCP client — which is the AI agent or the host application — connects to those servers and uses their capabilities without knowing anything about the underlying implementation.

How MCP Works

An MCP server exposes three types of capabilities:

Tools are functions the AI can call. A database MCP server might expose query_table, insert_record, list_tables. The AI decides when and how to use them based on the task.

Resources are data the AI can read. A file system server might expose the contents of files or directories. A CRM server might expose customer records.

Prompts are reusable prompt templates. Servers can provide structured prompts that help the AI interact with their service correctly.
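To make the tool capability concrete, here is a hedged sketch of what a database server's response to a `tools/list` request might look like on the wire. The JSON-RPC 2.0 envelope and the `name`/`description`/`inputSchema` fields follow the MCP specification; the tool names echo the database example above and the schemas are illustrative.

```python
import json

# Sketch of a tools/list response from a hypothetical database MCP server.
# Envelope is JSON-RPC 2.0; each tool advertises a JSON Schema for its inputs.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_table",
                "description": "Run a read-only query against one table.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "table": {"type": "string"},
                        "where": {"type": "string"},
                    },
                    "required": ["table"],
                },
            },
            {
                "name": "list_tables",
                "description": "List all tables in the database.",
                "inputSchema": {"type": "object", "properties": {}},
            },
        ]
    },
}

# A client discovers capabilities by reading this list, not by reading code.
names = [t["name"] for t in tools_list_response["result"]["tools"]]
print(names)
```

Because the schema travels with the tool, the client needs no prior knowledge of the server's implementation, only the ability to parse this shape.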

The AI agent (running inside a host application like Claude, Cursor, or a custom system) connects to one or more MCP servers. The host application manages the connections. The agent uses the exposed capabilities as part of its reasoning and task execution.
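A single tool invocation from the agent's side can be sketched as one JSON-RPC message. The `tools/call` method and the `name`/`arguments` parameter shape come from the MCP specification; the specific tool and arguments are illustrative, continuing the database example.

```python
import json

# JSON-RPC 2.0 request a host sends to invoke one of a server's tools.
# Method name and params shape follow the MCP spec; the tool is illustrative.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_table",
        "arguments": {"table": "customers", "where": "plan = 'pro'"},
    },
}

# The host serializes the message onto whichever transport connects it
# to the server (STDIO or Streamable HTTP).
wire_message = json.dumps(call_request)
print(wire_message)
```

The agent decides *when* to send this; the host application owns the connection it travels over.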

The Transport Layer

MCP servers communicate over one of two transports:

STDIO runs the server as a local process. The host application spawns it and communicates via standard input/output. This is the standard for local development tools — servers that run on the same machine as the client.

Streamable HTTP is the production transport. The server runs as a web service. Clients connect over HTTP. This is the right choice for any server that needs to be accessible remotely, handle multiple concurrent connections, or run in a deployed environment.

As of April 2026, Streamable HTTP has replaced SSE (Server-Sent Events) as the recommended remote transport. If you’re building a new MCP server for production, use Streamable HTTP.
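The STDIO transport can be sketched in a few lines: the host spawns the server process and writes newline-delimited JSON-RPC messages to its stdin. A real host would spawn an MCP server binary; this sketch spawns `cat` instead, so it is self-contained and simply echoes the framed message back.

```python
import json
import subprocess

# Spawn a stand-in "server" process. A real host would launch an MCP server
# executable here; `cat` just echoes stdin to stdout, which lets us verify
# the framing without any server installed.
proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

# Over STDIO, MCP messages are newline-delimited JSON-RPC 2.0 objects.
# The initialize request shown here follows the spec's handshake shape;
# clientInfo values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-host", "version": "0.1"},
    },
}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.close()

# Read one framed message back and parse it.
echoed = json.loads(proc.stdout.readline())
print(echoed["method"])
proc.wait()
```

Streamable HTTP carries the same JSON-RPC messages; only the framing and connection management differ.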

Authentication

MCP servers that require authorization use OAuth 2.1. The 2025-03-26 version of the spec formalized this, replacing earlier ad-hoc approaches. If your server accesses user-specific data or paid APIs, you implement OAuth 2.1 and handle the token exchange through the MCP authorization flow.
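Once the OAuth 2.1 flow yields an access token, the client attaches it as a Bearer token to every Streamable HTTP request. A minimal sketch, assuming a hypothetical server URL and an already-obtained token (the Bearer header is standard OAuth; the endpoint path and token value are placeholders, not part of the spec):

```python
import json
import urllib.request

SERVER_URL = "https://mcp.example.com/mcp"  # hypothetical endpoint
access_token = "example-token"              # would come from the OAuth 2.1 flow

# A Streamable HTTP client POSTs JSON-RPC messages to the server's endpoint.
body = json.dumps(
    {"jsonrpc": "2.0", "id": 3, "method": "tools/list", "params": {}}
).encode()

req = urllib.request.Request(
    SERVER_URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}",
    },
)

# urllib.request.urlopen(req) would send the call; it is skipped here
# because the endpoint is hypothetical.
print(req.get_header("Authorization"))
```

An unauthorized request would come back 401, which is the client's cue to start (or refresh) the authorization flow.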

Who Supports MCP

MCP was introduced by Anthropic in November 2024. As of 2026, it has broad support:

  • Anthropic: Claude, Claude Code, Claude Desktop — full MCP support
  • OpenAI: ChatGPT, Agents SDK — MCP support added March 2025
  • Google: Gemini, Google AI Studio — MCP support added April 2025
  • Microsoft: Azure AI, GitHub Copilot — MCP support added 2025
  • Developer tools: Cursor, Continue, Zed, VS Code — native MCP support
  • Open source: LangChain, LlamaIndex, AutoGen — MCP integrations available

The community has also driven ecosystem growth: over 13,000 public MCP servers on GitHub as of early 2026, spanning databases, APIs, developer tools, productivity apps, and infrastructure services.

MCP vs. Function Calling

Function calling (also called tool use) is a feature that lets AI models invoke predefined functions during a conversation. It’s model-specific — each provider implements it differently, and you define the functions inline in your application code.

MCP is a standard that runs on top of or alongside function calling. Instead of defining tools inline, you connect to a server that exposes them. The key differences:

                          Function Calling            MCP
Where tools are defined   In your application code    In an external server
Portability               Model-specific              Works with any MCP-compatible client
Reusability               Per-application             Share servers across applications
Maintenance               Your responsibility         Server maintainer's responsibility

For simple, one-off integrations, function calling is fine. For anything you want to reuse across agents, share with other developers, or maintain independently of your application, MCP is the right choice.
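The contrast can be made concrete with a sketch: an inline function-calling definition lives in your application code, while with MCP the application only points at a server. The config shape below mirrors the `mcpServers` convention used by hosts such as Claude Desktop; the server name and command are illustrative.

```python
import json

# Function calling: the tool definition lives inline, in this application's
# code, in whatever schema the model provider expects (shape varies by
# provider; this one is illustrative).
inline_tool = {
    "name": "query_table",
    "description": "Run a read-only query against one table.",
    "parameters": {
        "type": "object",
        "properties": {"table": {"type": "string"}},
        "required": ["table"],
    },
}

# MCP: the application configures a connection to a server; the server owns
# and maintains the tool definitions. Server name and command are examples.
mcp_config = {
    "mcpServers": {
        "postgres": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-postgres"],
        }
    }
}

print(json.dumps(mcp_config, indent=2))
```

Swapping databases means editing one config entry, not rewriting tool definitions scattered through application code.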

Finding MCP Servers

AgentNDX indexes MCP servers across categories: databases, developer tools, APIs, file systems, communication, observability, security, and more. Browse the full registry at agentndx.ai/browse or search for specific capabilities through the API.

The registry includes server metadata, documentation links, GitHub stars, transport type, and whether the server supports x402 payments for agent-to-agent commerce.