What is MCP? The Model Context Protocol, Explained
A developer's introduction to the Model Context Protocol — what it is, why Anthropic built it, how it differs from tool-use APIs, and how to wire your first MCP server in under 20 minutes.
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools, data sources, and environments. Anthropic published the spec in late 2024, and adoption across Claude Desktop, Cursor, Zed, Windsurf, and a growing set of open-source clients has been fast. If you’re building with LLMs in 2025, you’ll hit MCP within a month.
This post is the short version: what it is, why it matters, and how to write a tiny MCP server yourself.
The problem MCP solves
Every serious LLM integration eventually needs the same three categories of plumbing:
- Resources — static-ish context the model can read (files, documents, database rows).
- Tools — actions the model can call (send an email, run a query, commit a file).
- Prompts — reusable prompt templates that take arguments.
Before MCP, every app reinvented this. Claude Desktop had its own file-access API, Cursor had a different one, and a third tool had its own JSON schema. An MCP server you build once works across every MCP-capable client. That’s the whole pitch.
How it’s different from tool-use APIs
Tool-use (function-calling) in the Claude or OpenAI APIs is a wire format: the model emits a JSON tool call, you execute it, you return the result. MCP is a layer above that — it’s a client/server protocol over JSON-RPC that lets an MCP client discover tools at runtime from any MCP server.
- Tool-use API: the developer hard-codes the list of tools in the prompt.
- MCP: the client asks the server “what tools do you expose?” and the server answers.
That indirection matters. It means the user can plug a new server into their Claude Desktop install and new capabilities appear without any app update. It also means a tool author can build once and ship to every client that speaks MCP.
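Concretely, that discovery step is a single JSON-RPC round trip. The client sends {"jsonrpc": "2.0", "id": 1, "method": "tools/list"} and the server answers with something like the following (field names per the MCP spec; the weather tool is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "inputSchema": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    ]
  }
}
```

The client hands that tool list (name, description, input schema) straight to the LLM, which is why no app update is needed when a new server appears.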
The architecture in one picture
```
┌────────────────┐    stdio / HTTP      ┌─────────────────┐
│   MCP Client   │ ◄──── JSON-RPC ────► │   MCP Server    │
│   (Claude      │                      │  (your code)    │
│   Desktop,     │                      │                 │
│   Cursor, …)   │                      │  exposes:       │
│                │                      │  - resources    │
│   calls LLM    │                      │  - tools        │
│   with tool    │                      │  - prompts      │
│   list from    │                      │                 │
│   servers      │                      │                 │
└────────────────┘                      └─────────────────┘
```
The client hosts the LLM conversation and fans out to one or more servers. The server implements the discovery endpoints (tools/list, resources/list, prompts/list) plus the corresponding call handlers (tools/call, resources/read, prompts/get). Transport is usually stdio (for local subprocesses) or HTTP/SSE (for remote servers).
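To demystify what the SDKs do for you, here is a deliberately minimal sketch of the server side of that loop in plain Python: a dispatcher that answers tools/list and tools/call for one hard-coded tool over newline-delimited JSON-RPC. Method and field names follow the MCP spec; real servers also handle initialize, notifications, and proper error framing.

```python
import json
import sys

# One hard-coded tool definition, in the shape a tools/list result expects.
TOOLS = [{
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request dict to a JSON-RPC response dict."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        city = request["params"]["arguments"]["city"]
        result = {"content": [{"type": "text",
                               "text": f"The weather in {city} is 72°F and sunny."}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"Unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve_stdio() -> None:
    """Newline-delimited JSON-RPC over stdio: one request per line, one response per line."""
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
            sys.stdout.flush()
```

Everything the SDKs add on top of this skeleton — schema generation, typed results, transports — is convenience, not protocol.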
A minimal MCP server in Python
The mcp Python SDK is the easiest way to start. Install it (pip install mcp), then drop this into server.py:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    # Fake data for the example
    return f"The weather in {city} is 72°F and sunny."

if __name__ == "__main__":
    mcp.run()
```
That’s a real MCP server. The @mcp.tool() decorator registers the function, the docstring becomes the tool description, and the type annotations drive the JSON Schema that clients discover.
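The annotation-to-schema step is worth seeing in miniature. Here is a hand-rolled sketch of what happens to get_weather's signature — the real SDK uses pydantic and handles far more types; this illustrative version maps only a few Python primitives:

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema type names.
PRIMITIVES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a JSON Schema for a function's parameters from its type annotations."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PRIMITIVES[param.annotation]}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required argument
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"The weather in {city} is 72°F and sunny."

# tool_schema(get_weather) →
# {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}
```

This is why the type annotations in the server above aren't optional decoration: they are the tool's public interface.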
Register it with Claude Desktop by adding this block to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```
Restart Claude Desktop, and the weather tool is live. Ask “what’s the weather in Tokyo?” and the model calls get_weather("Tokyo").
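You can also exercise the server programmatically before touching any desktop client. A smoke-test sketch using the client side of the same mcp SDK — this assumes the SDK is installed and server.py is the file above; adjust the interpreter path for your machine:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn the server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discovered tool names
            result = await session.call_tool("get_weather", {"city": "Tokyo"})
            print(result.content[0].text)

asyncio.run(main())
```

If this prints the fake weather string, any config problem you hit later is on the client side, not in your server.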
A TypeScript version
If you prefer Node, the @modelcontextprotocol/sdk package gives you the same primitives:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather", version: "0.1.0" });

// The SDK takes input schemas as zod shapes and derives the JSON Schema for you.
server.tool(
  "get_weather",
  { city: z.string() },
  async ({ city }) => ({
    content: [{ type: "text", text: `Weather in ${city}: 72°F, sunny.` }],
  })
);

await server.connect(new StdioServerTransport());
```
Same pattern: declare a tool, describe its inputs, return structured content.
Resources vs. tools — when to use which
A tool is an action the model decides to invoke. A resource is passive context the user or client decides to load. Two heuristics:
- If the call has side effects (writes a file, sends a message, charges a card), it’s a tool.
- If the call returns information that could have been loaded at the start of the conversation, it’s probably a resource.
A read-only database query can be either — as a resource, the client might auto-load it into context; as a tool, the model decides when to call it. Most database integrations end up exposing both.
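In FastMCP the two shapes look almost identical, which makes the choice deliberate rather than accidental. A sketch exposing the same hypothetical user lookup both ways — fetch_user is an assumed helper standing in for a real query, and the URI template follows FastMCP's resource decorator syntax:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("users")

def fetch_user(user_id: str) -> str:
    # Placeholder for a real read-only database query.
    return f"user {user_id}: Ada Lovelace, ada@example.com"

@mcp.resource("users://{user_id}/profile")
def user_profile(user_id: str) -> str:
    """Passive context: the user or client decides when to load it."""
    return fetch_user(user_id)

@mcp.tool()
def get_user(user_id: str) -> str:
    """Active lookup: the model decides when to call it."""
    return fetch_user(user_id)
```

Same underlying query, two different answers to "who decides when this runs."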
The three things that trip people up
1. Config file location
Claude Desktop’s config path is OS-specific (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows). Cursor has its own. Most “why isn’t my server showing up” questions turn out to be wrong config path.
2. Absolute paths
The server command inherits whatever working directory the client happened to launch from. Always use absolute paths in the command and args entries: python server.py will usually fail silently, while /usr/local/bin/python /full/path/to/server.py works.
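For example, a config block that avoids the working-directory trap entirely — every path absolute, including the interpreter's (paths here are illustrative; use which python or your venv's interpreter):

```json
{
  "mcpServers": {
    "weather": {
      "command": "/Users/you/project/.venv/bin/python",
      "args": ["/Users/you/project/server.py"]
    }
  }
}
```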
3. Schema strictness
Clients call tools/list once on startup and cache the result. If your tool’s JSON Schema is invalid, the tool disappears from the menu with no error message. When a tool isn’t showing up and the server is clearly running, run the server’s tools/list output through a JSON Schema validator first.
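A stdlib-only sanity check you can run over a server's advertised tools before blaming the client. It catches the most common mistakes (missing inputSchema, a top-level type that isn't "object"); a full JSON Schema validator will catch more:

```python
def check_tool(tool: dict) -> list[str]:
    """Return a list of problems with one tool definition from a tools/list result."""
    problems = []
    if not tool.get("name"):
        problems.append("missing name")
    schema = tool.get("inputSchema")
    if not isinstance(schema, dict):
        problems.append("missing inputSchema")
    else:
        if schema.get("type") != "object":
            problems.append("inputSchema type must be 'object'")
        if not isinstance(schema.get("properties", {}), dict):
            problems.append("properties must be an object")
    return problems
```

Run it over every entry in the tools list; any non-empty result is a tool that will silently vanish from some client's menu.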
The MCP ecosystem in late 2025
- Clients: Claude Desktop, Cursor, Zed, Windsurf, Continue.dev, Cline, and a growing list of IDE/editor integrations.
- Hosted servers: GitHub, Linear, Notion, PostgreSQL, Slack, and dozens of community servers for everything from Figma to Kubernetes.
- Registries: modelcontextprotocol.io lists official servers; community registries index hundreds more.
The list grows weekly. If you’re integrating with any developer tool, there’s a good chance either the vendor or the community has already built the MCP server.
Should you write your own?
Yes, if any of these apply:
- You have an internal tool or dataset only your team can access.
- An existing third-party server is close but missing the one operation you need.
- You want the LLM to automate an internal workflow that currently lives in a shell script.
A custom MCP server is typically 50–200 lines of code, and the decorator-based SDKs make the boilerplate trivial.
Getting the config right
The piece most people get wrong isn’t the server code — it’s the JSON config that wires the server into the client. Slightly different shapes for Claude Desktop vs. Cursor, absolute paths required, environment variables formatted just so.
Our MCP Config Generator takes a one-line description (“an MCP server that reads my Postgres db”) and emits the full config block for your client. Not a replacement for the real install, but it skips the “why is this JSON wrong” step. Free, no sign-up.
Further reading
- The official spec: modelcontextprotocol.io
- The Python SDK: github.com/modelcontextprotocol/python-sdk
- The TypeScript SDK: github.com/modelcontextprotocol/typescript-sdk
MCP is the rare protocol that’s simple enough to learn in an afternoon and open enough to matter. If you’re shipping LLM-integrated tools in 2026, expect it to be infrastructure.