Model Context Protocol architecture, transports, tools, resources, prompts, and host-client-server integration patterns.

| Role | Description | Examples |
|---|---|---|
| Host | The LLM application that the user interacts with | Claude Desktop, IDE, custom Next.js app, Cursor |
| Client | Runs inside the host, maintains 1:1 connection with a server | One client per MCP server connection |
| Server | Provides capabilities (tools, resources, prompts) to the client | File system server, GitHub server, DB server |

| Step | Direction | Action |
|---|---|---|
| 1 | Host ↔ Client | Host launches client for each MCP server |
| 2 | Client → Server | Initialize handshake (capabilities exchange) |
| 3 | Server → Client | Server announces available tools/resources/prompts |
| 4 | Client → Server | Client requests tool execution or data |
| 5 | Server → Client | Server returns results |
| 6 | Client → Host | Results forwarded to LLM for reasoning |

| Transport | Protocol | Direction | Use Case | Status |
|---|---|---|---|---|
| stdio | stdin/stdout | Bidirectional (pipe) | CLI tools, local dev, Claude Desktop config | Stable |
| SSE | HTTP + Server-Sent Events | Server push, client HTTP POST | Web-based, legacy remote servers | Deprecated (2025) |
| Streamable HTTP | HTTP POST + optional SSE upgrade | Bidirectional via HTTP | Production deployments, remote servers | Recommended ★ |

| Feature | stdio | SSE | Streamable HTTP |
|---|---|---|---|
| Connection | Process pipe | Long-lived HTTP GET + POST | Stateless HTTP POST per message |
| Session management | Process lifetime | Server-managed session | Optional session via SSE upgrade |
| Stateless? | No (persistent process) | No (session-based) | Yes (per-request, unless upgraded) |
| Streaming | Natural (stdout) | Native SSE streaming | Optional SSE upgrade for streaming |
| Firewall friendly | Local only | Requires open port | Standard HTTPS (443) |
| Restart resilience | Auto (respawn process) | Reconnect logic needed | Inherent (stateless) |
| Recommended for | Local dev & CLI | Legacy only | All production deployments |
// Claude Desktop — stdio transport (local MCP server)
{
"mcpServers": {
"filesystem": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
},
"github": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
"env": { "GITHUB_TOKEN": "ghp_xxx" }
}
}
}

// Tool definition — inputSchema is standard JSON Schema
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name (e.g. 'San Francisco')"
},
"units": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"default": "celsius"
}
},
"required": ["city"]
}
}

| Aspect | Tools | Resources |
|---|---|---|
| Initiated by | LLM decides to invoke | Client explicitly requests |
| Side effects | Can modify state (write DB, call API) | Read-only by convention |
| Parameters | Arbitrary JSON input | URI + optional query params |
| Caching | Not typically cached | Can be cached (keyed by URI) |
| Subscriptions | No | Yes (real-time updates) |
| Best for | Actions, computations, mutations | Static data, files, records |

| Step | Message | Direction | Details |
|---|---|---|---|
| 1 | initialize (request) | Client → Server | Client sends capabilities + client info |
| 2 | initialize (response) | Server → Client | Server responds with capabilities + server info |
| 3 | notifications/initialized | Client → Server | Client confirms init is complete |

| Capability | Who Declares | Purpose |
|---|---|---|
| tools | Server | Server can list/call tools |
| resources | Server | Server can list/read resources + subscriptions |
| prompts | Server | Server can list/get prompt templates |
| logging | Server | Server can send log messages to client |
| sampling | Client | Client can provide LLM completions to server |
| elicitation | Client | Server can request structured input from the user via the client (2025) |
// Step 1: Client sends initialize
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-03-26",
"capabilities": {
"sampling": {},
"roots": { "listChanged": true }
},
"clientInfo": {
"name": "my-claude-app",
"version": "1.0.0"
}
}
}
// Step 2: Server responds
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-03-26",
"capabilities": {
"tools": { "listChanged": true },
"resources": { "subscribe": true },
"prompts": { "listChanged": true },
"logging": {}
},
"serverInfo": {
"name": "my-mcp-server",
"version": "0.1.0"
}
}
}
// Step 3: Client sends initialized notification (no id = no response expected)
{
"jsonrpc": "2.0",
"method": "notifications/initialized"
}

The handshake ends with the notifications/initialized message. Notifications (no id field) never expect a response — this is a core JSON-RPC 2.0 rule.

| Type | Has "id"? | Expects Response? | Use Case |
|---|---|---|---|
| Request | Yes (number or string) | Yes (result or error) | Tool calls, list_tools, initialize |
| Notification | No | No | initialized, progress, log messages |
| Response (result) | Yes (matches request id) | N/A | Successful result from server |
| Response (error) | Yes (matches request id) | N/A | Error with code and message |

| Code | Name | Meaning |
|---|---|---|
| -32700 | Parse error | Invalid JSON was received |
| -32600 | Invalid Request | JSON sent is not a valid request object |
| -32601 | Method not found | The method does not exist |
| -32602 | Invalid params | Invalid method parameters |
| -32603 | Internal error | Internal JSON-RPC error |
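Each code in the table maps onto the error member of a response object. A small sketch that builds a compliant error payload (the error_response helper is illustrative, not part of any SDK):

```python
# Build a JSON-RPC 2.0 error response using the standard error codes.
import json

JSONRPC_ERRORS = {
    -32700: "Parse error",
    -32600: "Invalid Request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
}

def error_response(request_id, code: int, data=None) -> str:
    """Serialize an error response echoing the failed request's id."""
    err = {"code": code, "message": JSONRPC_ERRORS.get(code, "Server error")}
    if data is not None:
        err["data"] = data  # optional extra diagnostic info
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "error": err})

print(error_response(2, -32601))
# → {"jsonrpc": "2.0", "id": 2, "error": {"code": -32601, "message": "Method not found"}}
```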
// ── Request (expects response) ──
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
// ── Notification (no response expected) ──
{ "jsonrpc": "2.0", "method": "notifications/initialized" }
// ── Success Response ──
{
"jsonrpc": "2.0", "id": 1,
"result": {
"tools": [
{ "name": "get_weather", "description": "Get weather", "inputSchema": { ... } }
]
}
}
// ── Error Response ──
{
"jsonrpc": "2.0", "id": 2,
"error": { "code": -32601, "message": "Method not found" }
}

Requests carry an id and expect a response. Notifications have no id and are fire-and-forget. Use notifications for progress updates, logging, and lifecycle events.

# ── Basic MCP Server (stdio transport) ──
from mcp.server import Server
from mcp.server.stdio import stdio_server
import mcp.types as types
import asyncio
# Create server instance
server = Server("weather-server")
@server.list_tools()
async def list_tools() -> list[types.Tool]:
"""Declare available tools."""
return [
types.Tool(
name="get_weather",
description="Get current weather for a city",
inputSchema={
"type": "object",
"properties": {
"city": {"type": "string", "description": "City name"},
"units": {"type": "string", "enum": ["celsius", "fahrenheit"],
"default": "celsius"}
},
"required": ["city"]
}
),
types.Tool(
name="get_forecast",
description="Get 5-day weather forecast",
inputSchema={
"type": "object",
"properties": {
"city": {"type": "string", "description": "City name"}
},
"required": ["city"]
}
)
]
@server.call_tool()
async def call_tool(
name: str,
arguments: dict
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
"""Handle tool invocations."""
if name == "get_weather":
city = arguments["city"]
units = arguments.get("units", "celsius")
# Your actual API call here
weather_data = fetch_weather(city, units)
return [types.TextContent(type="text", text=str(weather_data))]
elif name == "get_forecast":
city = arguments["city"]
forecast = fetch_forecast(city)
return [types.TextContent(type="text", text=str(forecast))]
raise ValueError(f"Unknown tool: {name}")
async def main():
async with stdio_server() as (read_stream, write_stream):
await server.run(read_stream, write_stream, server.create_initialization_options())
if __name__ == "__main__":
    asyncio.run(main())

# ── MCP Server with Streamable HTTP Transport ──
from mcp.server import Server
# NOTE: the server-side Streamable HTTP helper name varies across mcp SDK
# versions (e.g. StreamableHTTPSessionManager); check your installed version.
from mcp.server.streamable_http import streamablehttp_server
import mcp.types as types
server = Server("my-http-server")
@server.list_tools()
async def list_tools() -> list[types.Tool]:
return [
types.Tool(
name="search",
description="Search the knowledge base",
inputSchema={
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query"}
},
"required": ["query"]
}
)
]
@server.call_tool()
async def call_tool(name: str, arguments: dict):
if name == "search":
results = search_knowledge_base(arguments["query"])
return [types.TextContent(type="text", text=results)]
raise ValueError(f"Unknown tool: {name}")
# Run with Streamable HTTP (production-ready)
import uvicorn
from starlette.applications import Starlette
from starlette.routing import Mount
app = Starlette(
routes=[Mount("/mcp", app=streamablehttp_server(server))]
)
uvicorn.run(app, host="0.0.0.0", port=8000)

# ── MCP Client (stdio transport) ──
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import asyncio
async def main():
# Define server parameters
server_params = StdioServerParameters(
command="python",
args=["weather_server.py"],
env={"API_KEY": "your-api-key"}
)
# Connect and interact
async with stdio_client(server_params) as (read_stream, write_stream):
async with ClientSession(read_stream, write_stream) as session:
# Step 1: Initialize
await session.initialize()
print("✓ Connected to MCP server")
# Step 2: List available tools
tools_result = await session.list_tools()
for tool in tools_result.tools:
print(f" Tool: {tool.name} — {tool.description}")
# Step 3: Call a tool
result = await session.call_tool(
"get_weather",
arguments={"city": "San Francisco", "units": "fahrenheit"}
)
print(f" Result: {result.content[0].text}")
asyncio.run(main())

# ── MCP Client (Streamable HTTP transport) ──
from mcp.client.streamable_http import streamablehttp_client
from mcp import ClientSession
import asyncio
async def main():
# Connect via HTTP
async with streamablehttp_client("http://localhost:8000/mcp") as (
read_stream, write_stream, _
):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
# List tools
tools = await session.list_tools()
print(f"Available tools: {[t.name for t in tools.tools]}")
# Call tool
result = await session.call_tool(
"search",
arguments={"query": "machine learning"}
)
for content in result.content:
print(content.text)
# List resources
resources = await session.list_resources()
for res in resources.resources:
print(f"Resource: {res.uri} — {res.name}")
asyncio.run(main())

Always call session.initialize() first. No other methods will work until the initialization handshake completes. Use async with context managers to ensure proper cleanup of streams and sessions.

| Type | Class | Use Case |
|---|---|---|
| Text | types.TextContent | Plain text, JSON strings, error messages |
| Image | types.ImageContent | Base64-encoded images (PNG, JPEG) |
| Resource | types.EmbeddedResource | Embedded MCP resource references |
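Each content class serializes to a simple wire shape. A sketch of the three shapes as plain dicts (the PNG bytes below are a stand-in, not a real image):

```python
# Wire-format shapes that types.TextContent, types.ImageContent, and
# types.EmbeddedResource serialize to. Payloads here are placeholders.
import base64

text_content = {"type": "text", "text": "Hello from a tool"}

png_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real image data
image_content = {
    "type": "image",
    "data": base64.b64encode(png_bytes).decode("ascii"),  # base64, not raw bytes
    "mimeType": "image/png",
}

embedded_resource = {
    "type": "resource",
    "resource": {
        "uri": "file:///app/logs/app.log",
        "mimeType": "text/plain",
        "text": "log line 1",
    },
}
```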

| Annotation | Purpose | Effect |
|---|---|---|
| readOnlyHint | Tool does not modify any state | Client can show read-only indicator |
| destructiveHint | Tool permanently deletes/modifies data | Client can require confirmation |
| idempotentHint | Calling multiple times has same effect | Client can auto-retry |
| openWorldHint | Tool accesses external network/services | Client warns about external access |
# ── Advanced Tool with Annotations, Errors, and Progress ──
from mcp.server import Server
import mcp.types as types
import json
server = Server("advanced-server")
@server.list_tools()
async def list_tools():
return [
types.Tool(
name="delete_record",
description="Delete a record from the database (irreversible)",
inputSchema={
"type": "object",
"properties": {
"table": {"type": "string"},
"id": {"type": "integer"}
},
"required": ["table", "id"]
},
annotations=types.ToolAnnotations(
destructiveHint=True, # Warn: destructive operation
idempotentHint=False, # Not idempotent
readOnlyHint=False, # Modifies state
openWorldHint=False # No external access
)
),
types.Tool(
name="search_web",
description="Search the web for information",
inputSchema={
"type": "object",
"properties": {
"query": {"type": "string"}
},
"required": ["query"]
},
annotations=types.ToolAnnotations(
readOnlyHint=True,
openWorldHint=True # Accesses external network
)
)
]
@server.call_tool()
async def call_tool(name: str, arguments: dict):
if name == "delete_record":
# Validate before destructive action
table = arguments["table"]
record_id = arguments["id"]
# Return structured error if validation fails
if not table.isidentifier():
return [types.TextContent(
type="text",
text=json.dumps({"error": "Invalid table name", "code": "VALIDATION_ERROR"})
)]
# Perform deletion
        # table name was validated above; bind the id as a parameter rather
        # than interpolating it into the SQL string
        await db.execute(f"DELETE FROM {table} WHERE id = $1", record_id)
return [types.TextContent(
type="text",
text=json.dumps({"success": True, "deleted_id": record_id})
)]
    raise ValueError(f"Unknown tool: {name}")

| Method | Description | Returns |
|---|---|---|
| resources/list | List all available resources | List of Resource objects with URIs |
| resources/read | Read a specific resource by URI | Text or blob content |
| resources/templates/list | List URI templates with params | Template objects |
| resources/read (with expanded template URI) | Fill in a template's parameters client-side, then read the resulting URI | Resource contents |
| resources/subscribe | Subscribe to resource changes | Confirmation |
| resources/unsubscribe | Unsubscribe from updates | Confirmation |

| Scheme | Example | Use Case |
|---|---|---|
| file:// | file:///home/user/doc.md | Local files |
| db:// | db://mydb/users/123 | Database records |
| http(s):// | https://api.example.com/v1 | Remote APIs |
| custom:// | custom://internal/report | Application-specific data |
# ── Server with Resources & Subscriptions ──
from mcp.server import Server
import mcp.types as types
server = Server("resource-server")
@server.list_resources()
async def list_resources():
return [
types.Resource(
uri="file:///app/logs/app.log",
name="Application Logs",
description="Recent application log entries",
mimeType="text/plain"
),
types.Resource(
uri="file:///app/config/settings.json",
name="Settings",
description="Application configuration",
mimeType="application/json"
),
]
@server.read_resource()
async def read_resource(uri: str):
if uri == "file:///app/logs/app.log":
content = read_log_file()
return [types.TextResourceContents(
uri=uri,
mimeType="text/plain",
text=content
)]
elif uri == "file:///app/config/settings.json":
config = load_config()
return [types.TextResourceContents(
uri=uri,
mimeType="application/json",
text=json.dumps(config, indent=2)
)]
raise ValueError(f"Unknown resource: {uri}")
# ── Client: Reading Resources ──
# resources = await session.list_resources()
# content = await session.read_resource("file:///app/config/settings.json")

| Method | Description |
|---|---|
| prompts/list | List all available prompt templates |
| prompts/get | Get a specific prompt with arguments filled in |

| Field | Type | Description |
|---|---|---|
| name | string | Unique identifier for the prompt template |
| description | string | Human-readable description |
| arguments | array | List of expected arguments with names & descriptions |
| messages | array | Resulting conversation messages (role + content) |
# ── Server with Prompt Templates ──
from mcp.server import Server
import mcp.types as types
server = Server("prompt-server")
@server.list_prompts()
async def list_prompts():
return [
types.Prompt(
name="code_review",
description="Generate a code review for the given code",
arguments=[
types.PromptArgument(
name="code",
description="The code to review",
required=True
),
types.PromptArgument(
name="language",
description="Programming language",
required=False
)
]
),
types.Prompt(
name="debug_error",
description="Help debug an error message",
arguments=[
types.PromptArgument(
name="error_message",
description="The error message to debug",
required=True
),
types.PromptArgument(
name="stack_trace",
description="Optional stack trace",
required=False
)
]
)
]
@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None):
args = arguments or {}
if name == "code_review":
code = args.get("code", "")
lang = args.get("language", "unknown")
return types.GetPromptResult(
messages=[
types.PromptMessage(
role="user",
content=types.TextContent(
type="text",
text=f"Please review the following {lang} code:\n\n"
f"--- CODE START ---\n{code}\n--- CODE END ---\n\n"
f"Focus on: bugs, performance, style, and best practices."
)
)
]
)
raise ValueError(f"Unknown prompt: {name}")
# ── Client: Get Prompt ──
# result = await session.get_prompt("code_review", {"code": "def add(a,b): return a+b", "language": "python"})
# for msg in result.messages:
#     print(f"[{msg.role}]: {msg.content.text}")

| Feature | Type | Direction | Purpose |
|---|---|---|---|
| Logging | Notification | Server → Client | Server sends debug/info/warning/error messages |
| Progress | Notification | Server → Client | Long-running operation progress (progressToken) |
| Cancellation | Notification | Either direction | Cancel an in-flight request (notifications/cancelled) |
# ── Server: Sampling & Logging ──
from mcp.server import Server
import mcp.types as types
server = Server("sampling-server")
# Progress notifications + sampling from inside a tool handler.
# NOTE: exact request-context/session helper names vary across mcp SDK
# versions; treat this as a sketch of the flow, not exact API.
@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "analyze_data":
        ctx = server.request_context
        # The client's progress token (if any) arrives in request metadata
        token = ctx.meta.progressToken if ctx.meta else None

        async def report(step: int, message: str):
            if token is not None:
                await ctx.session.send_progress_notification(
                    progress_token=token, progress=step, total=3, message=message
                )

        await report(1, "Loading data...")
        data = load_data()
        await report(2, "Analyzing...")
        try:
            # Ask the client's LLM for an analysis (sampling)
            result = await ctx.session.create_message(
                messages=[
                    types.SamplingMessage(
                        role="user",
                        content=types.TextContent(
                            type="text",
                            text=f"Analyze this data and provide insights:\n{data}"
                        )
                    )
                ],
                max_tokens=500
            )
            analysis = result.content.text
        except Exception:
            # Client did not declare the sampling capability
            analysis = "Sampling not available, using basic analysis."
        await report(3, "Done!")
        return [types.TextContent(type="text", text=analysis)]
    raise ValueError(f"Unknown tool: {name}")

| Feature | Description | Spec Version |
|---|---|---|
| OAuth 2.1 | Full OAuth 2.1 support for remote server authentication | 2025-03-26 |
| Scoped permissions | Granular permission scoping per tool/resource | 2025-03-26 |
| Token-based auth | Bearer tokens in Streamable HTTP headers | 2025-03-26 |
| Sandboxed execution | Restricted code execution environments | 2025-06-18 |
| Elicitation | Server requests structured input from user (not LLM) | 2025-06-18 |
| Structured output | Tools declare output schema for validation | 2025-03-26 |

| Step | Action | Details |
|---|---|---|
| 1 | Client discovers OAuth metadata | GET /.well-known/oauth-authorization-server |
| 2 | User redirected to authorization server | Browser-based consent screen |
| 3 | User grants permissions | Scopes map to MCP tool/resource access |
| 4 | Auth server returns code | Standard OAuth authorization code |
| 5 | Client exchanges code for tokens | POST to token endpoint |
| 6 | Client includes token in requests | Authorization: Bearer <token> header |
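Steps 5 and 6 above can be sketched with stdlib helpers. The helper names, endpoint, and client_id here are illustrative, not from any SDK:

```python
# Sketch of the OAuth 2.1 code-for-token exchange body (step 5) and the
# Authorization header the client attaches to MCP requests (step 6).
from urllib.parse import urlencode

def token_request_body(code: str, client_id: str, redirect_uri: str,
                       code_verifier: str) -> str:
    """Form-encoded body POSTed to the token endpoint."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "code_verifier": code_verifier,  # PKCE is mandatory in OAuth 2.1
    })

def auth_header(access_token: str) -> dict:
    """Header added to every subsequent MCP HTTP request."""
    return {"Authorization": f"Bearer {access_token}"}

print(auth_header("abc123"))  # {'Authorization': 'Bearer abc123'}
```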

| Practice | Why |
|---|---|
| Use Streamable HTTP over HTTPS | Encrypts all MCP traffic in transit |
| Validate all tool inputs | Prevent injection attacks |
| Use tool annotations | readOnlyHint, destructiveHint help clients enforce safety |
| Implement rate limiting | Prevent abuse and DoS |
| Scope OAuth tokens | Minimum permissions per use case |
| Sandbox code execution | Isolate untrusted code from host system |
| Audit logging | Track all tool invocations for compliance |
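Rate limiting from the table above can be as small as a token bucket wrapped around call_tool. A sketch with arbitrary rate/capacity values:

```python
# Minimal token-bucket rate limiter you could consult at the top of a
# call_tool handler. Rate and capacity here are arbitrary examples.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; False means 'rate limited'."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
print(all(bucket.allow() for _ in range(10)))  # True: burst within capacity
print(bucket.allow())                          # False: bucket drained
```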

| Transport | Security Model | Considerations |
|---|---|---|
| stdio | Local process (trusted) | Process runs as local user, no network exposure |
| SSE | HTTP-based (can use TLS) | Must enforce HTTPS in production |
| Streamable HTTP | HTTPS recommended | Supports OAuth 2.1 bearer tokens natively |
{
"mcpServers": {
"filesystem": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
"env": {}
},
"github": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
"env": { "GITHUB_TOKEN": "ghp_xxx" }
},
"postgres": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://user:pass@localhost/db"],
"env": {}
},
"puppeteer": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
"env": {}
},
"brave-search": {
"command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": { "BRAVE_API_KEY": "bsk_xxx" }
},
"custom-python": {
"command": "python",
"args": ["/absolute/path/to/server.py"],
"env": { "API_KEY": "xxx", "DEBUG": "true" }
}
}
}

| Host | Transport | Notes |
|---|---|---|
| Claude Desktop | stdio | Config file: ~/Library/Application Support/Claude/claude_desktop_config.json |
| Cursor | stdio + HTTP | Built-in MCP support in settings |
| VS Code (Copilot) | stdio | Via GitHub Copilot extension MCP support |
| Windsurf | stdio | Built-in MCP server configuration |
| Zed | stdio | MCP support in settings.json |
| Continue.dev | stdio | Open-source AI code assistant |
| Custom app | stdio or HTTP | Use the MCP client SDKs |

| Server | Package | Provides |
|---|---|---|
| Filesystem | @modelcontextprotocol/server-filesystem | Read/write local files |
| GitHub | @modelcontextprotocol/server-github | Repos, PRs, issues, code search |
| PostgreSQL | @modelcontextprotocol/server-postgres | Query PostgreSQL databases |
| Puppeteer | @modelcontextprotocol/server-puppeteer | Browser automation |
| Brave Search | @modelcontextprotocol/server-brave-search | Web search via Brave API |
| Slack | @modelcontextprotocol/server-slack | Read/write Slack messages |
| Google Drive | @modelcontextprotocol/server-gdrive | Access Google Drive files |
| Memory | @modelcontextprotocol/server-memory | Persistent knowledge graph |
| Sentry | mcp-server-sentry (PyPI) | Error tracking & diagnostics |
| Sequential Thinking | @modelcontextprotocol/server-sequential-thinking | Dynamic reasoning chains |

| OS | Claude Desktop Config Path |
|---|---|
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
| Linux | ~/.config/Claude/claude_desktop_config.json |
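A quick way to catch config-file JSON syntax errors before restarting the host is Python's stdlib json.tool. The snippet validates a sample config written to /tmp; in practice, point it at your platform's path from the table above:

```shell
# Sanity-check an MCP server config with python3 -m json.tool.
# A sample config is written inline here for demonstration.
cat > /tmp/sample_mcp_config.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
EOF
# Exits non-zero (and prints the error position) on invalid JSON
python3 -m json.tool /tmp/sample_mcp_config.json > /dev/null && echo "config OK"
```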
# ── Production MCP Server with Error Handling ──
from mcp.server import Server
from mcp.server.streamable_http import streamablehttp_server
import mcp.types as types
import asyncio
import json
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp-server")
server = Server("production-server")
# Graceful error handling for all tools
@server.call_tool()
async def call_tool(name: str, arguments: dict):
try:
if name == "query_database":
# Validate inputs
query = arguments.get("query", "")
if not query.strip():
return [types.TextContent(
type="text",
text='{"error": "Query cannot be empty"}'
)]
# Execute with timeout
result = await asyncio.wait_for(
execute_query(query), timeout=30.0
)
return [types.TextContent(type="text", text=json.dumps(result))]
return [types.TextContent(
type="text",
text=f'{{"error": "Unknown tool: {name}"}}'
)]
except asyncio.TimeoutError:
logger.warning(f"Tool {name} timed out")
return [types.TextContent(
type="text",
text='{"error": "Request timed out after 30s"}'
)]
except Exception as e:
logger.error(f"Tool {name} failed: {e}")
return [types.TextContent(
type="text",
text=f'{{"error": "Internal server error"}}'
        )]

# ── Dockerfile for MCP Server ──
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py .
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000"]

| Feature | MCP | REST API |
|---|---|---|
| Purpose | LLM-to-tool communication | General-purpose web services |
| Protocol | JSON-RPC 2.0 | HTTP (GET, POST, PUT, DELETE) |
| Schema | JSON Schema (tools) | OpenAPI / Swagger |
| Discovery | Dynamic (list_tools, list_resources) | Static (documentation / OpenAPI spec) |
| Streaming | Native (SSE upgrade) | Separate mechanism (SSE, WebSockets) |
| Caching | Built-in resource caching | Application-managed (ETags, Cache-Control) |
| Subscriptions | Native (resource subscriptions) | WebHooks / WebSockets |
| Auth | OAuth 2.1 (built-in) | Varies (API keys, OAuth, JWT) |
| Bi-directional | Yes (sampling, logging, progress) | No (request-response only) |
| Client types | LLM hosts only | Any HTTP client |
| Versioning | Protocol version negotiation | URL / header versioning |
| State management | Session-based | Stateless (REST) or session-based |
| Primary consumer | AI agents (LLMs) | Human developers |
| Transport | stdio, SSE, Streamable HTTP | HTTP/HTTPS |

| Practice | Details |
|---|---|
| Use Streamable HTTP in production | Stateless, firewall-friendly, supports OAuth |
| Use structured tool schemas | JSON Schema with descriptions for every parameter |
| Validate all inputs | Never trust LLM-provided arguments blindly |
| Implement pagination | For resources that can return large lists |
| Use tool annotations | readOnlyHint, destructiveHint help clients make safety decisions |
| Handle timeouts | Set reasonable timeouts for external API calls |
| Return structured errors | JSON error objects with error codes and human messages |
| Log important events | Use logging notifications for debugging and audit trails |
| Use sampling wisely | For complex multi-step reasoning, not simple lookups |
| Version your servers | Include version in serverInfo for compatibility tracking |
| Keep tools focused | One tool = one clear action (single responsibility) |
| Write clear descriptions | LLMs use tool descriptions to decide when to invoke them |

| Anti-pattern | Why it is bad | Fix |
|---|---|---|
| God tool (one tool does everything) | LLM cannot reason about when to use it | Split into focused single-purpose tools |
| No input validation | Injection attacks, crashes | Validate all arguments with JSON Schema + runtime checks |
| Missing tool descriptions | LLM does not know when to invoke | Write clear, specific descriptions |
| Returning raw exceptions | Exposes internals to LLM | Catch errors and return structured JSON errors |
| Blocking I/O in async handlers | Blocks the event loop | Use asyncio.to_thread() or async libraries |
| No timeouts on external calls | Server hangs indefinitely | Always wrap with asyncio.wait_for() |
| Exposing stdio to network | Security vulnerability | Use Streamable HTTP + TLS for remote access |
| Ignoring capabilities negotiation | May use features client does not support | Check client capabilities before using optional features |
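The blocking-I/O and missing-timeout rows above share one fix: off-load the blocking call and bound the wait. A sketch where blocking_lookup stands in for a synchronous driver call:

```python
# Fix for two anti-patterns: blocking I/O in an async handler and a
# missing timeout on an external call.
import asyncio
import time

def blocking_lookup(query: str) -> str:
    time.sleep(0.05)  # simulates a synchronous, blocking driver call
    return f"results for {query!r}"

async def call_tool_safely(query: str) -> str:
    # asyncio.to_thread keeps the event loop responsive;
    # asyncio.wait_for bounds the total wait.
    try:
        return await asyncio.wait_for(
            asyncio.to_thread(blocking_lookup, query), timeout=5.0
        )
    except asyncio.TimeoutError:
        return '{"error": "lookup timed out"}'

print(asyncio.run(call_tool_safely("mcp")))  # results for 'mcp'
```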

| Issue | Likely Cause | Solution |
|---|---|---|
| Server not discovered | Config path wrong or JSON syntax error | Validate config JSON, check file path |
| "Method not found" error | Server does not implement the requested method | Check server registration of handlers |
| Initialize handshake fails | Protocol version mismatch | Ensure both sides use same protocolVersion |
| Tools not showing up | list_tools handler not registered or returns wrong type | Verify @server.list_tools() decorator |
| Connection drops | Process crashed or timeout | Check server logs, add error handling |
| Claude Desktop shows no tools | Config file not loaded | Restart Claude Desktop, check config path |
| Timeout errors | External API too slow | Add asyncio.wait_for() with reasonable timeout |
| Permission denied (stdio) | Script not executable | chmod +x server.py, check shebang line |

| Technique | How To |
|---|---|
| Enable MCP inspector | npx @modelcontextprotocol/inspector (interactive debugger) |
| Log all messages | Set logging to DEBUG level on both client and server |
| Test with curl (HTTP) | curl -X POST http://localhost:8000/mcp with JSON-RPC body |
| Validate JSON Schema | Use jsonschema library to test tool input schemas |
| Monitor with MCP Inspector | Real-time message viewer, tool tester, resource browser |
| Check protocol version | Ensure both client and server agree on protocolVersion |
| Trace SDK sessions | Enable DEBUG logging on the mcp.* loggers to trace session messages |
# ── Debug Utilities ──
# 1. Enable debug logging
import logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s [%(levelname)s] %(name)s: %(message)s'
)
# 2. Quick test: Run server manually and test with raw JSON-RPC
# Start server: python server.py
# Send via stdin (for stdio transport):
# {"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}
# {"jsonrpc":"2.0","method":"notifications/initialized"}
# {"jsonrpc":"2.0","id":2,"method":"tools/list"}
# 3. MCP Inspector (interactive debugging tool)
# $ npx @modelcontextprotocol/inspector
# Opens web UI at http://localhost:6274
# - Inspect all messages in real-time
# - Test tool calls manually
# - Browse resources and prompts
# - View capabilities exchange

Run npx @modelcontextprotocol/inspector for interactive debugging. It provides a web UI where you can see every JSON-RPC message exchanged, test tool calls, browse resources, and verify your server is working correctly before integrating with Claude Desktop or IDEs.