Model Context Protocol Python SDK
https://github.com/modelcontextprotocol/python-sdk
The MCP Python SDK implements the Model Context Protocol, enabling applications to provide ...
Tokens: 56,934 · Snippets: 244 · Trust Score: 7.8 · Updated: 3 weeks ago
Context Summary (auto-generated)
# MCP Python SDK

The Model Context Protocol (MCP) Python SDK provides a complete implementation for building MCP servers and clients that enable standardized communication between LLM applications and context providers. MCP allows applications to expose data through Resources (read-only data endpoints), functionality through Tools (executable functions with side effects), and interaction patterns through Prompts (reusable templates for LLM interactions). The SDK supports multiple transport protocols, including stdio, SSE (Server-Sent Events), and Streamable HTTP, for flexible deployment options.

The SDK offers two main approaches for building servers: the high-level `MCPServer` class with a decorator-based API for rapid development, and a low-level `Server` class for fine-grained control over protocol handling. Client applications can connect to any MCP server using `ClientSession` with various transport adapters. The protocol handles connection management, message routing, capability negotiation, and lifecycle events automatically, making it easy to build robust integrations between LLMs and external data sources or services.

## MCPServer - High-Level Server Creation

The `MCPServer` class provides an ergonomic interface for creating MCP servers with decorator-based registration of tools, resources, and prompts. It handles all protocol details automatically and supports multiple transport protocols.

```python
from mcp.server.mcpserver import MCPServer

# Create an MCP server with optional configuration
mcp = MCPServer(
    "Demo Server",
    title="My Demo Server",
    description="A demonstration MCP server",
    instructions="Use tools to perform calculations",
    debug=True,
    log_level="INFO",
)

# Add a tool using the decorator
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together"""
    return a + b

# Add a dynamic resource with URI template
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"

# Add a static resource
@mcp.resource("config://settings")
def get_settings() -> str:
    """Get application settings"""
    return '{"theme": "dark", "language": "en"}'

# Add a prompt template
@mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Generate a greeting prompt"""
    styles = {
        "friendly": "Please write a warm, friendly greeting",
        "formal": "Please write a formal, professional greeting",
    }
    return f"{styles.get(style, styles['friendly'])} for someone named {name}."

# Run the server with streamable HTTP transport
if __name__ == "__main__":
    mcp.run(transport="streamable-http", host="127.0.0.1", port=8000)
    # Or use stdio transport: mcp.run(transport="stdio")
    # Or use SSE transport: mcp.run(transport="sse", host="127.0.0.1", port=8000)
```

## Tools with Structured Output

Tools can return structured data using Pydantic models, TypedDict, dataclasses, or plain dictionaries. The SDK automatically generates JSON schemas and validates outputs.

```python
from typing import TypedDict

from pydantic import BaseModel, Field

from mcp.server.mcpserver import MCPServer

mcp = MCPServer("Structured Output Example")

# Using Pydantic models for rich structured data
class WeatherData(BaseModel):
    """Weather information structure."""

    temperature: float = Field(description="Temperature in Celsius")
    humidity: float = Field(description="Humidity percentage")
    condition: str
    wind_speed: float

@mcp.tool()
def get_weather(city: str) -> WeatherData:
    """Get weather for a city - returns structured data."""
    return WeatherData(
        temperature=22.5,
        humidity=45.0,
        condition="sunny",
        wind_speed=5.2,
    )

# Using TypedDict for simpler structures
class LocationInfo(TypedDict):
    latitude: float
    longitude: float
    name: str

@mcp.tool()
def get_location(address: str) -> LocationInfo:
    """Get location coordinates"""
    return LocationInfo(latitude=51.5074, longitude=-0.1278, name="London, UK")

# Using dict for flexible schemas
@mcp.tool()
def get_statistics(data_type: str) -> dict[str, float]:
    """Get various statistics"""
    return {"mean": 42.5, "median": 40.0, "std_dev": 5.2}

# Primitive types are wrapped automatically
@mcp.tool()
def list_cities() -> list[str]:
    """Get a list of cities"""
    return ["London", "Paris", "Tokyo"]  # Returns: {"result": ["London", "Paris", "Tokyo"]}
```

## Context Object for Advanced Tool Features

The Context object provides access to MCP capabilities such as logging, progress reporting, resource reading, and session management within tool functions.

```python
from mcp.server.mcpserver import Context, MCPServer

mcp = MCPServer(name="Progress Example")

@mcp.tool()
async def long_running_task(task_name: str, ctx: Context, steps: int = 5) -> str:
    """Execute a task with progress updates and logging."""
    # Send log messages at different levels
    await ctx.info(f"Starting: {task_name}")
    await ctx.debug("Debug information")

    for i in range(steps):
        progress = (i + 1) / steps
        # Report progress to the client
        await ctx.report_progress(
            progress=progress,
            total=1.0,
            message=f"Step {i + 1}/{steps}",
        )
        await ctx.debug(f"Completed step {i + 1}")

    # Access request metadata
    request_id = ctx.request_id
    client_id = ctx.client_id

    # Read another resource
    # data = await ctx.read_resource("resource://data")

    return f"Task '{task_name}' completed (request: {request_id})"

@mcp.tool()
async def tool_with_logging(message: str, ctx: Context) -> str:
    """Demonstrate different log levels."""
    await ctx.debug(f"Debug: {message}")
    await ctx.info(f"Info: {message}")
    await ctx.warning(f"Warning: {message}")
    await ctx.error(f"Error: {message}")
    return f"Logged: {message}"
```

## Elicitation - Interactive User Input

Elicitation allows tools to request additional information from users during execution, supporting both form-based data collection and URL-based flows for sensitive operations.
```python
import uuid

from pydantic import BaseModel, Field

from mcp.server.mcpserver import Context, MCPServer
from mcp.shared.exceptions import UrlElicitationRequiredError
from mcp.types import ElicitRequestURLParams

mcp = MCPServer(name="Elicitation Example")

class BookingPreferences(BaseModel):
    """Schema for collecting user preferences."""

    checkAlternative: bool = Field(description="Would you like to check another date?")
    alternativeDate: str = Field(default="2024-12-26", description="Alternative date")

@mcp.tool()
async def book_table(date: str, time: str, party_size: int, ctx: Context) -> str:
    """Book a table with form-based elicitation for alternatives."""
    if date == "2024-12-25":
        # Date unavailable - ask user for alternative using form elicitation
        result = await ctx.elicit(
            message=f"No tables available for {party_size} on {date}. Try another date?",
            schema=BookingPreferences,
        )
        if result.action == "accept" and result.data:
            if result.data.checkAlternative:
                return f"Booked for {result.data.alternativeDate}"
            return "No booking made"
        return "Booking cancelled"
    return f"Booked for {date} at {time}"

@mcp.tool()
async def secure_payment(amount: float, ctx: Context) -> str:
    """Process payment using URL-based elicitation for security."""
    elicitation_id = str(uuid.uuid4())
    # Direct user to external URL for sensitive operation
    result = await ctx.elicit_url(
        message=f"Please confirm payment of ${amount:.2f}",
        url=f"https://payments.example.com/confirm?amount={amount}&id={elicitation_id}",
        elicitation_id=elicitation_id,
    )
    if result.action == "accept":
        return f"Payment of ${amount:.2f} initiated"
    elif result.action == "decline":
        return "Payment declined"
    return "Payment cancelled"
```

## Lifespan Management

Use the lifespan context manager to initialize resources on server startup and clean them up on shutdown, with type-safe context access in handlers.
```python
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass

from mcp.server.mcpserver import Context, MCPServer

class Database:
    """Mock database class."""

    @classmethod
    async def connect(cls) -> "Database":
        print("Database connected")
        return cls()

    async def disconnect(self) -> None:
        print("Database disconnected")

    def query(self, sql: str) -> str:
        return f"Result for: {sql}"

@dataclass
class AppContext:
    """Application context with typed dependencies."""

    db: Database

@asynccontextmanager
async def app_lifespan(server: MCPServer) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context."""
    db = await Database.connect()
    try:
        yield AppContext(db=db)
    finally:
        await db.disconnect()

# Pass lifespan to server
mcp = MCPServer("My App", lifespan=app_lifespan)

@mcp.tool()
def query_db(sql: str, ctx: Context[AppContext]) -> str:
    """Tool that uses initialized database from lifespan context."""
    db = ctx.request_context.lifespan_context.db
    return db.query(sql)
```

## LLM Sampling from Tools

Tools can request LLM completions through the session's sampling capability, enabling recursive LLM interactions.
```python
from mcp.server.mcpserver import Context, MCPServer
from mcp.types import SamplingMessage, TextContent

mcp = MCPServer(name="Sampling Example")

@mcp.tool()
async def generate_poem(topic: str, ctx: Context) -> str:
    """Generate a poem by requesting LLM sampling."""
    prompt = f"Write a short poem about {topic}"
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt),
            )
        ],
        max_tokens=100,
    )
    if result.content.type == "text":
        return result.content.text
    return str(result.content)

@mcp.tool()
async def summarize_text(text: str, ctx: Context) -> str:
    """Summarize text using LLM sampling."""
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=f"Summarize: {text}"),
            )
        ],
        max_tokens=50,
    )
    return result.content.text if result.content.type == "text" else str(result.content)
```

## ClientSession - Connecting to MCP Servers

The `ClientSession` class provides a high-level interface for connecting to and interacting with MCP servers using various transports.
```python
import asyncio

from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
from mcp.client.streamable_http import streamable_http_client

# Example 1: Connect via stdio transport
async def stdio_example():
    server_params = StdioServerParameters(
        command="python",
        args=["my_mcp_server.py"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")

            # Call a tool
            result = await session.call_tool("add", arguments={"a": 5, "b": 3})
            print(f"Tool result: {result.content[0].text}")

            # List and read resources
            resources = await session.list_resources()
            if resources.resources:
                content = await session.read_resource(resources.resources[0].uri)
                print(f"Resource content: {content.contents[0]}")

            # List and get prompts
            prompts = await session.list_prompts()
            if prompts.prompts:
                prompt = await session.get_prompt(
                    prompts.prompts[0].name, arguments={"name": "Alice"}
                )
                print(f"Prompt: {prompt.messages[0].content}")

# Example 2: Connect via Streamable HTTP transport
async def http_example():
    async with streamable_http_client("http://localhost:8000/mcp") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")

if __name__ == "__main__":
    asyncio.run(stdio_example())
```

## Low-Level Server API

For fine-grained control over the protocol, use the low-level `Server` class with explicit handler registration.
```python
import asyncio

from mcp import types
from mcp.server import Server, ServerRequestContext
import mcp.server.stdio

async def handle_list_tools(
    ctx: ServerRequestContext, params: types.PaginatedRequestParams | None
) -> types.ListToolsResult:
    """List available tools."""
    return types.ListToolsResult(
        tools=[
            types.Tool(
                name="calculate",
                description="Perform a calculation",
                input_schema={
                    "type": "object",
                    "properties": {
                        "expression": {"type": "string", "description": "Math expression"}
                    },
                    "required": ["expression"],
                },
            )
        ]
    )

async def handle_call_tool(
    ctx: ServerRequestContext, params: types.CallToolRequestParams
) -> types.CallToolResult:
    """Handle tool calls."""
    if params.name == "calculate":
        expression = params.arguments.get("expression", "0")
        try:
            result = eval(expression)  # Note: Use safe evaluation in production
            return types.CallToolResult(
                content=[types.TextContent(type="text", text=str(result))]
            )
        except Exception as e:
            return types.CallToolResult(
                content=[types.TextContent(type="text", text=f"Error: {e}")],
                is_error=True,
            )
    raise ValueError(f"Unknown tool: {params.name}")

async def handle_list_prompts(
    ctx: ServerRequestContext, params: types.PaginatedRequestParams | None
) -> types.ListPromptsResult:
    """List available prompts."""
    return types.ListPromptsResult(
        prompts=[
            types.Prompt(
                name="code-review",
                description="Review code for issues",
                arguments=[
                    types.PromptArgument(name="code", description="Code to review", required=True)
                ],
            )
        ]
    )

async def handle_get_prompt(
    ctx: ServerRequestContext, params: types.GetPromptRequestParams
) -> types.GetPromptResult:
    """Get a specific prompt."""
    if params.name == "code-review":
        code = (params.arguments or {}).get("code", "")
        return types.GetPromptResult(
            description="Code review prompt",
            messages=[
                types.PromptMessage(
                    role="user",
                    content=types.TextContent(
                        type="text", text=f"Please review this code:\n\n{code}"
                    ),
                )
            ],
        )
    raise ValueError(f"Unknown prompt: {params.name}")

# Create server with handler callbacks
server = Server(
    "example-server",
    on_list_tools=handle_list_tools,
    on_call_tool=handle_call_tool,
    on_list_prompts=handle_list_prompts,
    on_get_prompt=handle_get_prompt,
)

async def run():
    """Run the low-level server."""
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options(),
        )

if __name__ == "__main__":
    asyncio.run(run())
```

## OAuth Authentication

The SDK supports OAuth 2.1 for protecting MCP servers, implementing the resource server pattern with token verification.

```python
from pydantic import AnyHttpUrl

from mcp.server.auth.provider import AccessToken, TokenVerifier
from mcp.server.auth.settings import AuthSettings
from mcp.server.mcpserver import MCPServer

class MyTokenVerifier(TokenVerifier):
    """Custom token verifier implementation."""

    async def verify_token(self, token: str) -> AccessToken | None:
        # Implement your token validation logic here
        # Return AccessToken if valid, None if invalid
        if token == "valid-token":
            return AccessToken(
                token=token,
                client_id="client-123",
                scopes=["user", "read"],
                expires_at=None,
            )
        return None

# Create authenticated MCP server
mcp = MCPServer(
    "Protected Service",
    token_verifier=MyTokenVerifier(),
    auth=AuthSettings(
        issuer_url=AnyHttpUrl("https://auth.example.com"),
        resource_server_url=AnyHttpUrl("http://localhost:8000"),
        required_scopes=["user"],
    ),
)

@mcp.tool()
async def protected_operation(data: str) -> dict[str, str]:
    """This tool requires authentication."""
    return {"status": "success", "data": data}

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

## Mounting Multiple Servers with Starlette

Multiple MCP servers can be mounted in a single Starlette/FastAPI application for complex deployments.
```python
import contextlib

from starlette.applications import Starlette
from starlette.routing import Mount

from mcp.server.mcpserver import MCPServer

# Create multiple MCP servers
echo_mcp = MCPServer(name="EchoServer")
math_mcp = MCPServer(name="MathServer")

@echo_mcp.tool()
def echo(message: str) -> str:
    """Echo a message back"""
    return f"Echo: {message}"

@math_mcp.tool()
def add_numbers(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Combined lifespan for both servers
@contextlib.asynccontextmanager
async def lifespan(app: Starlette):
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(echo_mcp.session_manager.run())
        await stack.enter_async_context(math_mcp.session_manager.run())
        yield

# Create Starlette app with mounted servers
app = Starlette(
    routes=[
        Mount("/echo", echo_mcp.streamable_http_app()),
        Mount("/math", math_mcp.streamable_http_app()),
    ],
    lifespan=lifespan,
)

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
# Clients connect to: http://localhost:8000/echo/mcp and http://localhost:8000/math/mcp
```

## Completion Handler for Argument Suggestions

Register completion handlers to provide argument suggestions for prompts and resource templates.
```python
from mcp.server.mcpserver import MCPServer
from mcp.types import Completion, PromptReference, ResourceTemplateReference

mcp = MCPServer("Completion Example")

@mcp.completion()
async def handle_completion(ref, argument, context):
    """Provide completion suggestions based on the reference type."""
    if isinstance(ref, ResourceTemplateReference):
        if argument.name == "owner":
            # Suggest GitHub organization names
            return Completion(
                values=["modelcontextprotocol", "anthropics", "openai"],
                total=3,
                has_more=False,
            )
        if argument.name == "repo" and context:
            # Suggest repos based on previously selected owner
            owner = context.arguments.get("owner", "")
            if owner == "modelcontextprotocol":
                return Completion(values=["python-sdk", "typescript-sdk", "servers"])
    if isinstance(ref, PromptReference):
        if argument.name == "style":
            return Completion(values=["friendly", "formal", "casual"])
    return None

@mcp.resource("repo://{owner}/{repo}")
def get_repo_info(owner: str, repo: str) -> str:
    """Get repository information."""
    return f"Repository: {owner}/{repo}"

@mcp.prompt()
def greet(name: str, style: str = "friendly") -> str:
    """Generate greeting with style."""
    return f"Write a {style} greeting for {name}"
```

The MCP Python SDK enables building sophisticated context providers for LLM applications, from simple single-tool servers to complex multi-service architectures. The high-level `MCPServer` class with decorators suits most use cases, while the low-level `Server` class provides complete protocol control for advanced scenarios. Common integration patterns include: building CLI tools that expose file system or database access, creating API wrappers that translate REST services into MCP tools, implementing authentication flows for secure enterprise deployments, and composing multiple specialized servers behind a unified interface.
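As a minimal sketch of the database-access pattern (the table, columns, and helper name here are hypothetical, not part of the SDK), a tool body can be an ordinary function that a server registers with `@mcp.tool()`; using parameterized sqlite3 queries keeps user input out of the SQL text:

```python
import sqlite3

def query_users(db_path: str, min_age: int) -> list[dict[str, object]]:
    """Hypothetical tool body: fetch matching rows as plain dicts.

    Register it on a server with @mcp.tool(), as in the sections above.
    """
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become mapping-like objects
    try:
        # The ? placeholder binds min_age safely (no string interpolation)
        rows = conn.execute(
            "SELECT name, age FROM users WHERE age >= ? ORDER BY name",
            (min_age,),
        ).fetchall()
        return [dict(row) for row in rows]
    finally:
        conn.close()
```

Returning plain dicts lets the SDK's structured-output machinery serialize the result without a custom schema.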
For production deployments, the Streamable HTTP transport with `stateless_http=True` and `json_response=True` provides optimal scalability for multi-node environments. The protocol supports capability negotiation, allowing servers to declare which features they support and clients to adapt accordingly. Progress reporting, logging, and elicitation features enable rich interactive experiences, while the sampling API allows tools to leverage LLM capabilities recursively for complex multi-step operations.
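A minimal sketch of that production configuration, assuming `stateless_http` and `json_response` are accepted as `MCPServer` constructor options as described above (the server name, host, and port are illustrative):

```python
from mcp.server.mcpserver import MCPServer

# Stateless mode: no per-session state is retained on the server, so any
# node behind a load balancer can handle any request; json_response returns
# plain JSON bodies instead of SSE streams.
mcp = MCPServer(
    "Scalable Service",
    stateless_http=True,
    json_response=True,
)

if __name__ == "__main__":
    mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)
```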