Strands Agents SDK
https://github.com/strands-agents/docs
# Strands Agents SDK

Strands Agents is a simple yet powerful framework for building and running AI agents in Python and TypeScript. It provides a model-driven approach where agents autonomously reason, plan, and execute tools to accomplish complex tasks. The SDK supports multiple model providers (Amazon Bedrock, OpenAI, Anthropic, Google, Ollama, and more), custom tool creation, multi-agent orchestration patterns, session persistence, and comprehensive observability through OpenTelemetry integration.

The framework is designed around the agent loop concept: a recursive cycle where the model receives input, reasons about the task, selects and executes tools, and iterates until producing a final response. This enables agents to handle multi-step tasks requiring external information or real-world side effects. Strands supports both single agents and multi-agent systems through Graph (deterministic DAG execution) and Swarm (autonomous collaboration) patterns, with built-in session management for persisting conversations across restarts.

## Creating a Basic Agent

Create an agent with default settings using Amazon Bedrock as the model provider. The agent automatically handles the reasoning loop and tool execution.

```python
from strands import Agent

# Create a basic agent with the default Bedrock model (Claude 4 Sonnet)
agent = Agent()

# Invoke the agent with a prompt
result = agent("What is the capital of France?")

# Access the response
print(result.message)      # The agent's text response
print(result.stop_reason)  # "end_turn", "tool_use", etc.
print(result.metrics.get_summary())  # Performance metrics
```

```typescript
import { Agent } from '@strands-agents/sdk'

// Create a basic agent
const agent = new Agent()

// Invoke the agent
const result = await agent.invoke('What is the capital of France?')

// Access the response
console.log(result.message)     // The agent's text response
console.log(result.stopReason)  // Stop reason
```

## Creating Custom Tools with Decorators

Define custom tools using the `@tool` decorator in Python or the `tool()` function in TypeScript. Tools are automatically made available to the agent based on their descriptions.

```python
from strands import Agent, tool

@tool
def weather_forecast(city: str, days: int = 3) -> str:
    """Get weather forecast for a city.

    Args:
        city: The name of the city
        days: Number of days for the forecast
    """
    # Simulated weather API call
    return f"Weather forecast for {city}: Sunny, 72°F for the next {days} days"

@tool
def calculate_area(shape: str, radius: float = None, width: float = None, height: float = None) -> float:
    """Calculate area of a geometric shape.
    Args:
        shape: The shape type (circle or rectangle)
        radius: Radius for circle
        width: Width for rectangle
        height: Height for rectangle
    """
    if shape == "circle" and radius:
        return 3.14159 * radius ** 2
    elif shape == "rectangle" and width and height:
        return width * height
    return 0.0

# Create an agent with custom tools
agent = Agent(tools=[weather_forecast, calculate_area])

# The agent automatically selects the appropriate tools
result = agent("What's the weather in Seattle and calculate the area of a 5x10 rectangle")
print(result.message)
```

```typescript
import { Agent, tool } from '@strands-agents/sdk'
import { z } from 'zod'

// Create a tool with Zod schema validation
const weatherTool = tool({
  name: 'weather_forecast',
  description: 'Get weather forecast for a city',
  inputSchema: z.object({
    city: z.string().describe('The name of the city'),
    days: z.number().default(3).describe('Number of days for forecast'),
  }),
  handler: async ({ city, days }) => {
    return { forecast: `Weather for ${city}: Sunny, 72°F for ${days} days` }
  },
})

// Create a tool with JSON Schema (no runtime validation)
const calculatorTool = tool({
  name: 'calculator',
  description: 'Perform mathematical calculations',
  inputSchema: {
    type: 'object',
    properties: {
      expression: { type: 'string', description: 'Math expression to evaluate' },
    },
    required: ['expression'],
  },
  // eval is used here for brevity only; never eval untrusted input
  handler: async ({ expression }) => {
    return { result: eval(expression) }
  },
})

const agent = new Agent({ tools: [weatherTool, calculatorTool] })
const result = await agent.invoke('What is 25 * 4?')
```

## Using Different Model Providers

Configure agents to use different LLM providers, including Amazon Bedrock, OpenAI, Anthropic, and Google.
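Conceptually, every provider class adapts a vendor API to one common interface that the agent loop calls. A minimal plain-Python illustration of that shape (stub classes for this sketch only, not the SDK's actual interfaces):

```python
from typing import Protocol

class Model(Protocol):
    """The interface shape shared by provider adapters (illustrative only)."""
    def converse(self, prompt: str) -> str: ...

class EchoModel:
    """A stand-in provider used only for this illustration."""
    def __init__(self, model_id: str):
        self.model_id = model_id

    def converse(self, prompt: str) -> str:
        # A real adapter would call the vendor API here
        return f"[{self.model_id}] {prompt}"

def run_agent(model: Model, prompt: str) -> str:
    # The agent depends only on the shared interface,
    # so provider implementations are interchangeable
    return model.converse(prompt)

print(run_agent(EchoModel("gpt-4o"), "hello"))  # [gpt-4o] hello
```

Because the agent only sees the interface, swapping Bedrock for OpenAI or Gemini is a one-line configuration change, as the examples below show.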
```python
from strands import Agent
from strands.models import BedrockModel, AnthropicModel, OpenAIModel

# Amazon Bedrock (default)
bedrock_model = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-west-2",
    temperature=0.3,
)
bedrock_agent = Agent(model=bedrock_model)

# Anthropic direct API
anthropic_model = AnthropicModel(
    model_id="claude-sonnet-4-20250514",
    api_key="your-anthropic-api-key",  # Or set the ANTHROPIC_API_KEY env var
)
anthropic_agent = Agent(model=anthropic_model)

# OpenAI
openai_model = OpenAIModel(
    model_id="gpt-4o",
    api_key="your-openai-api-key",  # Or set the OPENAI_API_KEY env var
)
openai_agent = Agent(model=openai_model)

# Or simply pass a model ID string for Bedrock
simple_agent = Agent(model="anthropic.claude-sonnet-4-20250514-v1:0")
```

```typescript
import { Agent, BedrockModel, OpenAIModel, GoogleModel } from '@strands-agents/sdk'

// Amazon Bedrock
const bedrockModel = new BedrockModel({
  modelId: 'anthropic.claude-sonnet-4-20250514-v1:0',
  region: 'us-west-2',
})
const bedrockAgent = new Agent({ model: bedrockModel })

// OpenAI
const openaiModel = new OpenAIModel({
  modelId: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY,
})
const openaiAgent = new Agent({ model: openaiModel })

// Google Gemini
const googleModel = new GoogleModel({
  modelId: 'gemini-2.0-flash',
  apiKey: process.env.GOOGLE_API_KEY,
})
const googleAgent = new Agent({ model: googleModel })
```

## Streaming Responses with Async Iterators

Stream agent responses in real time for web applications using async iterators.
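The event-stream pattern can be illustrated with a plain async generator that yields dictionaries shaped like the events shown below (a sketch of the pattern only, not the SDK's stream implementation):

```python
import asyncio
from typing import AsyncIterator

async def fake_stream(prompt: str) -> AsyncIterator[dict]:
    """Yield events the way a streaming agent would (illustration only)."""
    yield {"current_tool_use": {"name": "calculator"}}
    for chunk in ["The answer ", "is 1200."]:
        yield {"data": chunk}
    yield {"result": {"stop_reason": "end_turn"}}

async def consume() -> str:
    text = []
    async for event in fake_stream("Calculate 25 * 48"):
        if "data" in event:
            text.append(event["data"])  # accumulate text chunks as they arrive
        elif "current_tool_use" in event:
            pass                        # tool-use notification event
        elif "result" in event:
            pass                        # final result event with metrics
    return "".join(text)

print(asyncio.run(consume()))  # The answer is 1200.
```

Consumers dispatch on the keys present in each event, which is exactly how the real handlers below distinguish text chunks, tool use, and the final result.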
```python
import asyncio

from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator], callback_handler=None)

async def stream_response():
    prompt = "Calculate 25 * 48 and explain the result"
    async for event in agent.stream_async(prompt):
        if "data" in event:
            # Text chunks as they're generated
            print(event["data"], end="", flush=True)
        elif "current_tool_use" in event and event["current_tool_use"].get("name"):
            print(f"\n[Using tool: {event['current_tool_use']['name']}]")
        elif "result" in event:
            # Final result with metrics
            print(f"\nTotal tokens: {event['result'].metrics.accumulated_usage}")

asyncio.run(stream_response())
```

```typescript
import { Agent, tool } from '@strands-agents/sdk'
import { z } from 'zod'

const calculatorTool = tool({
  name: 'calculator',
  description: 'Perform calculations',
  inputSchema: z.object({ expression: z.string() }),
  handler: async ({ expression }) => ({ result: eval(expression) }),
})

const agent = new Agent({ tools: [calculatorTool] })

// Stream events during execution
for await (const event of agent.stream('Calculate 25 * 48')) {
  if (event.type === 'text') {
    process.stdout.write(event.text)
  } else if (event.type === 'toolUse') {
    console.log(`\n[Tool: ${event.name}]`)
  } else if (event.type === 'result') {
    console.log('\nDone:', event.result.stopReason)
  }
}
```

## Structured Output with Schema Validation

Get type-safe, validated responses using Pydantic models (Python) or Zod schemas (TypeScript).
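Under the hood, structured output amounts to parsing the model's JSON response and validating it against the declared schema. A minimal plain-Python sketch of that validation step (illustrative only, not the SDK's implementation; Pydantic and Zod do this far more thoroughly):

```python
import json

# A simple field -> expected-type schema (stand-in for a Pydantic model)
SCHEMA = {"name": str, "age": int, "occupation": str, "email": str}

def parse_structured(raw: str) -> dict:
    """Parse model output and check it against the schema, raising on mismatch."""
    data = json.loads(raw)
    for field, expected in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected):
            raise TypeError(f"{field} should be {expected.__name__}")
    return data

raw = ('{"name": "John Smith", "age": 30, '
       '"occupation": "software engineer", "email": "john.smith@example.com"}')
person = parse_structured(raw)
print(person["name"], person["age"])  # John Smith 30
```

The real implementations add coercion, constraints (such as the age range below), and typed accessors, but the parse-then-validate shape is the same.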
```python
from pydantic import BaseModel, Field

from strands import Agent

class PersonInfo(BaseModel):
    """Information about a person"""
    name: str = Field(description="Full name of the person")
    age: int = Field(description="Age in years", ge=0, le=150)
    occupation: str = Field(description="Current occupation")
    email: str = Field(description="Email address")

agent = Agent()

# Extract structured data from unstructured text
result = agent(
    "John Smith is a 30-year-old software engineer at john.smith@example.com",
    structured_output_model=PersonInfo
)

# Access typed, validated output
person: PersonInfo = result.structured_output
print(f"Name: {person.name}")       # "John Smith"
print(f"Age: {person.age}")         # 30
print(f"Job: {person.occupation}")  # "software engineer"
print(f"Email: {person.email}")     # "john.smith@example.com"
```

```typescript
import { Agent } from '@strands-agents/sdk'
import { z } from 'zod'

const PersonSchema = z.object({
  name: z.string().describe('Full name of the person'),
  age: z.number().min(0).max(150).describe('Age in years'),
  occupation: z.string().describe('Current occupation'),
  email: z.string().email().describe('Email address'),
})

const agent = new Agent({ structuredOutputSchema: PersonSchema })

const result = await agent.invoke(
  'John Smith is a 30-year-old software engineer at john.smith@example.com'
)

// Typed output with inference
const person = result.structuredOutput
console.log(`Name: ${person.name}`)  // "John Smith"
console.log(`Age: ${person.age}`)    // 30
```

## Graph Multi-Agent Pattern

Create deterministic multi-agent workflows where agents execute in a defined order based on dependencies.

```python
from strands import Agent
from strands.multiagent import GraphBuilder

# Create specialized agents
researcher = Agent(
    name="researcher",
    system_prompt="You are a research specialist. Gather information on the given topic."
)
analyst = Agent(
    name="analyst",
    system_prompt="You analyze research findings and identify key insights."
)
writer = Agent(
    name="writer",
    system_prompt="You write clear, concise reports based on analysis."
)

# Build the graph with dependencies
builder = GraphBuilder()
builder.add_node(researcher, "research")
builder.add_node(analyst, "analysis")
builder.add_node(writer, "report")

# Define execution order: research -> analysis -> report
builder.add_edge("research", "analysis")
builder.add_edge("analysis", "report")
builder.set_entry_point("research")

# Optional safety limits
builder.set_execution_timeout(600)  # 10-minute timeout

graph = builder.build()

# Execute the workflow
result = graph("Research the impact of AI on healthcare and create a report")
print(f"Status: {result.status}")
print(f"Execution order: {[n.node_id for n in result.execution_order]}")
print(f"Final report: {result.results['report'].result}")
```

```typescript
import { Agent, Graph } from '@strands-agents/sdk'

const researcher = new Agent({
  id: 'researcher',
  systemPrompt: 'You are a research specialist.',
})
const analyst = new Agent({
  id: 'analyst',
  systemPrompt: 'You analyze findings and identify insights.',
})
const writer = new Agent({
  id: 'writer',
  systemPrompt: 'You write clear reports.',
})

const graph = new Graph({
  nodes: [researcher, analyst, writer],
  edges: [
    ['researcher', 'analyst'],
    ['analyst', 'writer'],
  ],
  maxSteps: 10,
})

const result = await graph.invoke('Research AI in healthcare')
console.log('Status:', result.status)
console.log('Report:', result.results.get('writer')?.output)
```

## Swarm Multi-Agent Pattern

Create collaborative agent teams that autonomously coordinate and hand off tasks.

```python
from strands import Agent
from strands.multiagent import Swarm

# Create specialized agents with descriptions
researcher = Agent(
    name="researcher",
    system_prompt="You research topics thoroughly. Hand off to coder for implementation."
)
coder = Agent(
    name="coder",
    system_prompt="You write clean, tested code. Hand off to reviewer for review."
)
reviewer = Agent(
    name="reviewer",
    system_prompt="You review code for quality and security issues."
)

# Create a swarm: agents coordinate autonomously
swarm = Swarm(
    [researcher, coder, reviewer],
    entry_point=researcher,   # Start with the researcher
    max_handoffs=20,
    max_iterations=20,
    execution_timeout=900.0,  # 15 minutes
)

# Execute: agents hand off to each other as needed
result = swarm("Design and implement a REST API for a todo app")
print(f"Status: {result.status}")
print(f"Agents involved: {[n.node_id for n in result.node_history]}")
print(f"Execution time: {result.execution_time}ms")
```

```typescript
import { Agent, Swarm } from '@strands-agents/sdk'

const researcher = new Agent({
  id: 'researcher',
  systemPrompt: 'You research topics. Hand off to coder for implementation.',
  description: 'Research specialist',
})
const coder = new Agent({
  id: 'coder',
  systemPrompt: 'You write clean code. Hand off to reviewer when done.',
  description: 'Code implementation specialist',
})
const reviewer = new Agent({
  id: 'reviewer',
  systemPrompt: 'You review code quality and security.',
  description: 'Code review specialist',
})

const swarm = new Swarm({
  nodes: [researcher, coder, reviewer],
  start: 'researcher',
  maxSteps: 20,
})

const result = await swarm.invoke('Design a REST API for todos')
console.log('Final response:', result.output)
```

## Session Management for Persistence

Persist agent conversations across sessions using file or S3 storage.

```python
from strands import Agent
from strands.session.file_session_manager import FileSessionManager
from strands.session.s3_session_manager import S3SessionManager

# File-based persistence
file_session = FileSessionManager(
    session_id="user-123",
    storage_dir="/path/to/sessions"
)
agent = Agent(session_manager=file_session)

# First conversation
agent("My name is Alice and I'm a software engineer")
agent("I work on machine learning projects")

# Later...
# Create a new agent with the same session
agent2 = Agent(session_manager=FileSessionManager(session_id="user-123"))
result = agent2("What's my name and what do I work on?")
# The agent remembers: "Your name is Alice and you work on ML projects"

# S3-based persistence for distributed systems
s3_session = S3SessionManager(
    session_id="user-456",
    bucket="my-agent-sessions",
    prefix="production/",
    region_name="us-west-2"
)
distributed_agent = Agent(session_manager=s3_session)
```

```typescript
import { Agent, SessionManager, FileStorage, S3Storage } from '@strands-agents/sdk'

// File-based persistence
const fileSession = new SessionManager({
  sessionId: 'user-123',
  storage: {
    snapshot: new FileStorage({ baseDir: './sessions' }),
  },
})

const agent = new Agent({ sessionManager: fileSession })
await agent.invoke('My name is Alice')

// S3-based persistence
const s3Session = new SessionManager({
  sessionId: 'user-456',
  storage: {
    snapshot: new S3Storage({
      bucket: 'my-agent-sessions',
      prefix: 'production/',
      region: 'us-west-2',
    }),
  },
})
```

## MCP (Model Context Protocol) Integration

Connect to MCP servers for extended tool capabilities.
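When an agent connects to several servers, two servers may expose tools with the same name; a common remedy is to prefix each server's tool names when merging them into one registry. A plain-Python sketch of that idea (illustrative only, not the SDK's mechanism):

```python
def merge_tool_registries(registries: dict[str, list[str]]) -> dict[str, str]:
    """Merge per-server tool lists into one registry, prefixing names to avoid clashes."""
    merged = {}
    for prefix, tools in registries.items():
        for tool_name in tools:
            # Map the prefixed public name back to the server's original tool name
            merged[f"{prefix}_{tool_name}"] = tool_name
    return merged

registry = merge_tool_registries({
    "github": ["search", "create_issue"],
    "aws_docs": ["search"],
})
print(sorted(registry))  # ['aws_docs_search', 'github_create_issue', 'github_search']
```

The `prefix` argument shown in the examples below serves this purpose: both servers can expose a `search` tool without colliding.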
```python
import os

from mcp import stdio_client, StdioServerParameters
from mcp.client.streamable_http import streamablehttp_client

from strands import Agent
from strands.tools.mcp import MCPClient

# Create an MCP client for the AWS documentation server
mcp_client = MCPClient(lambda: stdio_client(
    StdioServerParameters(
        command="uvx",
        args=["awslabs.aws-documentation-mcp-server@latest"]
    )
))

# Pass the MCP client directly; its lifecycle is managed automatically
agent = Agent(tools=[mcp_client])
result = agent("What is AWS Lambda and how does it work?")

# Or use multiple MCP servers
github_client = MCPClient(
    lambda: streamablehttp_client(
        url="https://api.githubcopilot.com/mcp/",
        headers={"Authorization": f"Bearer {os.getenv('MCP_PAT')}"}
    ),
    prefix="github"  # Prefix tool names to avoid conflicts
)
agent = Agent(tools=[mcp_client, github_client])
```

```typescript
import { Agent, McpClient, StdioClientTransport } from '@strands-agents/sdk'

const mcpClient = new McpClient({
  applicationName: 'My Agent',
  applicationVersion: '1.0.0',
  transport: new StdioClientTransport({
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
  }),
})

const agent = new Agent({ tools: [mcpClient] })
const result = await agent.invoke('List files in the current directory')
```

## Tool Context and State Access

Access agent context and shared state within tools for advanced use cases.

```python
from strands import Agent, tool, ToolContext

@tool(context=True)
def personalized_greeting(name: str, tool_context: ToolContext) -> str:
    """Generate a personalized greeting using context.

    Args:
        name: Name of the person to greet
    """
    # Access the invoking agent
    agent_name = tool_context.agent.name

    # Access invocation state passed during the agent call
    user_preferences = tool_context.invocation_state.get("preferences", {})
    language = user_preferences.get("language", "English")

    if language == "Spanish":
        return f"¡Hola {name}! Soy {agent_name}."
    return f"Hello {name}! I'm {agent_name}."
@tool(context=True)
def get_conversation_length(tool_context: ToolContext) -> int:
    """Get the current conversation length."""
    return len(tool_context.agent.messages)

agent = Agent(name="Assistant", tools=[personalized_greeting, get_conversation_length])

# Pass state that tools can access
result = agent(
    "Greet Maria in her preferred language",
    preferences={"language": "Spanish"}
)
```

```typescript
import { Agent, tool, ToolContext } from '@strands-agents/sdk'
import { z } from 'zod'

const statefulTool = tool({
  name: 'get_user_preference',
  description: 'Get user preference from agent state',
  inputSchema: z.object({ key: z.string() }),
  handler: async ({ key }, context: ToolContext) => {
    // Access agent state
    const value = context.agent.appState.get(key)
    return { key, value: value ?? 'not set' }
  },
})

const agent = new Agent({ tools: [statefulTool] })

// Set state before invocation
agent.appState.set('theme', 'dark')
agent.appState.set('language', 'en')

const result = await agent.invoke('What is my theme preference?')
```

## Class-Based Tools with Shared State

Create tools within classes to share resources and maintain state.

```python
from strands import Agent, tool

class DatabaseTools:
    def __init__(self, connection_string: str):
        self.connection = self._connect(connection_string)
        self.query_cache = {}

    def _connect(self, conn_str: str):
        # Establish the database connection
        return {"connected": True, "db": conn_str}

    @tool
    def query_database(self, sql: str) -> dict:
        """Execute a SQL query.

        Args:
            sql: The SQL query to execute
        """
        if sql in self.query_cache:
            return {"cached": True, "results": self.query_cache[sql]}
        # Execute the query using the shared connection
        results = f"Results for: {sql}"
        self.query_cache[sql] = results
        return {"cached": False, "results": results}

    @tool
    def get_table_schema(self, table_name: str) -> dict:
        """Get schema for a database table.
        Args:
            table_name: Name of the table
        """
        return {"table": table_name, "columns": ["id", "name", "created_at"]}

# Instantiate and use
db_tools = DatabaseTools("postgresql://localhost/mydb")
agent = Agent(tools=[db_tools.query_database, db_tools.get_table_schema])
result = agent("Show me the schema for the users table and count all records")
```

## Callback Handlers for Custom Event Processing

Process agent events with custom callback handlers for logging, monitoring, or UI updates.

```python
import logging

from strands import Agent
from strands_tools import calculator

logger = logging.getLogger("agent_monitor")

def monitoring_callback(**kwargs):
    """Custom callback for monitoring agent behavior."""
    if "data" in kwargs:
        # Text generation event
        logger.info(f"Generated: {kwargs['data']}")
    elif "current_tool_use" in kwargs:
        tool = kwargs["current_tool_use"]
        logger.info(f"Tool invoked: {tool.get('name')} with {tool.get('input')}")
    elif "message" in kwargs:
        msg = kwargs["message"]
        logger.info(f"Message ({msg['role']}): {len(msg.get('content', []))} blocks")

# Create an agent with the custom callback
agent = Agent(
    tools=[calculator],
    callback_handler=monitoring_callback
)
result = agent("Calculate 15% of 250")

# Or disable console output entirely
silent_agent = Agent(tools=[calculator], callback_handler=None)
```

## Async Agent Operations

Use async methods for non-blocking agent operations in web applications.
```python
import asyncio

from strands import Agent
from strands_tools import calculator

async def process_multiple_requests():
    agent = Agent(tools=[calculator])

    # Single async invocation
    result = await agent.invoke_async("What is 100 * 50?")
    print(f"Result: {result.message}")

    # Process multiple requests concurrently
    prompts = [
        "Calculate 25 + 75",
        "What is 144 / 12?",
        "Compute 8 * 8"
    ]
    tasks = [agent.invoke_async(prompt) for prompt in prompts]
    results = await asyncio.gather(*tasks)

    for prompt, result in zip(prompts, results):
        print(f"{prompt} -> {result.message}")

asyncio.run(process_multiple_requests())
```

```typescript
import { Agent, tool } from '@strands-agents/sdk'
import { z } from 'zod'

const calculatorTool = tool({
  name: 'calculator',
  description: 'Perform calculations',
  inputSchema: z.object({ expression: z.string() }),
  handler: async ({ expression }) => ({ result: eval(expression) }),
})

async function processRequests() {
  const agent = new Agent({ tools: [calculatorTool] })

  // Parallel execution
  const results = await Promise.all([
    agent.invoke('Calculate 25 + 75'),
    agent.invoke('What is 144 / 12?'),
    agent.invoke('Compute 8 * 8'),
  ])

  results.forEach((r, i) => console.log(`Result ${i + 1}:`, r.message))
}

processRequests()
```

## Debug Logging

Enable debug logging to troubleshoot agent behavior.

```python
import logging

from strands import Agent

# Enable Strands debug logging
logging.getLogger("strands").setLevel(logging.DEBUG)
logging.basicConfig(
    format="%(levelname)s | %(name)s | %(message)s",
    handlers=[logging.StreamHandler()]
)

# For multi-agent systems
logging.getLogger("strands.multiagent").setLevel(logging.DEBUG)

agent = Agent()
agent("Hello!")  # Will output detailed debug logs
```

## Summary

Strands Agents SDK provides a comprehensive framework for building AI agents that can reason, use tools, and collaborate.
The primary use cases include building conversational assistants with tool capabilities, creating multi-step automation workflows, orchestrating teams of specialized agents for complex tasks, and persisting agent state for long-running applications. The SDK's model-driven approach means agents autonomously decide when and how to use tools based on natural language instructions.

Integration patterns center on the Agent class as the core abstraction, with tools extending its capabilities through decorators or the `tool()` function. Multi-agent systems use Graph for deterministic workflows with defined dependencies, or Swarm for autonomous collaboration where agents hand tasks off to each other. Session managers enable persistence across application restarts using file or S3 storage. MCP integration connects agents to external tool servers that follow the Model Context Protocol standard. All patterns support both synchronous and asynchronous execution, with streaming available for real-time response processing in web applications.
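The agent loop underpinning all of these patterns can be sketched in plain Python: the model proposes either a tool call or a final answer, tools execute, and their results feed the next iteration until the model stops. This is a conceptual illustration with a stubbed model and tools, not the SDK's internals:

```python
def stub_model(messages: list[dict]) -> dict:
    """Stand-in for an LLM: request a tool once, then answer (illustration only)."""
    if not any(m["role"] == "tool" for m in messages):
        return {"stop_reason": "tool_use", "tool": "add", "args": {"a": 2, "b": 3}}
    tool_result = next(m for m in messages if m["role"] == "tool")["content"]
    return {"stop_reason": "end_turn", "text": f"The sum is {tool_result}."}

# Tool registry: name -> callable
TOOLS = {"add": lambda a, b: a + b}

def agent_loop(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = stub_model(messages)  # the model reasons over the full history
        if reply["stop_reason"] == "end_turn":
            return reply["text"]      # a final response ends the loop
        # Execute the selected tool and feed its result back into the history
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(agent_loop("What is 2 + 3?"))  # The sum is 5.
```

Everything else in the SDK layers onto this cycle: streaming surfaces the loop's events as they happen, session managers persist `messages` between runs, and Graph and Swarm coordinate multiple such loops.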