# LangGraph Multi-Agent Swarm

LangGraph Multi-Agent Swarm is a Python library for building collaborative multi-agent systems where specialized AI agents dynamically hand off control to one another based on their expertise. The library enables developers to create sophisticated conversational AI applications where multiple agents work together seamlessly, with the system maintaining memory of which agent was last active across conversation turns. This architecture allows for building complex AI assistants that can route conversations to the most appropriate expert agent automatically.

Built on top of LangGraph, the library provides out-of-the-box support for streaming, short-term and long-term memory, and human-in-the-loop patterns.

The core concept is that each agent in the swarm is a specialized expert with specific tools and capabilities, and agents can transfer control to each other using handoff tools. The system tracks the active agent in state and automatically routes subsequent user messages to the last active agent, ensuring continuity in multi-turn conversations.

## APIs and Functions

### create_swarm - Create Multi-Agent Swarm System

Creates a multi-agent swarm from a list of agents and returns a StateGraph that can be compiled and executed. This is the main entry point for building swarm systems. The function takes agents (typically created with `create_agent`), specifies which agent should be active by default, and optionally accepts custom state schemas and context schemas for advanced use cases.

```python
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Create specialized agents with handoff tools
alice = create_agent(
    model,
    tools=[add, create_handoff_tool(agent_name="Bob")],
    system_prompt="You are Alice, an addition expert.",
    name="Alice",
)

bob = create_agent(
    model,
    tools=[create_handoff_tool(agent_name="Alice", description="Transfer to Alice, she can help with math")],
    system_prompt="You are Bob, you speak like a pirate.",
    name="Bob",
)

# Create swarm and compile with checkpointer for memory
checkpointer = InMemorySaver()
workflow = create_swarm([alice, bob], default_active_agent="Alice")
app = workflow.compile(checkpointer=checkpointer)

# Execute with thread_id for conversation persistence
config = {"configurable": {"thread_id": "1"}}

turn_1 = app.invoke(
    {"messages": [{"role": "user", "content": "i'd like to speak to Bob"}]},
    config,
)
# Output: Bob responds in pirate speak, active_agent is now "Bob"

turn_2 = app.invoke(
    {"messages": [{"role": "user", "content": "what's 5 + 7?"}]},
    config,
)
# Output: Bob transfers to Alice, Alice calculates 5+7=12
```
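Since the compiled swarm is an ordinary LangGraph graph, the streaming support mentioned above works the same way as for any compiled graph. A minimal sketch, reusing `app` and `config` from the example and assuming LangGraph's standard `stream` API with `stream_mode="values"`:

```python
# Minimal streaming sketch (reuses `app` and `config` from above).
# stream_mode="values" yields the full state after each step; the last
# message in each chunk is the most recent agent or tool output.
for chunk in app.stream(
    {"messages": [{"role": "user", "content": "what's 2 + 2?"}]},
    config,
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()
```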
### create_handoff_tool - Create Agent-to-Agent Transfer Tool

Creates a LangChain tool that enables one agent to hand off control to another agent. The tool automatically updates the parent graph state with the target agent name and appends a tool message to the conversation history. When an LLM calls this tool, it triggers a transfer of control to the specified agent using LangGraph's Command objects.

```python
from langgraph_swarm import create_handoff_tool

# Create handoff tool with default name and description
transfer_to_bob = create_handoff_tool(agent_name="Bob")
# Tool name: "transfer_to_bob"
# Description: "Ask agent 'Bob' for help"

# Create handoff tool with custom name and description
transfer_to_specialist = create_handoff_tool(
    agent_name="specialist_agent",
    name="call_specialist",
    description="Transfer user to the specialist when advanced technical help is needed.",
)

# Use in agent creation
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

model = ChatOpenAI(model="gpt-4o")

general_agent = create_agent(
    model,
    tools=[transfer_to_specialist],
    system_prompt="You are a general assistant. Transfer to specialist for complex questions.",
    name="general_agent",
)

specialist_agent = create_agent(
    model,
    tools=[create_handoff_tool(agent_name="general_agent")],
    system_prompt="You are a specialist. Handle complex technical questions.",
    name="specialist_agent",
)
```

### add_active_agent_router - Add Routing Logic to StateGraph

Adds conditional routing logic to a StateGraph that routes execution to the currently active agent. This function is called internally by `create_swarm` but can be used manually for advanced customization. It adds a conditional edge from the START node that reads the `active_agent` state field and routes to the appropriate agent node.

```python
from langgraph.graph import StateGraph
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent
from langgraph_swarm import SwarmState, create_handoff_tool, add_active_agent_router

model = "openai:gpt-4o"

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

alice = create_agent(
    model,
    tools=[add, create_handoff_tool(agent_name="Bob")],
    system_prompt="You are Alice, an addition expert.",
    name="Alice",
)

bob = create_agent(
    model,
    tools=[create_handoff_tool(agent_name="Alice")],
    system_prompt="You are Bob, you speak like a pirate.",
    name="Bob",
)

# Manual swarm creation with custom routing
workflow = (
    StateGraph(SwarmState)
    .add_node(alice, destinations=("Bob",))
    .add_node(bob, destinations=("Alice",))
)

# Add the active agent router
workflow = add_active_agent_router(
    builder=workflow,
    route_to=["Alice", "Bob"],
    default_active_agent="Alice",
)

# Compile and use
checkpointer = InMemorySaver()
app = workflow.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "1"}}
result = app.invoke(
    {"messages": [{"role": "user", "content": "Hello"}]},
    config,
)
```
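Because the router reads and persists `active_agent` in checkpointed state, you can verify which agent will receive the next turn. A small sketch, reusing `app` and `config` from above and assuming LangGraph's standard `get_state` API on compiled graphs:

```python
# Inspect the checkpointed swarm state for this thread. `values` holds the
# state dict, including the "active_agent" field the router dispatches on.
snapshot = app.get_state(config)
print(snapshot.values.get("active_agent"))  # e.g. "Alice"
```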
### SwarmState - State Schema for Multi-Agent Systems

The default state schema for swarm systems, extending LangGraph's MessagesState with an `active_agent` field. This schema tracks the conversation messages and which agent is currently handling the conversation. Users can extend this class to add custom state fields for their specific use cases.

```python
from langgraph_swarm import SwarmState

# Using default SwarmState
# Contains: messages (list[AnyMessage]) and active_agent (str | None)

# Extending SwarmState with custom fields
class CustomSwarmState(SwarmState):
    """Custom state with additional fields"""
    user_id: str
    task_description: str | None

# Example: Custom state for tracking user context
from collections import defaultdict

RESERVATIONS = defaultdict(lambda: {"flight_info": {}, "hotel_info": {}})

class ReservationState(SwarmState):
    """State for customer support with reservations"""
    user_id: str

# Use in dynamic prompts
def make_prompt(base_system_prompt: str):
    def prompt(state: dict, config):
        user_id = config["configurable"].get("user_id")
        current_reservation = RESERVATIONS[user_id]
        system_prompt = (
            base_system_prompt
            + f"\n\nUser's active reservation: {current_reservation}"
        )
        return [{"role": "system", "content": system_prompt}] + state["messages"]
    return prompt
```

### Customer Support Pattern - Specialized Booking Agents

A practical implementation pattern for building customer support systems with multiple specialized agents. This example demonstrates flight and hotel booking agents that share reservation data and dynamically inject user context into prompts.

```python
import datetime
from collections import defaultdict

from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

# Shared reservation storage
RESERVATIONS = defaultdict(lambda: {"flight_info": {}, "hotel_info": {}})

# Flight booking tools
def search_flights(departure_airport: str, arrival_airport: str, date: str) -> list[dict]:
    """Search flights by airport codes and date (YYYY-MM-DD)"""
    return [
        {
            "departure_airport": "BOS",
            "arrival_airport": "JFK",
            "airline": "Jet Blue",
            "date": date,
            "id": "1",
        }
    ]

def book_flight(flight_id: str, config: RunnableConfig) -> str:
    """Book a flight by ID"""
    user_id = config["configurable"].get("user_id")
    RESERVATIONS[user_id]["flight_info"] = {"id": flight_id, "status": "booked"}
    return "Successfully booked flight"

# Hotel booking tools
def search_hotels(location: str) -> list[dict]:
    """Search hotels by city name"""
    return [{"location": "New York", "name": "McKittrick Hotel", "id": "1"}]

def book_hotel(hotel_id: str, config: RunnableConfig) -> str:
    """Book a hotel by ID"""
    user_id = config["configurable"].get("user_id")
    RESERVATIONS[user_id]["hotel_info"] = {"id": hotel_id, "status": "booked"}
    return "Successfully booked hotel"

# Dynamic prompt with user context
def make_prompt(base_system_prompt: str):
    def prompt(state: dict, config: RunnableConfig) -> list:
        user_id = config["configurable"].get("user_id")
        current_reservation = RESERVATIONS[user_id]
        system_prompt = (
            base_system_prompt
            + f"\n\nUser's active reservation: {current_reservation}"
            + f"\nToday is: {datetime.datetime.now()}"
        )
        return [{"role": "system", "content": system_prompt}] + state["messages"]
    return prompt

# Create handoff tools
transfer_to_hotel = create_handoff_tool(
    agent_name="hotel_assistant",
    description="Transfer to hotel-booking assistant for hotel searches and bookings.",
)
transfer_to_flight = create_handoff_tool(
    agent_name="flight_assistant",
    description="Transfer to flight-booking assistant for flight searches and bookings.",
)

# Create specialized agents
flight_assistant = create_agent(
    model,
    tools=[search_flights, book_flight, transfer_to_hotel],
    system_prompt=make_prompt("You are a flight booking assistant"),
    name="flight_assistant",
)

hotel_assistant = create_agent(
    model,
    tools=[search_hotels, book_hotel, transfer_to_flight],
    system_prompt=make_prompt("You are a hotel booking assistant"),
    name="hotel_assistant",
)

# Build and use the swarm; compile with a checkpointer so the thread_id
# below actually persists conversation state between turns
builder = create_swarm([flight_assistant, hotel_assistant], default_active_agent="flight_assistant")
app = builder.compile(checkpointer=InMemorySaver())

config = {"configurable": {"thread_id": "user_123", "user_id": "user_123"}}
response = app.invoke(
    {"messages": [{"role": "user", "content": "I need to book a flight from Boston to NYC tomorrow"}]},
    config,
)
```
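As a hedged continuation of this example (the follow-up message and printed output are illustrative), a second turn on the same thread can hand off to `hotel_assistant`; both agents read and write the shared `RESERVATIONS` entry keyed by `user_id`:

```python
# Hypothetical follow-up turn on the same thread: the flight assistant can
# hand off to hotel_assistant, and both see RESERVATIONS["user_123"].
followup = app.invoke(
    {"messages": [{"role": "user", "content": "Now book me a hotel near JFK for the same night"}]},
    config,
)
print(RESERVATIONS["user_123"])
# e.g. {'flight_info': {'id': '1', 'status': 'booked'}, 'hotel_info': {...}}
```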
### Research Assistant Pattern - Planning and Execution Agents

A two-phase agent pattern for research tasks where a planner agent clarifies requirements and creates a plan, then hands off to a researcher agent for implementation. This pattern demonstrates sequential agent collaboration with the ability to request re-planning.

```python
from langchain.chat_models import init_chat_model
from langchain.agents import create_agent
from langgraph_swarm import create_handoff_tool, create_swarm

model = init_chat_model(model="gpt-4o", model_provider="openai")

# Documentation fetching tool
def fetch_doc(url: str) -> str:
    """Fetch documentation from a URL and return its contents"""
    # Implementation would fetch and parse the URL
    return "Documentation content..."

# Handoff tools
transfer_to_planner = create_handoff_tool(
    agent_name="planner_agent",
    description="Transfer to planner for clarifying questions or to request a new plan.",
)
transfer_to_researcher = create_handoff_tool(
    agent_name="researcher_agent",
    description="Transfer to researcher to perform research and implement the solution.",
)

# Planner prompt
planner_prompt = """You are a planning assistant. Your role is to:
1. Ask clarifying questions about the user's request
2. Read relevant documentation using the fetch_doc tool
3. Create a structured plan with scope and recommended documentation URLs
4. Transfer to the researcher_agent when ready

Recommended documentation: https://langchain-ai.github.io/langgraph/llms.txt
Provide the 3 most relevant documentation URLs for the task.
"""

# Researcher prompt
researcher_prompt = """You are a research assistant. Your role is to:
1. Follow the planner's guidance and scope
2. Fetch recommended documentation using fetch_doc
3. Implement the solution based on the plan
4. Transfer back to planner_agent if you need clarification or replanning
"""

# Create agents
planner_agent = create_agent(
    model,
    system_prompt=planner_prompt,
    tools=[fetch_doc, transfer_to_researcher],
    name="planner_agent",
)

researcher_agent = create_agent(
    model,
    system_prompt=researcher_prompt,
    tools=[fetch_doc, transfer_to_planner],
    name="researcher_agent",
)

# Build swarm starting with planner
agent_swarm = create_swarm(
    [planner_agent, researcher_agent],
    default_active_agent="planner_agent",
)
app = agent_swarm.compile()

# Execute research task
config = {"configurable": {"thread_id": "research_session_1"}}
result = app.invoke(
    {"messages": [{"role": "user", "content": "Help me implement streaming in LangGraph"}]},
    config,
)
```
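The `fetch_doc` tool above is a stub. A minimal sketch of a real implementation, using only the standard library and skipping the retries and HTML-to-text cleanup a production version would likely want:

```python
import urllib.request

def fetch_doc(url: str) -> str:
    """Fetch documentation from a URL and return its raw contents."""
    # Minimal sketch: no retries, auth, or HTML parsing
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")
```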
### Custom Handoff Tool with Additional Parameters

Creating custom handoff tools that accept additional parameters populated by the LLM, such as task descriptions or context information to pass to the next agent. This enables richer communication between agents beyond just transferring control.

```python
from typing import Annotated

from langchain.tools import tool, BaseTool, InjectedToolCallId
from langchain.messages import ToolMessage
from langgraph.types import Command
from langgraph.prebuilt import InjectedState

def create_custom_handoff_tool(*, agent_name: str, name: str | None, description: str | None) -> BaseTool:
    @tool(name, description=description)
    def handoff_to_agent(
        # Additional parameter for LLM to populate
        task_description: Annotated[str, "Detailed description of what the next agent should do, including all relevant context."],
        # Injected parameters from LangGraph
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
        )
        messages = state["messages"]
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            update={
                "messages": messages + [tool_message],
                "active_agent": agent_name,
                "task_description": task_description,
            },
        )
    return handoff_to_agent

# Usage example
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langgraph_swarm import SwarmState, create_swarm

model = ChatOpenAI(model="gpt-4o")

# Create custom handoff tool
transfer_to_specialist = create_custom_handoff_tool(
    agent_name="specialist",
    name="transfer_to_specialist",
    description="Transfer to specialist with detailed task description",
)

coordinator = create_agent(
    model,
    tools=[transfer_to_specialist],
    system_prompt="You are a coordinator. When transferring, provide detailed context.",
    name="coordinator",
)

specialist = create_agent(
    model,
    tools=[],
    system_prompt="You are a specialist. Check 'task_description' in state for your assignment.",
    name="specialist",
)

# The custom tool updates "task_description", so the swarm's state schema
# must include that field for the update to apply
class TaskState(SwarmState):
    """Swarm state extended with the field written by the custom handoff tool."""
    task_description: str | None

workflow = create_swarm(
    [coordinator, specialist],
    default_active_agent="coordinator",
    state_schema=TaskState,
)
app = workflow.compile()
```

### Memory Integration - Short-term and Long-term

Integrating LangGraph's memory capabilities into swarm systems for conversation persistence and cross-thread data storage. Short-term memory (checkpointer) is essential for maintaining conversation state, while long-term memory (store) enables data sharing across sessions.

```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

# Create agents
alice = create_agent(
    model,
    tools=[create_handoff_tool(agent_name="Bob")],
    system_prompt="You are Alice",
    name="Alice",
)

bob = create_agent(
    model,
    tools=[create_handoff_tool(agent_name="Alice")],
    system_prompt="You are Bob",
    name="Bob",
)

# SHORT-TERM MEMORY (Required for multi-turn conversations)
# Stores conversation state and active agent between turns
checkpointer = InMemorySaver()

# LONG-TERM MEMORY (Optional)
# Stores data across different conversation threads
store = InMemoryStore()

workflow = create_swarm([alice, bob], default_active_agent="Alice")

# Compile with both memory types
app = workflow.compile(
    checkpointer=checkpointer,  # Required for conversation continuity
    store=store,  # Optional for cross-session data
)

# Use with thread_id for conversation persistence
config = {"configurable": {"thread_id": "conversation_1"}}

# Turn 1: User talks to Alice
turn_1 = app.invoke(
    {"messages": [{"role": "user", "content": "Transfer me to Bob"}]},
    config,
)
# active_agent is now "Bob"

# Turn 2: Continues with Bob automatically
turn_2 = app.invoke(
    {"messages": [{"role": "user", "content": "Hello"}]},
    config,
)
# Bob responds because active_agent was preserved

# Different conversation thread
config_2 = {"configurable": {"thread_id": "conversation_2"}}
turn_3 = app.invoke(
    {"messages": [{"role": "user", "content": "Hi"}]},
    config_2,
)
# Starts fresh with Alice (default_active_agent)
```
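The store compiled in above is never actually read or written in this example. As a sketch of one way tools could use it, assuming `langgraph.config.get_store` (available in recent LangGraph versions) to access the store the graph was compiled with; the namespace and key choices are illustrative:

```python
from langgraph.config import get_store

def save_user_fact(fact: str) -> str:
    """Remember a fact about the user across conversation threads."""
    store = get_store()  # the InMemoryStore passed to .compile(...)
    # Namespace and key are illustrative choices, not library requirements
    store.put(("facts",), fact[:40], {"fact": fact})
    return "Saved."

def recall_user_facts() -> str:
    """List previously saved facts, regardless of thread."""
    store = get_store()
    items = store.search(("facts",))
    return "\n".join(item.value["fact"] for item in items)

# Give the tools to an agent, e.g.:
# alice = create_agent(model, tools=[save_user_fact, recall_user_facts, ...], ...)
```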
### Context Schema - Configuring Workflow Context

The `create_swarm` function supports a `context_schema` parameter that specifies the schema for the context object passed to the workflow. This allows you to define structured configuration that will be available to all agents in the swarm.

```python
from typing_extensions import TypedDict
from langchain.agents import create_agent
from langgraph_swarm import create_swarm

class WorkflowContext(TypedDict):
    """Context schema for the workflow"""
    user_id: str
    environment: str
    max_iterations: int

# Create agents
alice = create_agent(
    "openai:gpt-4o",
    tools=[],
    system_prompt="You are Alice",
    name="Alice",
)

bob = create_agent(
    "openai:gpt-4o",
    tools=[],
    system_prompt="You are Bob",
    name="Bob",
)

# Create swarm with context schema
workflow = create_swarm(
    [alice, bob],
    default_active_agent="Alice",
    context_schema=WorkflowContext,
)
app = workflow.compile()

# Use with context in config
config = {
    "configurable": {
        "thread_id": "1",
        "user_id": "user_123",
        "environment": "production",
        "max_iterations": 10,
    }
}
result = app.invoke(
    {"messages": [{"role": "user", "content": "Hello"}]},
    config,
)
```
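Nothing in this example consumes the context values yet. One plausible consumer, mirroring the dynamic-prompt pattern from the customer support example, is a prompt function that reads them back out of the config; the field handling and fallback value here are illustrative:

```python
from langchain_core.runnables import RunnableConfig

def make_context_prompt(base_system_prompt: str):
    """Dynamic prompt that injects WorkflowContext values into the system message."""
    def prompt(state: dict, config: RunnableConfig) -> list:
        configurable = config["configurable"]
        system_prompt = (
            base_system_prompt
            + f"\nUser: {configurable.get('user_id')}"
            + f"\nEnvironment: {configurable.get('environment', 'development')}"
        )
        return [{"role": "system", "content": system_prompt}] + state["messages"]
    return prompt

# e.g. alice = create_agent("openai:gpt-4o", tools=[],
#                           system_prompt=make_context_prompt("You are Alice"), name="Alice")
```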
## Summary

LangGraph Multi-Agent Swarm provides a robust framework for building collaborative AI systems where multiple specialized agents work together seamlessly. The primary use cases include customer support systems with domain-specific agents (like separate flight and hotel booking assistants), research and planning workflows with distinct planning and execution phases, and any application requiring dynamic routing between AI specialists. The library handles the complexity of state management, agent transitions, and conversation memory, allowing developers to focus on defining agent behaviors and tools.

Integration patterns center around three key concepts: creating specialized agents with `create_agent`, connecting them with `create_handoff_tool`, and orchestrating the system with `create_swarm`. The library integrates seamlessly with LangGraph's ecosystem, supporting streaming responses, memory persistence through checkpointers and stores, and human-in-the-loop patterns. Developers can extend the base functionality through custom state schemas for private agent message histories, custom handoff tools with additional parameters, dynamic prompts that inject context-specific information, and context schemas for structured workflow configuration. The architecture ensures type safety through automatic conversion of agent names to Literal types and provides flexible customization options while maintaining simple defaults for common use cases.