### Setup: Install LangGraph and AutoGen

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary libraries, LangGraph and AutoGen, using pip. This is a prerequisite for running the integration examples.

```python
%pip install autogen langgraph
```

--------------------------------

### Install LangGraph

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the LangGraph library using pip. This is the initial setup step for using LangGraph in your Python projects.

```bash
pip install -U langgraph
```

--------------------------------

### Create a Basic LangGraph Hello World Example in Python

Source: https://langchain-ai.github.io/langgraph/index

This example demonstrates how to set up a minimal LangGraph. It defines a simple mock LLM node, constructs a StateGraph, adds the node and edges, compiles the graph, and then invokes it with an initial user message to get a 'hello world' response.

```python
from langgraph.graph import StateGraph, MessagesState, START, END

def mock_llm(state: MessagesState):
    return {"messages": [{"role": "ai", "content": "hello world"}]}

graph = StateGraph(MessagesState)
graph.add_node(mock_llm)
graph.add_edge(START, "mock_llm")
graph.add_edge("mock_llm", END)
graph = graph.compile()

graph.invoke({"messages": [{"role": "user", "content": "hi!"}]})
```

--------------------------------

### Extended Simple Workflow Example

Source: https://langchain-ai.github.io/langgraph/llms-full

An extended example showcasing a simple workflow with two tasks: checking if a number is even and formatting a message based on the result. It includes setup for tasks, an entrypoint, and workflow execution with a checkpointer.

```Python
import uuid

from langgraph.func import entrypoint, task
from langgraph.checkpoint.memory import InMemorySaver

# Task that checks if a number is even
@task
def is_even(number: int) -> bool:
    return number % 2 == 0

# Task that formats a message
@task
def format_message(is_even: bool) -> str:
    return "The number is even." if is_even else "The number is odd."

# Create a checkpointer for persistence
checkpointer = InMemorySaver()

@entrypoint(checkpointer=checkpointer)
def workflow(inputs: dict) -> str:
    """Simple workflow to classify a number."""
    even = is_even(inputs["number"]).result()
    return format_message(even).result()

# Run the workflow with a unique thread ID
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
result = workflow.invoke({"number": 7}, config=config)
print(result)
```

--------------------------------

### Install LangGraph CLI with uvx

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs and runs the LangGraph CLI using uvx, a tool for managing Python versions and environments. This is the recommended method for installation.

```bash
uvx --from "langgraph-cli[inmem]" langgraph dev --help
```

--------------------------------

### Install LangGraph and Anthropic Packages

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary LangGraph and LangChain Anthropic packages using pip. This is a prerequisite for running the multi-agent example.

```Python
# %%capture --no-stderr
# %pip install -U langgraph langchain-anthropic
```

--------------------------------

### Install Packages and Set Environment Variables

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary LangGraph and Langchain-OpenAI packages and sets the OPENAI_API_KEY environment variable using a secure prompt.

```shell
pip install -U langgraph langchain-openai
```
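The secure prompt for `OPENAI_API_KEY` mentioned in the description is not shown in the snippet above; a minimal sketch, reusing the `_set_env` helper pattern from the "Environment Setup: Set OpenAI API Key" snippet later in this collection:

```python
import getpass
import os

# Prompt for the key only if it is not already set in the environment
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")
```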
--------------------------------

### Install LangGraph and Dependencies

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs necessary packages for LangGraph development, including `langgraph`, `langchain-openai`, and `langmem`, using pip.

```shell
pip install -U langgraph langchain-openai langmem
```

--------------------------------

### Install langchain-mcp-adapters

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the `langchain-mcp-adapters` library, which is required to use MCP tools within LangGraph agents.

```bash
pip install langchain-mcp-adapters
```

--------------------------------

### Create New LangGraph Project

Source: https://langchain-ai.github.io/langgraph/llms-full

Initializes a new LangGraph application from a template. This command can be run directly or via `uv` for recommended installation.

```bash
langgraph new
```

```bash
uvx --from "langgraph-cli[inmem]" langgraph new
```

--------------------------------

### Specify Entry Point in LangGraph

Source: https://langchain-ai.github.io/langgraph/llms-full

Illustrates how to define the starting node of the graph by adding an edge from the special START node to the initial node using `add_edge`. This sets the first node to be executed when the graph begins.

```Python
from langgraph.graph import START

graph.add_edge(START, "node_a")
```

--------------------------------

### Example: Accessing Long-Term Memory

Source: https://langchain-ai.github.io/langgraph/llms-full

Provides a complete example of setting up an `InMemoryStore`, populating it with user data, defining a tool to retrieve this data, and invoking an agent with the store configured. This illustrates the end-to-end process of using long-term memory.

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # (1)!

store.put(  # (2)!
    ("users",),  # (3)!
    "user_123",  # (4)!
    {
        "name": "John Smith",
        "language": "English",
    }  # (5)!
)

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as that provided to `create_react_agent`
    store = get_store()  # (6)!
    user_id = config["configurable"].get("user_id")
    user_info = store.get(("users",), user_id)  # (7)!
    return str(user_info.value) if user_info else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
    store=store  # (8)!
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    config={"configurable": {"user_id": "user_123"}}
)
```

--------------------------------

### Install LangGraph CLI

Source: https://langchain-ai.github.io/langgraph/llms-full

Command to install or upgrade the LangGraph Platform command-line interface, enabling deployment capabilities.

```bash
pip install -U langgraph-cli
```

--------------------------------

### Install LangGraph and Anthropic Packages

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary LangGraph and Anthropic libraries using pip. This is the first step in setting up the environment for LangGraph development.

```shell
pip install -U langgraph langchain_anthropic
```

--------------------------------

### Install LangGraph SDK

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the LangGraph SDK using pip.
This is the primary method for integrating LangGraph's Python SDK into your project.

```bash
pip install langgraph-sdk
```

--------------------------------

### Install LangGraph and Langchain Anthropic

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary packages for LangGraph and Langchain Anthropic using pip.

```shell
pip install --quiet -U langgraph langchain_anthropic
```

--------------------------------

### Install LangGraph and Langchain-OpenAI

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary packages for building LangGraph applications with OpenAI integration. This includes the core LangGraph library and the OpenAI provider for Langchain.

```shell
pip install -U langgraph langchain-openai
```

--------------------------------

### Initialize RemoteGraph Client

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates how to get LangGraph clients (async and sync) and initialize a RemoteGraph object using a deployment URL.

```python
from langgraph_sdk import get_client, get_sync_client
from langgraph.pregel.remote import RemoteGraph

url = "<DEPLOYMENT_URL>"  # replace with your deployment's URL
graph_name = "agent"
client = get_client(url=url)
sync_client = get_sync_client(url=url)
remote_graph = RemoteGraph(graph_name, client=client, sync_client=sync_client)
```

--------------------------------

### Attach Tools to Chat Model (Extended Example)

Source: https://langchain-ai.github.io/langgraph/llms-full

An extended example demonstrating how to attach tools to a chat model. It includes defining a tool, binding it to the model, invoking the model to get a tool call, and then invoking the tool with the extracted call.

```Python
from langchain_core.tools import tool
from langchain.chat_models import init_chat_model

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([multiply])

response_message = model_with_tools.invoke("what's 42 x 7?")
tool_call = response_message.tool_calls[0]

multiply.invoke(tool_call)
```

--------------------------------

### Install LangGraph CLI via Homebrew

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the LangGraph CLI using Homebrew, a package manager for macOS. This command downloads and installs the package and its dependencies.

```bash
brew install langgraph-cli
```

--------------------------------

### Install LangGraph and OpenAI Packages

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary Python packages for LangGraph and OpenAI integration, along with NumPy for potential numerical operations.

```shell
pip install --quiet -U langgraph langchain_openai numpy
```

--------------------------------

### Extended example: streaming LLM tokens from specific nodes

Source: https://langchain-ai.github.io/langgraph/llms-full

This extended example shows a complete LangGraph setup for writing a joke and a poem concurrently. It includes defining the state, nodes for writing, and edges, then demonstrates streaming tokens and filtering them to display only the poem's output.
```Python
from typing import TypedDict

from langgraph.graph import START, StateGraph
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

class State(TypedDict):
    topic: str
    joke: str
    poem: str

def write_joke(state: State):
    topic = state["topic"]
    joke_response = model.invoke(
        [{"role": "user", "content": f"Write a joke about {topic}"}]
    )
    return {"joke": joke_response.content}

def write_poem(state: State):
    topic = state["topic"]
    poem_response = model.invoke(
        [{"role": "user", "content": f"Write a short poem about {topic}"}]
    )
    return {"poem": poem_response.content}

graph = (
    StateGraph(State)
    .add_node(write_joke)
    .add_node(write_poem)
    # write both the joke and the poem concurrently
    .add_edge(START, "write_joke")
    .add_edge(START, "write_poem")
    .compile()
)

for msg, metadata in graph.stream(
    {"topic": "cats"},
    stream_mode="messages",
):
    if msg.content and metadata["langgraph_node"] == "write_poem":
        print(msg.content, end="|", flush=True)
```

--------------------------------

### LangGraph Configuration Example

Source: https://langchain-ai.github.io/langgraph/llms-full

This JSON configuration file specifies dependencies, graphs, and environment variables for a LangGraph application. It includes local packages, specific LangChain packages, and points to a Python file for graph definition.

```json
{
  "dependencies": ["langchain_openai", "./your_package"],
  "graphs": {
    "my_agent": "./your_package/your_file.py:agent"
  },
  "env": "./.env"
}
```

--------------------------------

### Tool Execution Node Example

Source: https://langchain-ai.github.io/langgraph/llms-full

Provides a conceptual example of a tool-executing node that processes tool calls, collects Command objects, and returns them for state updates.

```python
def call_tools(state):
    ...
    commands = [tools_by_name[tool_call["name"]].invoke(tool_call) for tool_call in tool_calls]
    return commands
```

--------------------------------

### Install LangGraph Python Package

Source: https://langchain-ai.github.io/langgraph/index

This code snippet shows how to install the LangGraph library using pip. The '-U' flag ensures that the package is upgraded to the latest version if already installed.

```bash
pip install -U langgraph
```

--------------------------------

### Example Memory Object Structure

Source: https://langchain-ai.github.io/langgraph/llms-full

Provides an example of the dictionary structure returned when retrieving memories from the store, including value, key, namespace, and timestamps.

```Python
memories[-1].dict()
{'value': {'food_preference': 'I like pizza'},
 'key': '07e0caf4-1631-47b7-b15f-65515d4c1843',
 'namespace': ['1', 'memories'],
 'created_at': '2024-10-02T17:22:31.590602+00:00',
 'updated_at': '2024-10-02T17:22:31.590605+00:00'}
```

--------------------------------

### Initialize ToolNode with Multiple Tools

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates how to initialize a ToolNode with a list of functions (tools) that can be called within a workflow. It shows the basic setup for invoking the ToolNode with a sample message, as shown in the sketch after this snippet.

```python
from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

tool_node = ToolNode([get_weather, get_coolest_cities])
tool_node.invoke({"messages": [...]})
```

--------------------------------

### Setup Environment Variables

Source: https://langchain-ai.github.io/langgraph/tutorials/multi_agent/multi-agent-collaboration

Sets up necessary API keys for Anthropic and Tavily by prompting the user if they are not already defined in the environment. This is crucial for the agents to access external services.

```Python
import getpass
import os

def _set_if_undefined(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"Please provide your {var}")

_set_if_undefined("ANTHROPIC_API_KEY")
_set_if_undefined("TAVILY_API_KEY")
```

--------------------------------

### Install LangGraph

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs or updates the LangGraph library to the latest version. This is a prerequisite for using LangGraph functionalities, including subgraphs.

```bash
pip install -U langgraph
```

--------------------------------

### Extended Example: Streaming from Subgraphs

Source: https://langchain-ai.github.io/langgraph/llms-full

A comprehensive example demonstrating how to define and integrate a subgraph into a parent graph, and then stream outputs from both, including namespace information for clarity.

```python
from langgraph.graph import START, StateGraph
from typing import TypedDict

# Define subgraph
class SubgraphState(TypedDict):
    foo: str  # note that this key is shared with the parent graph state
    bar: str

def subgraph_node_1(state: SubgraphState):
    return {"bar": "bar"}

def subgraph_node_2(state: SubgraphState):
    return {"foo": state["foo"] + state["bar"]}

subgraph_builder = StateGraph(SubgraphState)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph = subgraph_builder.compile()

# Define parent graph
class ParentState(TypedDict):
    foo: str

def node_1(state: ParentState):
    return {"foo": "hi! " + state["foo"]}

builder = StateGraph(ParentState)
builder.add_node("node_1", node_1)
builder.add_node("node_2", subgraph)
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
graph = builder.compile()

for chunk in graph.stream(
    {"foo": "foo"},
    stream_mode="updates",
    subgraphs=True,  # (1)!
):
    print(chunk)
```

--------------------------------

### Install LangGraph Dependencies for MCP

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary LangGraph API and SDK versions required for MCP functionality. This ensures compatibility and access to MCP features.

```bash
pip install "langgraph-api>=0.2.3" "langgraph-sdk>=0.1.61"
```

--------------------------------

### Run LangGraph Application

Source: https://langchain-ai.github.io/langgraph/llms-full

Starts the LangGraph application in development mode. This command is used after configuring the app and adding API keys. It can also be executed via `uv`.

```bash
langgraph dev
```

```bash
uvx --from "langgraph-cli[inmem]" --with-editable . langgraph dev
```

--------------------------------

### Install LangGraph CLI for Local Package Awareness

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the LangGraph CLI into the local virtual environment to resolve potential `ModuleNotFoundError` or `ImportError` issues when running `langgraph dev` after installing a local package.
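The `[...]` above elides the input messages. A minimal usage sketch, assuming the documented pattern of passing an `AIMessage` whose `tool_calls` name the tools to run (the call id and arguments here are illustrative):

```python
from langchain_core.messages import AIMessage

# An AI message whose tool_calls trigger the matching tools in the ToolNode
message_with_tool_call = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)
tool_node.invoke({"messages": [message_with_tool_call]})
```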
```bash
python -m pip install "langgraph-cli[inmem]"
```

--------------------------------

### Install LangGraph CLI with pip

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the LangGraph CLI using pip, including in-memory support. This command ensures you have the necessary tools to create applications from templates.

```bash
pip install "langgraph-cli[inmem]" --upgrade
```

--------------------------------

### Initialize and Configure LangGraph StateGraph

Source: https://langchain-ai.github.io/langgraph/llms-full

Shows the basic setup for creating a LangGraph StateGraph, including initializing the builder with the defined State schema and adding individual nodes.

```python
from langgraph.graph import START, StateGraph

# Assuming State and step_1, step_2, step_3 are defined elsewhere
builder = StateGraph(State)

# Add nodes
builder.add_node(step_1)
builder.add_node(step_2)
builder.add_node(step_3)
```

--------------------------------

### Install LangGraph and LangChain OpenAI

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary LangGraph and LangChain OpenAI packages quietly using pip. This is a prerequisite for using LangGraph with OpenAI models.

```python
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_openai
```

--------------------------------

### Install LangGraph CLI via pip

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the LangGraph CLI using pip, a package installer for Python. This is a common method for installing Python packages and their dependencies.

```bash
pip install langgraph-cli
```

--------------------------------

### Install LangGraph and Langchain-Anthropic

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the necessary Python packages for LangGraph and Langchain-Anthropic using pip. This is a prerequisite for setting up the environment and running the LangGraph applications.

```shell
pip install -U langgraph langchain-anthropic
```

--------------------------------

### Using return_direct in Prebuilt LangGraph Agent

Source: https://langchain-ai.github.io/langgraph/llms-full

Shows an extended example of using `return_direct=True` within a prebuilt LangGraph agent. The `add` tool is configured to return directly, and the agent is invoked with a user query that triggers this tool. This illustrates how the immediate return functionality integrates with agent execution.

```python
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool(return_direct=True)
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Mock create_react_agent for demonstration
class MockAgent:
    def __init__(self, model, tools):
        self.model = model
        self.tools = tools

    def invoke(self, input_data):
        print(f"MockAgent invoked with: {input_data}")
        user_message = input_data['messages'][0]['content']
        if "what's 3 + 5?" in user_message:
            # Find and call the 'add' tool with return_direct=True
            for tool_ in self.tools:
                # @tool-decorated functions are BaseTool objects: use .name and .invoke
                if tool_.name == 'add':
                    # Simulate parsing arguments and calling the tool
                    # In a real scenario, the agent would parse '3 + 5'
                    result = tool_.invoke({"a": 3, "b": 5})
                    print(f"Tool result (direct return): {result}")
                    return result
        return "Unknown action"

# Shadow the prebuilt create_react_agent with the mock for this demonstration
def create_react_agent(model, tools):
    return MockAgent(model, tools)

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[add]
)

agent.invoke(
    {"messages": [{"role": "user", "content": "what's 3 + 5?"}]}
)
```

--------------------------------

### Subgraph with Shared State Schema (Full Example)

Source: https://langchain-ai.github.io/langgraph/llms-full

Provides a comprehensive example of using subgraphs with shared state schemas. It defines a subgraph with its own state and nodes, and then integrates it into a parent graph. The example illustrates how data flows between the subgraph and the parent graph through shared state keys.

```python
from typing_extensions import TypedDict
from langgraph.graph.state import StateGraph, START

# Define subgraph
class SubgraphState(TypedDict):
    foo: str  # (1)!
    bar: str  # (2)!

def subgraph_node_1(state: SubgraphState):
    return {"bar": "bar"}

def subgraph_node_2(state: SubgraphState):
    # note that this node is using a state key ('bar') that is only available in the subgraph
    # and is sending update on the shared state key ('foo')
    return {"foo": state["foo"] + state["bar"]}

subgraph_builder = StateGraph(SubgraphState)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph = subgraph_builder.compile()

# Define parent graph
class ParentState(TypedDict):
    foo: str

def node_1(state: ParentState):
    return {"foo": "hi! " + state["foo"]}

builder = StateGraph(ParentState)
builder.add_node("node_1", node_1)
builder.add_node("node_2", subgraph)
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
graph = builder.compile()

for chunk in graph.stream({"foo": "foo"}):
    print(chunk)
```

--------------------------------

### Execute and Stream Graph Execution

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates how to execute a compiled LangGraph and stream the results. It includes a helper function to format the output and an example of calling the graph with user input.

```Python
# Helper function for formatting the stream nicely
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()

inputs = {"messages": [("user", "what is the weather in sf")]}
print_stream(graph.stream(inputs, stream_mode="values"))
```

--------------------------------

### Python: Extended example for streaming arbitrary chat models with tool-calling

Source: https://langchain-ai.github.io/langgraph/llms-full

This extended Python example showcases streaming data from an arbitrary chat model (OpenAI's GPT-4o-mini) within a LangGraph, including tool-calling capabilities. It defines a state, a tool (`get_items`), and a graph node (`call_tool`) to handle tool execution and streaming responses.
```python
import operator
import json
from typing import TypedDict
from typing_extensions import Annotated

from langgraph.config import get_stream_writer  # required by get_items below
from langgraph.graph import StateGraph, START
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()
model_name = "gpt-4o-mini"

async def stream_tokens(model_name: str, messages: list[dict]):
    response = await openai_client.chat.completions.create(
        messages=messages, model=model_name, stream=True
    )
    role = None
    async for chunk in response:
        delta = chunk.choices[0].delta
        if delta.role is not None:
            role = delta.role
        if delta.content:
            yield {"role": role, "content": delta.content}

# this is our tool
async def get_items(place: str) -> str:
    """Use this tool to list items one might find in a place you're asked about."""
    writer = get_stream_writer()
    response = ""
    async for msg_chunk in stream_tokens(
        model_name,
        [
            {
                "role": "user",
                "content": (
                    "Can you tell me what kind of items "
                    f"i might find in the following place: '{place}'. "
                    "List at least 3 such items separating them by a comma. "
                    "And include a brief description of each item."
                ),
            }
        ],
    ):
        response += msg_chunk["content"]
        writer(msg_chunk)
    return response

class State(TypedDict):
    messages: Annotated[list[dict], operator.add]

# this is the tool-calling graph node
async def call_tool(state: State):
    ai_message = state["messages"][-1]
    tool_call = ai_message["tool_calls"][-1]

    function_name = tool_call["function"]["name"]
    if function_name != "get_items":
        raise ValueError(f"Tool {function_name} not supported")

    function_arguments = tool_call["function"]["arguments"]
    arguments = json.loads(function_arguments)

    function_response = await get_items(**arguments)
    tool_message = {
        "tool_call_id": tool_call["id"],
        "role": "tool",
        "name": function_name,
        "content": function_response,
    }
    return {"messages": [tool_message]}

graph = (
    StateGraph(State)
    .add_node(call_tool)
    .add_edge(START, "call_tool")
    .compile()
)

inputs = {
    "messages": [
        {
            "content": None,
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "1",
                    "function": {
                        "arguments": '{"place":"bedroom"}',
                        "name": "get_items",
                    },
                    "type": "function",
                }
            ],
        }
    ]
}

async for chunk in graph.astream(
    inputs,
    stream_mode="custom",
):
    print(chunk["content"], end="|", flush=True)
```

--------------------------------

### Example: Updating Long-Term Memory

Source: https://langchain-ai.github.io/langgraph/llms-full

Illustrates updating long-term memory by setting up an `InMemoryStore` and demonstrating how to use the `put` method to store new data. This example is a precursor to showing how an agent might update this information.

```python
from typing_extensions import TypedDict

from langchain_core.tools import tool
from langgraph.config import get_store
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # (1)!

# Seed the store with initial user data (mirrors the access example earlier in this collection)
store.put(("users",), "user_123", {"name": "John Smith", "language": "English"})
```

--------------------------------

### Environment Setup: Set OpenAI API Key

Source: https://langchain-ai.github.io/langgraph/llms-full

A Python function to securely set the OPENAI_API_KEY environment variable by prompting the user if it's not already set. This is crucial for authentication with OpenAI services.

```python
import getpass
import os

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")
```

--------------------------------

### Hierarchical Agent System Setup

Source: https://langchain-ai.github.io/langgraph/llms-full

Illustrates building a hierarchical agent system.
It defines two teams, each with its own supervisor and agents, and then sets up a top-level supervisor to manage these teams. This structure allows for more complex and scalable multi-agent applications.

```python
from typing import Literal

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.types import Command

# Assume 'model' is a pre-configured language model
# model = ChatOpenAI()

# Placeholder for the actual model invocation
class MockModel:
    def invoke(self, prompt):
        class Response:
            content = "Mock response content"
            def __getitem__(self, key):
                if key == 'next_agent':
                    return 'team_1_agent_1'
                if key == 'next_team':
                    return 'team_1_graph'
                return None
        return Response()

model = MockModel()

# --- Define Team 1 ---
def team_1_supervisor(state: MessagesState) -> Command[Literal["team_1_agent_1", "team_1_agent_2", END]]:
    response = model.invoke(...)  # Placeholder for actual model call
    return Command(goto=response["next_agent"])

def team_1_agent_1(state: MessagesState) -> Command[Literal["team_1_supervisor"]]:
    response = model.invoke(...)  # Placeholder for actual model call
    return Command(goto="team_1_supervisor", update={"messages": [response]})

def team_1_agent_2(state: MessagesState) -> Command[Literal["team_1_supervisor"]]:
    response = model.invoke(...)  # Placeholder for actual model call
    return Command(goto="team_1_supervisor", update={"messages": [response]})

# Placeholder for Team1State if it's different from MessagesState
class Team1State(MessagesState):
    pass

team_1_builder = StateGraph(Team1State)
team_1_builder.add_node("team_1_supervisor", team_1_supervisor)
team_1_builder.add_node("team_1_agent_1", team_1_agent_1)
team_1_builder.add_node("team_1_agent_2", team_1_agent_2)
team_1_builder.add_edge(START, "team_1_supervisor")
team_1_graph = team_1_builder.compile()

# --- Define Team 2 ---
# Define Team2State if it's different from MessagesState
class Team2State(MessagesState):
    next: Literal["team_2_agent_1", "team_2_agent_2", "__end__"]

def team_2_supervisor(state: Team2State):
    response = model.invoke(...)  # Placeholder for actual model call
    return Command(goto=response["next"])

def team_2_agent_1(state: Team2State):
    response = model.invoke(...)  # Placeholder for actual model call
    return Command(goto="team_2_supervisor", update={"messages": [response]})

def team_2_agent_2(state: Team2State):
    response = model.invoke(...)  # Placeholder for actual model call
    return Command(goto="team_2_supervisor", update={"messages": [response]})

team_2_builder = StateGraph(Team2State)
team_2_builder.add_node("team_2_supervisor", team_2_supervisor)
team_2_builder.add_node("team_2_agent_1", team_2_agent_1)
team_2_builder.add_node("team_2_agent_2", team_2_agent_2)
team_2_builder.add_edge(START, "team_2_supervisor")
team_2_graph = team_2_builder.compile()

# --- Define Top-Level Supervisor ---
builder = StateGraph(MessagesState)

def top_level_supervisor(state: MessagesState) -> Command[Literal["team_1_graph", "team_2_graph", END]]:
    # you can pass relevant parts of the state to the LLM (e.g., state["messages"])
    # to determine which team to call next. a common pattern is to call the model
    # with a structured output (e.g. force it to return an output with a "next_team" field)
    response = model.invoke(...)
    # Placeholder for actual model call
    # route to one of the teams or exit based on the supervisor's decision
    # if the supervisor returns "__end__", the graph will finish execution
    return Command(goto=response["next_team"])

builder.add_node("top_level_supervisor", top_level_supervisor)
builder.add_node("team_1_graph", team_1_graph)
builder.add_node("team_2_graph", team_2_graph)
builder.add_edge(START, "top_level_supervisor")
builder.add_edge("team_1_graph", "top_level_supervisor")
builder.add_edge("team_2_graph", "top_level_supervisor")
graph = builder.compile()
```

--------------------------------

### Initialize LangGraph Components and Model

Source: https://langchain-ai.github.io/langgraph/llms-full

Initializes the Anthropic Chat model and LangGraph components like `create_react_agent`, `InMemorySaver`, and `entrypoint`. This setup is crucial for building and running the agent network.

```Python
import uuid

from langchain_core.messages import AIMessage
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent
from langgraph.graph import add_messages
from langgraph.func import entrypoint, task
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import interrupt, Command

model = ChatAnthropic(model="claude-3-5-sonnet-latest")
```

--------------------------------

### Simple Supervisor Agent

Source: https://langchain-ai.github.io/langgraph/llms-full

Defines two simple agent functions (agent_1, agent_2) that process state and return content. It then sets up a supervisor using `create_react_agent` to manage these agents, demonstrating a basic tool-calling agent setup.

```python
from typing import Annotated

from langgraph.prebuilt import InjectedState, create_react_agent

# Assume 'model' is a pre-configured language model
# from langchain_openai import ChatOpenAI
# model = ChatOpenAI()

# Placeholder for the actual model invocation
class MockModel:
    def invoke(self, prompt):
        class Response:
            content = "Mock response content"
        return Response()

model = MockModel()

# Define agent functions
def agent_1(state: Annotated[dict, InjectedState]):
    # you can pass relevant parts of the state to the LLM (e.g., state["messages"])
    # and add any additional logic (different models, custom prompts, structured output, etc.)
    response = model.invoke(...)  # Placeholder for actual model call
    # return the LLM response as a string (expected tool response format)
    # this will be automatically turned to ToolMessage
    # by the prebuilt create_react_agent (supervisor)
    return response.content

def agent_2(state: Annotated[dict, InjectedState]):
    response = model.invoke(...)  # Placeholder for actual model call
    return response.content

tools = [agent_1, agent_2]

# the simplest way to build a supervisor w/ tool-calling is to use prebuilt ReAct agent graph
# that consists of a tool-calling LLM node (i.e. supervisor) and a tool-executing node
supervisor = create_react_agent(model, tools)
```
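A minimal invocation sketch for the compiled supervisor, assuming a real chat model is supplied in place of the `MockModel` placeholder above (the user message is illustrative; the prebuilt agent expects the usual `messages` input):

```python
# Hypothetical run of the supervisor; requires a real tool-calling chat model
result = supervisor.invoke(
    {"messages": [{"role": "user", "content": "Route this request to the right agent."}]}
)
print(result["messages"][-1].content)
```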
--------------------------------

### LangGraph Map-Reduce with Send API Example

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates using LangGraph's Send API to implement a map-reduce pattern. It defines a state, nodes for generating topics and jokes, and uses conditional edges to fan out processing to multiple subjects. The example includes code for building and compiling the graph, visualizing it with Mermaid, and streaming execution.

```Python
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send
from typing_extensions import TypedDict, Annotated
import operator

class OverallState(TypedDict):
    topic: str
    subjects: list[str]
    jokes: Annotated[list[str], operator.add]
    best_selected_joke: str

# Private state for the fanned-out joke node: each Send carries a single subject
class JokeState(TypedDict):
    subject: str

def generate_topics(state: OverallState):
    return {"subjects": ["lions", "elephants", "penguins"]}

def generate_joke(state: JokeState):
    joke_map = {
        "lions": "Why don't lions like fast food? Because they can't catch it!",
        "elephants": "Why don't elephants use computers? They're afraid of the mouse!",
        "penguins": "Why don't penguins like talking to strangers at parties? Because they find it hard to break the ice.",
    }
    return {"jokes": [joke_map[state["subject"]]]}

def continue_to_jokes(state: OverallState):
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]

def best_joke(state: OverallState):
    return {"best_selected_joke": "penguins"}

builder = StateGraph(OverallState)
builder.add_node("generate_topics", generate_topics)
builder.add_node("generate_joke", generate_joke)
builder.add_node("best_joke", best_joke)
builder.add_edge(START, "generate_topics")
builder.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
builder.add_edge("generate_joke", "best_joke")
builder.add_edge("best_joke", END)
graph = builder.compile()
```

```Python
from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))
```

```Python
# Call the graph: here we call it to generate a list of jokes
for step in graph.stream({"topic": "animals"}):
    print(step)
```

--------------------------------

### LangGraph Platform Semantic Search Configuration

Source: https://langchain-ai.github.io/langgraph/llms-full

Shows a JSON configuration example for enabling semantic search in LangGraph Platform, specifying the embedding model and dimensions.

```JSON
{
  "...": "...",
  "store": {
    "index": {
      "embed": "openai:text-embedding-3-small",
      "dims": 1536,
      "fields": ["$"]
    }
  }
}
```

--------------------------------

### LangGraph Functional API: Install Dependencies

Source: https://langchain-ai.github.io/langgraph/llms-full

Installs the required Python packages for LangGraph development, including `langchain_anthropic`, `langchain_openai`, and `langgraph`. This command ensures all necessary libraries are available.

```shell
pip install -U langchain_anthropic langchain_openai langgraph
```

--------------------------------

### Extended Example: Streaming Workflow Updates (Sync)

Source: https://langchain-ai.github.io/langgraph/llms-full

An extended synchronous example demonstrating how to stream only the updates to the graph state after each node execution using `stream_mode='updates'`. It includes defining a state, nodes, and wiring them into a graph.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    topic: str
    joke: str

def refine_topic(state: State):
    return {"topic": state["topic"] + " and cats"}

def generate_joke(state: State):
    return {"joke": f"This is a joke about {state['topic']}"}

graph = (
    StateGraph(State)
    .add_node(refine_topic)
    .add_node(generate_joke)
    .add_edge(START, "refine_topic")
    .add_edge("refine_topic", "generate_joke")
    .add_edge("generate_joke", END)
    .compile()
)

for chunk in graph.stream(
    {"topic": "ice cream"},
    stream_mode="updates",  # (2)!
):
    print(chunk)
```

--------------------------------

### LangGraph Tool Calling Agent Example

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates how to create a tool-calling agent in LangGraph. This agent can leverage LLM capabilities to select and use various tools for complex tasks, integrating concepts like memory and planning.

```Python
from typing import Any, Dict, List, Optional
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, END

# Assume 'llm' is an initialized language model with tool calling capabilities
# Assume 'tools' is a list of available tools (functions)

class AgentState(TypedDict):
    input: str
    intermediate_steps: List[tuple[str, str]]
    tool_result: Optional[str]
    next: Optional[str]

def create_agent_graph(llm, tools):
    builder = StateGraph(AgentState)

    def call_llm(state: AgentState):
        # Logic to call the LLM with the current state and tools
        # The LLM's output will determine the next step (e.g., call a tool or return an answer)
        pass

    def call_tool(state: AgentState):
        # Logic to execute the tool selected by the LLM
        # The tool's output will be stored in state['tool_result']
        pass

    builder.add_node("call_llm", call_llm)
    builder.add_node("call_tool", call_tool)

    # Define edges based on LLM output (e.g., if LLM decides to call a tool)
    builder.add_edge("call_llm", "call_tool")
    # Add logic for when the agent should finish or take other actions
    builder.add_edge("call_tool", "call_llm")  # Example: loop back after tool execution

    builder.set_entry_point("call_llm")
    return builder.compile()

# Example Usage:
# from langchain_openai import ChatOpenAI
# from langchain_core.tools import tool
#
# @tool
# def search(query: str) -> str:
#     "Searches for a query."
#     return f"Results for {query}"
#
# llm = ChatOpenAI(model="gpt-4o")
# graph = create_agent_graph(llm, [search])
# result = graph.invoke({"input": "What is the weather today?"})
```

--------------------------------

### Python Authentication Handler Example

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates an asynchronous authentication handler that returns user identity and permissions. It also includes functions to set default metadata and handle thread creation and read actions with permission checks.

```python
from langgraph_sdk import Auth

auth = Auth()

@auth.authenticate
async def authenticate(headers: dict) -> Auth.types.MinimalUserDict:
    ...
    return {
        "identity": "user-123",
        "is_authenticated": True,
        "permissions": ["threads:write", "threads:read"]
    }

def _default(ctx: Auth.types.AuthContext, value: dict):
    metadata = value.setdefault("metadata", {})
    metadata["owner"] = ctx.user.identity
    return {"owner": ctx.user.identity}

@auth.on.threads.create
async def create_thread(ctx: Auth.types.AuthContext, value: dict):
    if "threads:write" not in ctx.permissions:
        raise Auth.exceptions.HTTPException(
            status_code=403,
            detail="Unauthorized"
        )
    return _default(ctx, value)

@auth.on.threads.read
async def rbac_create(ctx: Auth.types.AuthContext, value: dict):
    if "threads:read" not in ctx.permissions and "threads:write" not in ctx.permissions:
        raise Auth.exceptions.HTTPException(
            status_code=403,
            detail="Unauthorized"
        )
    return _default(ctx, value)
```
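To take effect on LangGraph Platform, the `Auth` object is referenced from the app configuration. A sketch of the `langgraph.json` wiring, assuming the handlers above live in a hypothetical `./src/auth.py`:

```json
{
  "auth": {
    "path": "./src/auth.py:auth"
  }
}
```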
--------------------------------

### Single Node Pregel Example

Source: https://langchain-ai.github.io/langgraph/llms-full

Demonstrates a basic Pregel application with a single node that doubles its input string. It uses EphemeralValue channels for input and output.

```python
from langgraph.channels import EphemeralValue
from langgraph.pregel import Pregel, NodeBuilder

node1 = (
    NodeBuilder().subscribe_only("a")
    .do(lambda x: x + x)
    .write_to("b")
)

app = Pregel(
    nodes={"node1": node1},
    channels={
        "a": EphemeralValue(str),
        "b": EphemeralValue(str),
    },
    input_channels=["a"],
    output_channels=["b"],
)

app.invoke({"a": "foo"})
```

--------------------------------

### Python LangGraph Workflow Definition

Source: https://langchain-ai.github.io/langgraph/tutorials/multi_agent/multi-agent-collaboration

Sets up the LangGraph workflow by defining the state, adding nodes for the 'researcher' and 'chart_generator' agents, and specifying the initial edge from START to the 'researcher' node. The graph is then compiled for execution.

```Python
from langgraph.graph import StateGraph, START

# Assuming 'MessagesState', 'research_node', and 'chart_node' are defined elsewhere
workflow = StateGraph(MessagesState)
workflow.add_node("researcher", research_node)
workflow.add_node("chart_generator", chart_node)
workflow.add_edge(START, "researcher")

graph = workflow.compile()
```
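Running the compiled graph is not shown in the source snippet; a hypothetical invocation sketch that streams events from the workflow (the user message and recursion limit are illustrative, and the agent nodes and API keys from the snippets above must be configured):

```Python
# Stream the multi-agent run step by step
events = graph.stream(
    {"messages": [("user", "Fetch some data, then chart it.")]},
    {"recursion_limit": 150},
)
for event in events:
    print(event)
```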