### Install HuggingFace Integration Source: https://docs.langchain.com/oss/python/langgraph/use-graph-api Install the necessary package for HuggingFace integration. ```shell pip install -U "langchain[huggingface]" ``` -------------------------------- ### Sync MongoDB Checkpointer Example Source: https://docs.langchain.com/oss/python/langgraph/add-memory Example of using the synchronous MongoDB checkpointer. Requires a running MongoDB instance. The `DB_URI` should point to your MongoDB connection string. This snippet shows only the imports, model initialization, and connection string; the complete synchronous MongoDB example later in this page fills in the graph setup. ```python from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.mongodb import MongoDBSaver # [!code highlight] model = init_chat_model(model="claude-haiku-4-5-20251001") DB_URI = "localhost:27017" ``` -------------------------------- ### Install app dependencies with uv Source: https://docs.langchain.com/oss/python/langgraph/local-server Synchronizes and installs the project's dependencies using uv after navigating to the app's root directory. ```bash cd path/to/your/app uv sync ``` -------------------------------- ### Install OpenAI Integration Source: https://docs.langchain.com/oss/python/langgraph/use-graph-api Install the necessary package for OpenAI integration. ```shell pip install -U "langchain[openai]" ``` -------------------------------- ### Install Project Dependencies Source: https://docs.langchain.com/oss/python/langgraph/studio Installs necessary LangChain packages using either pip or uv package managers. ```shell pip install langchain langchain-openai ``` ```shell uv add langchain langchain-openai ``` -------------------------------- ### Full LangGraph Agent Example Source: https://docs.langchain.com/oss/python/langgraph/quickstart A comprehensive example defining tools, model, state, LLM node, tool node, and conditional logic for building a LangGraph agent.
```python # Step 1: Define tools and model from langchain.tools import tool from langchain.chat_models import init_chat_model model = init_chat_model( "claude-sonnet-4-6", temperature=0 ) # Define tools @tool def multiply(a: int, b: int) -> int: """Multiply `a` and `b`. """ return a * b @tool def add(a: int, b: int) -> int: """Adds `a` and `b`. """ return a + b @tool def divide(a: int, b: int) -> float: """Divide `a` and `b`. """ return a / b # Augment the LLM with tools tools = [add, multiply, divide] tools_by_name = {tool.name: tool for tool in tools} model_with_tools = model.bind_tools(tools) # Step 2: Define state from langchain.messages import AnyMessage from typing_extensions import TypedDict, Annotated import operator class MessagesState(TypedDict): messages: Annotated[list[AnyMessage], operator.add] llm_calls: int # Step 3: Define model node from langchain.messages import SystemMessage def llm_call(state: dict): """LLM decides whether to call a tool or not""" return { "messages": [ model_with_tools.invoke( [ SystemMessage( content="You are a helpful assistant tasked with performing arithmetic on a set of inputs." 
) ] + state["messages"] ) ], "llm_calls": state.get('llm_calls', 0) + 1 } # Step 4: Define tool node from langchain.messages import ToolMessage def tool_node(state: dict): """Performs the tool call""" result = [] for tool_call in state["messages"][-1].tool_calls: tool = tools_by_name[tool_call["name"]] observation = tool.invoke(tool_call["args"]) result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"])) return {"messages": result} # Step 5: Define logic to determine whether to end from typing import Literal from langgraph.graph import StateGraph, START, END # Conditional edge function to route to the tool node or end based upon whether the LLM made a tool call def should_continue(state: MessagesState) -> Literal["tool_node", END]: """Decide if we should continue the loop or stop based upon whether the LLM made a tool call""" messages = state["messages"] last_message = messages[-1] # If the LLM makes a tool call, then perform an action if last_message.tool_calls: return "tool_node" # Otherwise, we stop (reply to the user) return END # Step 6: Build agent # Build workflow agent_builder = StateGraph(MessagesState) # Add nodes agent_builder.add_node("llm_call", llm_call) agent_builder.add_node("tool_node", tool_node) # Add edges to connect nodes agent_builder.add_edge(START, "llm_call") agent_builder.add_conditional_edges("llm_call", should_continue, ["tool_node", END]) agent_builder.add_edge("tool_node", "llm_call") # Compile the agent agent = agent_builder.compile() ``` -------------------------------- ### Install Google Gemini Integration Source: https://docs.langchain.com/oss/python/langgraph/use-graph-api Install the necessary package for Google Gemini integration. ```shell pip install -U "langchain[google-genai]" ``` -------------------------------- ### Install LangGraph SDK Source: https://docs.langchain.com/oss/python/langgraph/deploy Command to install the necessary Python SDK for interacting with LangGraph deployments. ```shell pip install langgraph-sdk ``` -------------------------------- ### Install AWS Integration Source: https://docs.langchain.com/oss/python/langgraph/use-graph-api Install the necessary package for AWS Bedrock integration.
```shell pip install -U "langchain[aws]" ``` -------------------------------- ### Async Postgres Checkpointer Example Source: https://docs.langchain.com/oss/python/langgraph/add-memory Example of using the asynchronous PostgreSQL checkpointer. Requires `psycopg` and `langgraph-checkpoint-postgres` packages. The `checkpointer.setup()` method should be called once before first use. A `thread_id` is used to manage conversation state. ```python from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver # [!code highlight] model = init_chat_model(model="claude-haiku-4-5-20251001") DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable" async with AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer: # [!code highlight] # await checkpointer.setup() async def call_model(state: MessagesState): response = await model.ainvoke(state["messages"]) return {"messages": response} builder = StateGraph(MessagesState) builder.add_node(call_model) builder.add_edge(START, "call_model") graph = builder.compile(checkpointer=checkpointer) # [!code highlight] config = { "configurable": { "thread_id": "1" # [!code highlight] } } async for chunk in graph.astream( {"messages": [{"role": "user", "content": "hi! I'm bob"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() async for chunk in graph.astream( {"messages": [{"role": "user", "content": "what's my name?"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() ``` -------------------------------- ### Full LangGraph Streaming Example Source: https://docs.langchain.com/oss/python/langgraph/streaming A complete example demonstrating how to set up a LangGraph with a state, a node that generates a joke, and streams both 'updates' and 'custom' messages. 
This includes defining the state, the node function, and compiling the graph. ```python from typing import TypedDict from langgraph.graph import StateGraph, START, END from langgraph.config import get_stream_writer class State(TypedDict): topic: str joke: str def generate_joke(state: State): writer = get_stream_writer() writer({"status": "thinking of a joke..."}) return {"joke": f"Why did the {state['topic']} go to school? To get a sundae education!"} graph = ( StateGraph(State) .add_node(generate_joke) .add_edge(START, "generate_joke") .add_edge("generate_joke", END) .compile() ) for chunk in graph.stream( {"topic": "ice cream"}, stream_mode=["updates", "custom"], version="v2", ): if chunk["type"] == "updates": for node_name, state in chunk["data"].items(): print(f"Node {node_name} updated: {state}") elif chunk["type"] == "custom": print(f"Status: {chunk['data']['status']}") ``` -------------------------------- ### Install Anthropic Integration Source: https://docs.langchain.com/oss/python/langgraph/use-graph-api Install the necessary package for Anthropic integration. ```shell pip install -U "langchain[anthropic]" ``` -------------------------------- ### Install Postgres Dependencies Source: https://docs.langchain.com/oss/python/langgraph/add-memory Install the necessary Python packages for using the PostgreSQL store and checkpointing with LangGraph. ```bash pip install -U "psycopg[binary,pool]" langgraph langgraph-checkpoint-postgres ``` -------------------------------- ### Install LangGraph CLI with uv Source: https://docs.langchain.com/oss/python/langgraph/local-server Installs the LangGraph CLI and in-memory dependencies using uv. Requires Python 3.11 or higher. ```bash uv add "langgraph-cli[inmem]" ``` -------------------------------- ### Full Agent Example with LangGraph Functional API Source: https://docs.langchain.com/oss/python/langgraph/quickstart This snippet shows a complete example of building an agent. 
It includes defining tools (add, multiply, divide), initializing a chat model, and creating the agent's logic using @task and @entrypoint decorators. The agent iteratively calls tools based on the LLM's response until a final answer is reached. Use this for a comprehensive understanding of agent creation with LangGraph. ```python # Step 1: Define tools and model from langchain.tools import tool from langchain.chat_models import init_chat_model model = init_chat_model( "claude-sonnet-4-6", temperature=0 ) # Define tools @tool def multiply(a: int, b: int) -> int: """Multiply `a` and `b`. Args: a: First int b: Second int """ return a * b @tool def add(a: int, b: int) -> int: """Adds `a` and `b`. Args: a: First int b: Second int """ return a + b @tool def divide(a: int, b: int) -> float: """Divide `a` and `b`. Args: a: First int b: Second int """ return a / b # Augment the LLM with tools tools = [add, multiply, divide] tools_by_name = {tool.name: tool for tool in tools} model_with_tools = model.bind_tools(tools) from langgraph.graph import add_messages from langchain.messages import ( SystemMessage, HumanMessage, ToolCall, ) from langchain_core.messages import BaseMessage from langgraph.func import entrypoint, task # Step 2: Define model node @task def call_llm(messages: list[BaseMessage]): """LLM decides whether to call a tool or not""" return model_with_tools.invoke( [ SystemMessage( content="You are a helpful assistant tasked with performing arithmetic on a set of inputs." 
) ] + messages ) # Step 3: Define tool node @task def call_tool(tool_call: ToolCall): """Performs the tool call""" tool = tools_by_name[tool_call["name"]] return tool.invoke(tool_call) # Step 4: Define agent @entrypoint() def agent(messages: list[BaseMessage]): model_response = call_llm(messages).result() while True: if not model_response.tool_calls: break # Execute tools tool_result_futures = [ call_tool(tool_call) for tool_call in model_response.tool_calls ] tool_results = [fut.result() for fut in tool_result_futures] messages = add_messages(messages, [model_response, *tool_results]) model_response = call_llm(messages).result() messages = add_messages(messages, model_response) return messages # Invoke messages = [HumanMessage(content="Add 3 and 4.")] for chunk in agent.stream(messages, stream_mode="updates"): print(chunk) print("\n") ``` -------------------------------- ### Sync Postgres Checkpointer Example Source: https://docs.langchain.com/oss/python/langgraph/add-memory Example of using the synchronous PostgreSQL checkpointer. Requires `psycopg` and `langgraph-checkpoint-postgres` packages. The `checkpointer.setup()` method should be called once before first use. A `thread_id` is used to manage conversation state. 
```python from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.postgres import PostgresSaver # [!code highlight] model = init_chat_model(model="claude-haiku-4-5-20251001") DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable" with PostgresSaver.from_conn_string(DB_URI) as checkpointer: # [!code highlight] # checkpointer.setup() def call_model(state: MessagesState): response = model.invoke(state["messages"]) return {"messages": response} builder = StateGraph(MessagesState) builder.add_node(call_model) builder.add_edge(START, "call_model") graph = builder.compile(checkpointer=checkpointer) # [!code highlight] config = { "configurable": { "thread_id": "1" # [!code highlight] } } for chunk in graph.stream( {"messages": [{"role": "user", "content": "hi! I'm bob"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() for chunk in graph.stream( {"messages": [{"role": "user", "content": "what's my name?"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() ``` -------------------------------- ### Example Checkpoint Data Structure Source: https://docs.langchain.com/oss/python/langgraph/add-memory This is an example of the data structure returned when listing checkpoints. It includes configuration, checkpoint details, metadata, and parent configurations. 
```python [ CheckpointTuple( config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f029ca3-1f029ca3-1f5b-6704-8004-820c16b69a5a'}}, checkpoint={ 'v': 3, 'ts': '2025-05-05T16:01:24.680462+00:00', 'id': '1f029ca3-1f029ca3-1f5b-6704-8004-820c16b69a5a', 'channel_versions': {'__start__': '00000000000000000000000000000005.0.5290678567601859', 'messages': '00000000000000000000000000000006.0.3205149138784782', 'branch:to:call_model': '00000000000000000000000000000006.0.14611156755133758'}, 'versions_seen': {'__input__': {}, '__start__': {'__start__': '00000000000000000000000000000004.0.5736472536395331'}, 'call_model': {'branch:to:call_model': '00000000000000000000000000000005.0.1410174088651449'}}, 'channel_values': {'messages': [HumanMessage(content="hi! I'm bob"), AIMessage(content='Hi Bob! How are you doing today? Is there anything I can help you with?'), HumanMessage(content="what's my name?"), AIMessage(content='Your name is Bob.')]}, }, metadata={'source': 'loop', 'writes': {'call_model': {'messages': AIMessage(content='Your name is Bob.')}}, 'step': 4, 'parents': {}, 'thread_id': '1'}, parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f029ca3-1790-6b0a-8003-baf965b6a38f'}}, pending_writes=[] ), CheckpointTuple( config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f029ca3-1790-6b0a-8003-baf965b6a38f'}}, checkpoint={ 'v': 3, 'ts': '2025-05-05T16:01:23.863421+00:00', 'id': '1f029ca3-1790-6b0a-8003-baf965b6a38f', 'channel_versions': {'__start__': '00000000000000000000000000000005.0.5290678567601859', 'messages': '00000000000000000000000000000006.0.3205149138784782', 'branch:to:call_model': '00000000000000000000000000000006.0.14611156755133758'}, 'versions_seen': {'__input__': {}, '__start__': {'__start__': '00000000000000000000000000000004.0.5736472536395331'}, 'call_model': {'branch:to:call_model': '00000000000000000000000000000005.0.1410174088651449'}}, 
'channel_values': {'messages': [HumanMessage(content="hi! I'm bob"), AIMessage(content='Hi Bob! How are you doing today? Is there anything I can help you with?'), HumanMessage(content="what's my name?")], 'branch:to:call_model': None} }, metadata={'source': 'loop', 'writes': None, 'step': 3, 'parents': {}, 'thread_id': '1'}, parent_config={...}, pending_writes=[('8ab4155e-6b15-b885-9ce5-bed69a2c305c', 'messages', AIMessage(content='Your name is Bob.'))] ), CheckpointTuple( config={...}, checkpoint={ 'v': 3, 'ts': '2025-05-05T16:01:23.863173+00:00', 'id': '1f029ca3-1790-616e-8002-9e021694a0cd', 'channel_versions': {'__start__': '00000000000000000000000000000004.0.5736472536395331', 'messages': '00000000000000000000000000000003.0.7056767754077798', 'branch:to:call_model': '00000000000000000000000000000003.0.22059023329132854'}, 'versions_seen': {'__input__': {}, '__start__': {'__start__': '00000000000000000000000000000001.0.7040775356287469'}, 'call_model': {'branch:to:call_model': '00000000000000000000000000000002.0.9300422176788571'}}, 'channel_values': {'__start__': {'messages': [{'role': 'user', 'content': "what's my name?"}]}, 'messages': [HumanMessage(content="hi! I'm bob"), AIMessage(content='Hi Bob! How are you doing today? 
Is there anything I can help you with?')]} }, metadata={'source': 'input', 'writes': {'__start__': {'messages': [{'role': 'user', 'content': "what's my name?"}]}}, 'step': 2, 'parents': {}, 'thread_id': '1'}, parent_config={...}, pending_writes=[('24ba39d6-6db1-4c9b-f4c5-682aeaf38dcd', 'messages', [{'role': 'user', 'content': "what's my name?"}]), ('24ba39d6-6db1-4c9b-f4c5-682aeaf38dcd', 'branch:to:call_model', None)] ), CheckpointTuple( config={...}, checkpoint={ 'v': 3, 'ts': '2025-05-05T16:01:23.862295+00:00', 'id': '1f029ca3-178d-6f54-8001-d7b180db0c89', 'channel_versions': {'__start__': '00000000000000000000000000000002.0.18673090920108737', 'messages': '00000000000000000000000000000003.0.7056767754077798', 'branch:to:call_model': '00000000000000000000000000000003.0.22059023329132854'}, ``` -------------------------------- ### Install LangGraph CLI Source: https://docs.langchain.com/oss/python/langgraph/studio Installs the LangGraph command-line interface required to run a local agent server for Studio connectivity. ```shell pip install --upgrade "langgraph-cli[inmem]" ``` -------------------------------- ### Install MongoDB Checkpointer Source: https://docs.langchain.com/oss/python/langgraph/add-memory Install the necessary packages for using the MongoDB checkpointer. This includes `pymongo` for database interaction and `langgraph-checkpoint-mongodb` for the checkpointer implementation. ```bash pip install -U pymongo langgraph langgraph-checkpoint-mongodb ``` -------------------------------- ### Install LangGraph package Source: https://docs.langchain.com/oss/python/langgraph/install Installs the core LangGraph library. This is the primary package required to start building stateful, multi-actor applications with LLMs. 
```bash pip install -U langgraph ``` ```bash uv add langgraph ``` -------------------------------- ### Define LangChain Agent Source: https://docs.langchain.com/oss/python/langgraph/studio Example implementation of a simple email agent using LangChain's create_agent factory function. ```python from langchain.agents import create_agent def send_email(to: str, subject: str, body: str): """Send an email""" email = { "to": to, "subject": subject, "body": body } # ... email sending logic return f"Email sent to {to}" agent = create_agent( "gpt-4.1", tools=[send_email], system_prompt="You are an email assistant. Always use the send_email tool.", ) ``` -------------------------------- ### Create a simple LangGraph agent Source: https://docs.langchain.com/oss/python/langgraph/overview A basic 'hello world' example demonstrating how to create a simple agent using LangGraph. It defines a mock LLM function, sets up a state graph, adds nodes and edges, compiles the graph, and invokes it with user input. ```python from langgraph.graph import StateGraph, MessagesState, START, END def mock_llm(state: MessagesState): return {"messages": [{"role": "ai", "content": "hello world"}]} graph = StateGraph(MessagesState) graph.add_node(mock_llm) graph.add_edge(START, "mock_llm") graph.add_edge("mock_llm", END) graph = graph.compile() graph.invoke({"messages": [{"role": "user", "content": "hi!"}]}) ``` -------------------------------- ### Sync Redis Checkpointer Example Source: https://docs.langchain.com/oss/python/langgraph/add-memory Use this for synchronous operations with Redis for state persistence. Ensure Redis is running and accessible. You may need to call `checkpointer.setup()` on first use. 
```python from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.redis import RedisSaver # [!code highlight] model = init_chat_model(model="claude-haiku-4-5-20251001") DB_URI = "redis://localhost:6379" with RedisSaver.from_conn_string(DB_URI) as checkpointer: # [!code highlight] # checkpointer.setup() def call_model(state: MessagesState): response = model.invoke(state["messages"]) return {"messages": response} builder = StateGraph(MessagesState) builder.add_node(call_model) builder.add_edge(START, "call_model") graph = builder.compile(checkpointer=checkpointer) # [!code highlight] config = { "configurable": { "thread_id": "1" # [!code highlight] } } for chunk in graph.stream( {"messages": [{"role": "user", "content": "hi! I'm bob"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() for chunk in graph.stream( {"messages": [{"role": "user", "content": "what's my name?"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() ``` -------------------------------- ### Async Redis Checkpointer Example Source: https://docs.langchain.com/oss/python/langgraph/add-memory Use this for asynchronous operations with Redis for state persistence. Ensure Redis is running and accessible. You may need to call `checkpointer.setup()` on first use. ```python from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.redis.aio import AsyncRedisSaver # [!code highlight] model = init_chat_model(model="claude-haiku-4-5-20251001") DB_URI = "redis://localhost:6379" ``` -------------------------------- ### Install HuggingFace Integration for LangChain Source: https://docs.langchain.com/oss/python/langgraph/sql-agent Installs the necessary LangChain packages to enable HuggingFace model support via pip. 
```shell pip install -U "langchain[huggingface]" ``` -------------------------------- ### Install app dependencies with pip Source: https://docs.langchain.com/oss/python/langgraph/local-server Installs the project's dependencies in editable mode using pip after navigating to the app's root directory. This ensures local changes are reflected. ```bash cd path/to/your/app pip install -e . ``` -------------------------------- ### Install LangGraph using uv Source: https://docs.langchain.com/oss/python/langgraph/overview Install the LangGraph library using the uv package manager. This is an alternative to pip for managing Python packages. ```bash uv add langgraph ``` -------------------------------- ### Implement useStream in Frontend Frameworks Source: https://docs.langchain.com/oss/python/langgraph/frontend/graph-execution Examples of initializing useStream and binding state to UI components in React, Vue, Svelte, and Angular. ```tsx import { useStream } from "@langchain/react"; const AGENT_URL = "http://localhost:2024"; export function PipelineChat() { const stream = useStream({ apiUrl: AGENT_URL, assistantId: "graph_execution_cards", }); return (
); } ``` ```vue ``` ```svelte
``` ```ts import { Component } from "@angular/core"; import { useStream } from "@langchain/angular"; const AGENT_URL = "http://localhost:2024"; @Component({ selector: "app-pipeline-chat", template: `
`, }) export class PipelineChatComponent { PIPELINE_NODES = PIPELINE_NODES; stream = useStream({ apiUrl: AGENT_URL, assistantId: "graph_execution_cards", }); } ``` -------------------------------- ### Extended Example: Filtering by Tags in a LangGraph Source: https://docs.langchain.com/oss/python/langgraph/streaming An extended example showing how to define a state, create tagged LLM models, and process streams, filtering for specific tags. Passing the config through explicitly is required for python < 3.11 for correct context var propagation. ```python from typing import TypedDict from langchain.chat_models import init_chat_model from langgraph.graph import START, StateGraph # The joke_model is tagged with "joke" joke_model = init_chat_model(model="gpt-4.1-mini", tags=["joke"]) # The poem_model is tagged with "poem" poem_model = init_chat_model(model="gpt-4.1-mini", tags=["poem"]) class State(TypedDict): topic: str joke: str poem: str async def call_model(state, config): topic = state["topic"] print("Writing joke...") # Note: Passing the config through explicitly is required for python < 3.11 # Since context var support wasn't added before then: https://docs.python.org/3/library/asyncio-task.html#creating-tasks # The config is passed through explicitly to ensure the context vars are propagated correctly # This is required for Python < 3.11 when using async code. 
# Please see the async section for more details joke_response = await joke_model.ainvoke( [{"role": "user", "content": f"Write a joke about {topic}"}], config, ) print("\n\nWriting poem...") poem_response = await poem_model.ainvoke( [{"role": "user", "content": f"Write a short poem about {topic}"}], config, ) return {"joke": joke_response.content, "poem": poem_response.content} graph = ( StateGraph(State) .add_node(call_model) .add_edge(START, "call_model") .compile() ) # The stream_mode is set to "messages" to stream LLM tokens # The metadata contains information about the LLM invocation, including the tags async for chunk in graph.astream( {"topic": "cats"}, stream_mode="messages", version="v2", ): if chunk["type"] == "messages": msg, metadata = chunk["data"] if metadata["tags"] == ["joke"]: print(msg.content, end="|", flush=True) ``` -------------------------------- ### Install LangChain dependency Source: https://docs.langchain.com/oss/python/langgraph/install Installs the LangChain framework, which is commonly used alongside LangGraph for LLM orchestration and tool definition. Requires Python 3.10 or higher. ```bash pip install -U langchain ``` ```bash uv add langchain ``` -------------------------------- ### Install pytest for LangGraph testing Source: https://docs.langchain.com/oss/python/langgraph/test Command to install the pytest framework, which is the recommended testing tool for LangGraph projects. ```bash pip install -U pytest ``` -------------------------------- ### Install OpenAI Langchain Integration Source: https://docs.langchain.com/oss/python/langgraph/sql-agent Installs the necessary package for integrating OpenAI models with Langchain. This is a prerequisite for using OpenAI chat models. ```shell pip install -U "langchain[openai]" ``` -------------------------------- ### Install LangGraph using pip Source: https://docs.langchain.com/oss/python/langgraph/overview Install the LangGraph library using pip.
This is the standard method for adding the package to your Python environment. ```bash pip install -U langgraph ``` -------------------------------- ### Install LangGraph CLI with pip Source: https://docs.langchain.com/oss/python/langgraph/local-server Installs the LangGraph CLI and in-memory dependencies using pip. Requires Python 3.11 or higher. ```bash pip install -U "langgraph-cli[inmem]" ``` -------------------------------- ### Sync MongoDB Checkpointer Example Source: https://docs.langchain.com/oss/python/langgraph/add-memory Use this for synchronous operations with MongoDB for state persistence. Ensure MongoDB is running and accessible. ```python from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.mongodb import MongoDBSaver model = init_chat_model(model="claude-haiku-4-5-20251001") DB_URI = "localhost:27017" with MongoDBSaver.from_conn_string(DB_URI) as checkpointer: # [!code highlight] def call_model(state: MessagesState): response = model.invoke(state["messages"]) return {"messages": response} builder = StateGraph(MessagesState) builder.add_node(call_model) builder.add_edge(START, "call_model") graph = builder.compile(checkpointer=checkpointer) # [!code highlight] config = { "configurable": { "thread_id": "1" # [!code highlight] } } for chunk in graph.stream( {"messages": [{"role": "user", "content": "hi! I'm bob"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() for chunk in graph.stream( {"messages": [{"role": "user", "content": "what's my name?"}]}, config, # [!code highlight] stream_mode="values" ): chunk["messages"][-1].pretty_print() ``` -------------------------------- ### Async Memory Search with Redis Store Source: https://docs.langchain.com/oss/python/langgraph/add-memory Demonstrates asynchronous memory retrieval using `AsyncRedisStore`. 
This example shows how to query the store for relevant memories based on user input within an asynchronous LangGraph execution. ```python from langgraph.store.redis.aio import AsyncRedisStore # [!code highlight] ``` ```python from langgraph.runtime import Runtime # [!code highlight] ``` ```python async def call_model( # [!code highlight] ``` ```python runtime: Runtime[Context], # [!code highlight] ``` ```python user_id = runtime.context.user_id # [!code highlight] ``` ```python memories = await runtime.store.asearch(namespace, query=str(state["messages"][-1].content)) # [!code highlight] ``` ```python await runtime.store.aput(namespace, str(uuid.uuid4()), {"data": memory}) # [!code highlight] ``` ```python AsyncRedisStore.from_conn_string(DB_URI) as store, # [!code highlight] ``` ```python builder = StateGraph(MessagesState, context_schema=Context) # [!code highlight] ``` ```python store=store, # [!code highlight] ``` ```python context=Context(user_id="1"), # [!code highlight] ``` -------------------------------- ### Async PostgresStore and Saver for LangGraph Source: https://docs.langchain.com/oss/python/langgraph/add-memory Implement an asynchronous LangGraph using AsyncPostgresStore for data persistence and AsyncPostgresSaver for checkpointing. This example demonstrates context-aware message handling and memory storage. 
```python from dataclasses import dataclass from langchain.chat_models import init_chat_model from langgraph.graph import StateGraph, MessagesState, START from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver from langgraph.store.postgres.aio import AsyncPostgresStore # [!code highlight] from langgraph.runtime import Runtime # [!code highlight] import uuid model = init_chat_model(model="claude-haiku-4-5-20251001") @dataclass class Context: user_id: str async def call_model( # [!code highlight] state: MessagesState, runtime: Runtime[Context], # [!code highlight] ): user_id = runtime.context.user_id # [!code highlight] namespace = ("memories", user_id) memories = await runtime.store.asearch(namespace, query=str(state["messages"][-1].content)) # [!code highlight] info = "\n".join([d.value["data"] for d in memories]) system_msg = f"You are a helpful assistant talking to the user. User info: {info}" # Store new memories if the user asks the model to remember last_message = state["messages"][-1] if "remember" in last_message.content.lower(): memory = "User name is Bob" await runtime.store.aput(namespace, str(uuid.uuid4()), {"data": memory}) # [!code highlight] response = await model.ainvoke( [{"role": "system", "content": system_msg}] + state["messages"] ) return {"messages": response} DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable" async with ( AsyncPostgresStore.from_conn_string(DB_URI) as store, # [!code highlight] AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer, ): # await store.setup() # await checkpointer.setup() builder = StateGraph(MessagesState, context_schema=Context) # [!code highlight] builder.add_node(call_model) builder.add_edge(START, "call_model") graph = builder.compile( checkpointer=checkpointer, store=store, # [!code highlight] ) config = {"configurable": {"thread_id": "1"}} async for chunk in graph.astream( {"messages": [{"role": "user", "content": "Hi! Remember: my name is Bob"}]}, config, stream_mode="values", context=Context(user_id="1"), # [!code highlight] ): chunk["messages"][-1].pretty_print() config = {"configurable": {"thread_id": "2"}} async for chunk in graph.astream( {"messages": [{"role": "user", "content": "what is my name?"}]}, config, stream_mode="values", context=Context(user_id="1"), # [!code highlight] ): chunk["messages"][-1].pretty_print() ``` -------------------------------- ### Install Google Generative AI Langchain Integration Source: https://docs.langchain.com/oss/python/langgraph/sql-agent Installs the necessary package for integrating Google Generative AI models (like Gemini) with Langchain. Requires the `google-genai` package. ```shell pip install -U "langchain[google-genai]" ``` -------------------------------- ### Full example: summarize messages Source: https://docs.langchain.com/oss/python/langgraph/add-memory A complete implementation using SummarizationNode to manage conversation state and token limits automatically.
```python
from typing import Any, TypedDict

from langchain.chat_models import init_chat_model
from langchain.messages import AnyMessage
from langchain_core.messages.utils import count_tokens_approximately
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.checkpoint.memory import InMemorySaver
from langmem.short_term import SummarizationNode, RunningSummary # [!code highlight]

model = init_chat_model("claude-sonnet-4-6")
summarization_model = model.bind(max_tokens=128)

class State(MessagesState):
    context: dict[str, RunningSummary] # [!code highlight]

class LLMInputState(TypedDict): # [!code highlight]
    summarized_messages: list[AnyMessage]
    context: dict[str, RunningSummary]

summarization_node = SummarizationNode( # [!code highlight]
    token_counter=count_tokens_approximately,
    model=summarization_model,
    max_tokens=256,
    max_tokens_before_summary=256,
    max_summary_tokens=128,
)

def call_model(state: LLMInputState): # [!code highlight]
    response = model.invoke(state["summarized_messages"])
    return {"messages": [response]}

checkpointer = InMemorySaver()
builder = StateGraph(State)
builder.add_node(call_model)
builder.add_node("summarize", summarization_node) # [!code highlight]
builder.add_edge(START, "summarize")
builder.add_edge("summarize", "call_model")
graph = builder.compile(checkpointer=checkpointer)

# Invoke the graph
config = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": "hi, my name is bob"}, config)
graph.invoke({"messages": "write a short poem about cats"}, config)
graph.invoke({"messages": "now do the same but for dogs"}, config)
final_response = graph.invoke({"messages": "what's my name?"}, config)

final_response["messages"][-1].pretty_print()
print("\nSummary:", final_response["context"]["running_summary"].summary)
```

--------------------------------

### Full example: Stream from subgraphs with v2

Source: https://docs.langchain.com/oss/python/langgraph/use-subgraphs

Provides a comprehensive example of defining and compiling a parent graph with a subgraph, then streaming its outputs using `subgraphs=True` and `version="v2"`. It prints the namespace and data for each streamed update, demonstrating how subgraph data is integrated.

```python
from typing_extensions import TypedDict
from langgraph.graph.state import StateGraph, START

# Define subgraph
class SubgraphState(TypedDict):
    foo: str
    bar: str

def subgraph_node_1(state: SubgraphState):
    return {"bar": "bar"}

def subgraph_node_2(state: SubgraphState):
    # Note that this node uses a state key ('bar') that is only available in the subgraph
    # and sends an update on the shared state key ('foo')
    return {"foo": state["foo"] + state["bar"]}

subgraph_builder = StateGraph(SubgraphState)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph = subgraph_builder.compile()

# Define parent graph
class ParentState(TypedDict):
    foo: str

def node_1(state: ParentState):
    return {"foo": "hi! " + state["foo"]}

builder = StateGraph(ParentState)
builder.add_node("node_1", node_1)
builder.add_node("node_2", subgraph)
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
graph = builder.compile()

for chunk in graph.stream(
    {"foo": "foo"},
    stream_mode="updates",
    subgraphs=True, # [!code highlight]
    version="v2", # [!code highlight]
):
    if chunk["type"] == "updates":
        print(chunk["ns"], chunk["data"])
```

--------------------------------

### Async MongoDB Checkpointer Example

Source: https://docs.langchain.com/oss/python/langgraph/add-memory

Use this checkpointer for asynchronous operations with MongoDB-backed state persistence. Ensure a MongoDB instance is running and accessible.
```python
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.mongodb.aio import AsyncMongoDBSaver # [!code highlight]

model = init_chat_model(model="claude-haiku-4-5-20251001")

DB_URI = "localhost:27017"
async with AsyncMongoDBSaver.from_conn_string(DB_URI) as checkpointer: # [!code highlight]
    async def call_model(state: MessagesState):
        response = await model.ainvoke(state["messages"])
        return {"messages": response}

    builder = StateGraph(MessagesState)
    builder.add_node(call_model)
    builder.add_edge(START, "call_model")
    graph = builder.compile(checkpointer=checkpointer) # [!code highlight]

    config = {
        "configurable": {
            "thread_id": "1" # [!code highlight]
        }
    }

    async for chunk in graph.astream(
        {"messages": [{"role": "user", "content": "hi! I'm bob"}]},
        config, # [!code highlight]
        stream_mode="values",
    ):
        chunk["messages"][-1].pretty_print()

    async for chunk in graph.astream(
        {"messages": [{"role": "user", "content": "what's my name?"}]},
        config, # [!code highlight]
        stream_mode="values",
    ):
        chunk["messages"][-1].pretty_print()
```

--------------------------------

### Install LangGraph and Dependencies

Source: https://docs.langchain.com/oss/python/langgraph/agentic-rag

Installs the necessary packages for building RAG agents with LangGraph: LangGraph itself, the OpenAI integration, LangChain community components, text splitters, and BeautifulSoup (`bs4`). This command ensures all required libraries are available for the tutorial.

```bash
pip install -U langgraph "langchain[openai]" langchain-community langchain-text-splitters bs4
```

--------------------------------

### Install LangChain and LangGraph Dependencies

Source: https://docs.langchain.com/oss/python/langgraph/sql-agent

Installs the necessary Python packages for LangChain, LangGraph, and LangChain Community. These libraries are fundamental for building LLM applications and agents.
```bash
pip install langchain langgraph langchain-community
```

--------------------------------

### Cycle in Pregel Graph Example - Python

Source: https://docs.langchain.com/oss/python/langgraph/pregel

Demonstrates creating a cycle in a Pregel graph where a node writes to the channel it subscribes to. The loop runs until the node returns `None`: because `skip_none=True`, that write is dropped, the channel receives no update, and execution stops. Dependencies include `EphemeralValue`, `Pregel`, `NodeBuilder`, and `ChannelWriteEntry`.

```python
from langgraph.channels import EphemeralValue, ChannelWriteEntry
from langgraph.pregel import Pregel, NodeBuilder

example_node = (
    NodeBuilder().subscribe_only("value")
    .do(lambda x: x + x if len(x) < 10 else None)
    .write_to(ChannelWriteEntry("value", skip_none=True))
)

app = Pregel(
    nodes={"example_node": example_node},
    channels={
        "value": EphemeralValue(str),
    },
    input_channels=["value"],
    output_channels=["value"],
)

app.invoke({"value": "a"})
```

--------------------------------

### Install Anthropic LangChain Integration

Source: https://docs.langchain.com/oss/python/langgraph/sql-agent

Installs the necessary package for integrating Anthropic models with LangChain. This is required for using Anthropic chat models like Claude.

```shell
pip install -U "langchain[anthropic]"
```
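--------------------------------

### How `operator.add` state reducers merge updates (illustrative)

Several snippets above declare state fields like `messages: Annotated[list[AnyMessage], operator.add]`. LangGraph reads the reducer function out of the `Annotated` metadata and uses it to merge each node's partial return value into the existing state. The stdlib-only sketch below mimics that merge step for illustration only; `apply_update` is a hypothetical helper, not part of LangGraph's API, and LangGraph's real implementation differs.

```python
import operator
from typing import Annotated, TypedDict, get_args, get_type_hints

class MessagesState(TypedDict):
    # The second Annotated argument is the reducer used to merge updates
    messages: Annotated[list, operator.add]

def apply_update(state: dict, update: dict) -> dict:
    """Hypothetical sketch: merge a node's partial update into state,
    applying the reducer attached via Annotated metadata when present."""
    hints = get_type_hints(MessagesState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        args = get_args(hints[key])  # e.g. (list, operator.add) for Annotated fields
        reducer = args[1] if len(args) > 1 else None
        # With a reducer, combine old and new; otherwise overwrite
        merged[key] = reducer(merged[key], value) if reducer else value
    return merged

state = {"messages": [{"role": "user", "content": "hi"}]}
state = apply_update(state, {"messages": [{"role": "ai", "content": "hello"}]})
print(len(state["messages"]))  # 2: operator.add concatenated the lists
```

This is why nodes in the examples above return only `{"messages": [response]}` rather than the full history: the reducer appends the new messages instead of replacing the list.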