### Set up LangChain Documentation for Local Development (Bash)

Source: https://docs.langchain.com/oss/python/contributing/documentation

This series of commands provides a quick start guide for setting up the LangChain documentation repository locally. It involves cloning the repository, navigating into the directory, installing necessary dependencies, and starting a development server for live preview.

```bash
git clone https://github.com/langchain-ai/docs.git
cd docs
make install
make dev
```

--------------------------------

### Configure Language Model with Parameters

Source: https://docs.langchain.com/oss/python/langchain/quickstart

Initializes a language model using `init_chat_model` with specific configuration parameters, including `temperature` for response randomness, `timeout` for request limits, and `max_tokens` for response length. This setup ensures consistent and controlled model behavior for the agent.

```python
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0.5,
    timeout=10,
    max_tokens=1000
)
```

--------------------------------

### Full Example: Filesystem-based Claude Text Editor (Python)

Source: https://docs.langchain.com/oss/python/integrations/middleware/anthropic

This example demonstrates the setup of `FilesystemClaudeTextEditorMiddleware` using a temporary directory for the `root_path`. It shows how to initialize an agent that can interact with the local filesystem, enabling file creation and modification on disk. (The original snippet was cut off after `middleware=[`; the middleware entry below is reconstructed from the description.)

```python
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeTextEditorMiddleware
from langchain.agents import create_agent

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="editor-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        # Reconstructed: the original snippet was truncated here.
        FilesystemClaudeTextEditorMiddleware(root_path=workspace),
    ],
)
```

--------------------------------

### Install sample data package for Graph RAG examples

Source: https://docs.langchain.com/oss/python/integrations/retrievers/graph_rag

Installs the `graph_rag_example_helpers` package, which provides sample animal data necessary for running the examples demonstrated in this guide.

```bash
pip install -qU graph_rag_example_helpers
```

--------------------------------

### Install and setup Runloop sandbox backend

Source: https://docs.langchain.com/oss/python/deepagents/data-analysis

Install the `langchain-runloop` package and initialize a Runloop sandbox backend using the Runloop SDK with an API key for authentication.

```bash
pip install langchain-runloop
```

```bash
uv add langchain-runloop
```

```python
from runloop_api_client import RunloopSDK
from langchain_runloop import RunloopSandbox

api_key = "..."
client = RunloopSDK(bearer_token=api_key)
devbox = client.devbox.create()
backend = RunloopSandbox(devbox=devbox)
```

--------------------------------

### Full Example: State-based Claude Text Editor (Python)

Source: https://docs.langchain.com/oss/python/integrations/middleware/anthropic

This comprehensive example illustrates the setup and usage of `StateClaudeTextEditorMiddleware` with an agent. It demonstrates how to configure `allowed_path_prefixes` and invoke the agent to create a file, with file contents managed within the LangGraph state.
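Conceptually, the `thread_id`-scoped persistence used below can be pictured as a dictionary of per-thread histories. The following is a framework-free toy sketch of that idea only; it is not the actual `MemorySaver` implementation, and all names in it are illustrative:

```python
# Toy illustration of thread-scoped persistence: each thread_id keys
# its own message history, so separate conversations do not mix.
# This is NOT MemorySaver; it only mirrors the idea.
class ToyCheckpointer:
    def __init__(self):
        self._threads = {}

    def append(self, thread_id, message):
        self._threads.setdefault(thread_id, []).append(message)

    def history(self, thread_id):
        return list(self._threads.get(thread_id, []))

saver = ToyCheckpointer()
saver.append("my-session", {"role": "user", "content": "Create /project/hello.py"})
saver.append("my-session", {"role": "ai", "content": "Created the file."})
saver.append("other-session", {"role": "user", "content": "Hello"})

print(len(saver.history("my-session")))     # 2
print(len(saver.history("other-session")))  # 1
```

Invoking the real agent twice with the same `thread_id` is what lets the second call see files created in the first.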
```python from langchain_anthropic import ChatAnthropic from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware from langchain.agents import create_agent from langchain_core.runnables import RunnableConfig from langgraph.checkpoint.memory import MemorySaver agent = create_agent( model=ChatAnthropic(model="claude-sonnet-4-5-20250929"), tools=[], middleware=[ StateClaudeTextEditorMiddleware( # [!code highlight] allowed_path_prefixes=["/project"], # [!code highlight] ), # [!code highlight] ], checkpointer=MemorySaver(), ) # Use a thread_id to persist state across invocations config: RunnableConfig = {"configurable": {"thread_id": "my-session"}} # Claude can now create and edit files (stored in LangGraph state) result = agent.invoke( {"messages": [{"role": "user", "content": "Create a file at /project/hello.py with a simple hello world program"}]}, config=config, ) print(result["messages"][-1].content) ``` ```text I've created a simple "Hello, World!" program at `/project/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed. ``` -------------------------------- ### Install LangChain Qwen Integration Package Source: https://docs.langchain.com/oss/python/integrations/chat/qwen This command installs the `langchain-qwq` package, which provides the necessary integration for using ChatQwen models with LangChain. The `-qU` flags ensure a quiet and upgraded installation. ```bash pip install -qU langchain-qwq ``` -------------------------------- ### Install and setup Daytona sandbox backend Source: https://docs.langchain.com/oss/python/deepagents/data-analysis Install langchain-daytona package and initialize a Daytona sandbox backend for sandboxed code execution. Includes verification of sandbox readiness. 
```bash
pip install langchain-daytona
```

```bash
uv add langchain-daytona
```

```python
from daytona import Daytona
from langchain_daytona import DaytonaSandbox

sandbox = Daytona().create()
backend = DaytonaSandbox(sandbox=sandbox)
```

```python
result = backend.execute("echo ready")
print(result)
# ExecuteResponse(output='ready', exit_code=0, ...)
```

--------------------------------

### Docstring with Minimal Example and Guide Link - Python

Source: https://docs.langchain.com/oss/python/contributing/code

Shows the recommended pattern for including a single minimal example in docstrings with a link to comprehensive guides for additional variations. This avoids docstring bloat while providing users with both quick reference and detailed documentation.

````python
"""
Example:

```python
message = HumanMessage(content=[
    {"type": "image", "url": "https://example.com/image.jpg"}
])
```

See the [multimodal guide](https://docs.langchain.com/oss/integrations/chat/anthropic#multimodal)
for all supported input formats.
"""
````

--------------------------------

### Define and Invoke a Single Node Pregel Application in Python

Source: https://docs.langchain.com/oss/python/langgraph/pregel

This example demonstrates the most basic Pregel setup with a single node. It shows how to define a node that subscribes to an input channel, performs an operation, and writes to an output channel, then invokes the application with an input.
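Before looking at the real Pregel code below, the mechanics can be pictured in plain Python: a node reads from one channel, applies a function, and writes the result to another channel. This is an illustrative sketch only, not the Pregel implementation:

```python
# Toy single-node "graph": read channel "a", apply the node function,
# write channel "b". Mirrors the shape of the Pregel example below
# without the framework.
channels = {"a": None, "b": None}

def node1(x):
    return x + x  # same operation as the Pregel node's lambda

def invoke(inputs):
    channels.update(inputs)                # write input channel(s)
    channels["b"] = node1(channels["a"])   # run the node
    return {"b": channels["b"]}            # read output channel(s)

print(invoke({"a": "foo"}))  # {'b': 'foofoo'}
```

The real Pregel runtime generalizes this: many nodes run in supervised steps, each reading and writing declared channels.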
```python
from langgraph.channels import EphemeralValue
from langgraph.pregel import Pregel, NodeBuilder

node1 = (
    NodeBuilder().subscribe_only("a")
    .do(lambda x: x + x)
    .write_to("b")
)

app = Pregel(
    nodes={"node1": node1},
    channels={
        "a": EphemeralValue(str),
        "b": EphemeralValue(str),
    },
    input_channels=["a"],
    output_channels=["b"],
)

app.invoke({"a": "foo"})
```

```text
{'b': 'foofoo'}
```

--------------------------------

### Install LangGraph Python Library

Source: https://docs.langchain.com/oss/python/langgraph

This command installs the LangGraph Python library using pip, ensuring the latest version is installed. It is the first step to setting up a LangGraph development environment for building stateful agents and workflows.

```bash
pip install -U langgraph
```

--------------------------------

### Define LangChain Tool with Input Examples for Anthropic

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

This snippet demonstrates how to provide usage examples for complex tools using the `extras` parameter's `input_examples` field when defining a LangChain tool. These examples help Anthropic's Claude models better understand how to correctly invoke the tool based on user queries, improving tool reliability and performance.

```python
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool

@tool(
    extras={
        "input_examples": [
            {
                "query": "weather report",
                "location": "San Francisco",
                "format": "detailed"
            },
            {
                "query": "temperature",
                "location": "New York",
                "format": "brief"
            }
        ]
    }
)
def search_weather_data(query: str, location: str, format: str = "brief") -> str:
    """Search weather database with specific query and format preferences.

    Args:
        query: The type of weather information to retrieve
        location: City or region to search
        format: Output format, either 'brief' or 'detailed'
    """
    return f"{format.title()} {query} for {location}: Data found"

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
model_with_tools = model.bind_tools([search_weather_data])

response = model_with_tools.invoke(
    "Get me a detailed weather report for Seattle"
)
```

--------------------------------

### Building a LangGraph with StateGraph, add_node, and add_edge

Source: https://docs.langchain.com/oss/python/langgraph/use-graph-api

This comprehensive example demonstrates initializing a `StateGraph` with a defined state, adding individual node functions using `add_node`, and then explicitly defining the control flow between them using `add_edge`. It also sets the graph's entry point with `START`.

```python
from langgraph.graph import START, StateGraph

builder = StateGraph(State)

# Add nodes
builder.add_node(step_1)
builder.add_node(step_2)
builder.add_node(step_3)

# Add edges
builder.add_edge(START, "step_1")
builder.add_edge("step_1", "step_2")
builder.add_edge("step_2", "step_3")
```

--------------------------------

### Install LangChain Core Library

Source: https://docs.langchain.com/oss/python/integrations/chat/nvidia_ai_endpoints

This command installs the `langchain` package with the `-qU` flags for quiet and upgrade. It ensures that the necessary core library is available in the Python environment for running the subsequent LangChain examples.

```bash
pip install -qU langchain
```

--------------------------------

### Create and Run Weather Forecasting Agent

Source: https://docs.langchain.com/oss/python/langchain/quickstart

Assembles the complete agent by combining the model, system prompt, tools, context schema, response format, and checkpointer. Then invokes the agent with a user query and thread configuration to maintain conversation state. Returns the structured response from the agent.
```python
from langchain.agents.structured_output import ToolStrategy

agent = create_agent(
    model=model,
    system_prompt=SYSTEM_PROMPT,
    tools=[get_user_location, get_weather_for_location],
    context_schema=Context,
    response_format=ToolStrategy(ResponseFormat),
    checkpointer=checkpointer
)

# `thread_id` is a unique identifier for a given conversation.
config = {"configurable": {"thread_id": "1"}}

response = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
    config=config,
    context=Context(user_id="1")
)

print(response['structured_response'])
```

--------------------------------

### Define a Basic LangGraph 'Hello World' Agent

Source: https://docs.langchain.com/oss/python/langgraph/overview

This Python example demonstrates how to create a minimal 'hello world' agent using LangGraph's `StateGraph`. It defines a mock LLM function, sets up a graph with a single node, connects the start and end points, compiles the graph, and shows how to invoke it to get a response.

```python
from langgraph.graph import StateGraph, MessagesState, START, END

def mock_llm(state: MessagesState):
    return {"messages": [{"role": "ai", "content": "hello world"}]}

graph = StateGraph(MessagesState)
graph.add_node(mock_llm)
graph.add_edge(START, "mock_llm")
graph.add_edge("mock_llm", END)
graph = graph.compile()

graph.invoke({"messages": [{"role": "user", "content": "hi!"}]})
```

--------------------------------

### Initialize Deep Agent with Research Instructions

Source: https://docs.langchain.com/oss/python/deepagents/quickstart

Create a deep agent instance configured with research-focused system instructions and the internet search tool. The agent uses these instructions to guide its behavior as an expert researcher with access to web search capabilities.

```python
# System prompt to steer the agent to be an expert researcher
research_instructions = """You are an expert researcher. Your job is to conduct thorough research and then write a polished report.

You have access to an internet search tool as your primary means of gathering information.

## `internet_search`

Use this to run an internet search for a given query. You can specify the max number of results to return, the topic, and whether raw content should be included.
"""

agent = create_deep_agent(
    tools=[internet_search],
    system_prompt=research_instructions
)
```

--------------------------------

### Create New Agent Chat UI Project (Bash)

Source: https://docs.langchain.com/oss/python/langchain/ui

This command-line snippet demonstrates how to initialize a new Agent Chat UI project using `npx create-agent-chat-app`. It then navigates into the newly created project directory, installs its dependencies with `pnpm`, and starts the local development server. This method is ideal for starting a fresh Agent Chat UI application from scratch.

```bash
npx create-agent-chat-app --project-name my-chat-ui
cd my-chat-ui
pnpm install
pnpm dev
```

--------------------------------

### Start Milvus Server with Docker

Source: https://docs.langchain.com/oss/python/integrations/vectorstores/milvus

Download and execute the Milvus standalone Docker setup script to start a Milvus server instance. This enables deployment of a more performant Milvus server suitable for large-scale vector data (millions of vectors).

```bash
!curl -sfL https://raw.githubusercontent.com/milvus-io/milvus/master/scripts/standalone_embed.sh -o standalone_embed.sh
!bash standalone_embed.sh start
```

--------------------------------

### Install Required Dependencies for LangChain RAG Example

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

Bash command to install the required Python packages for the end-to-end LangChain and LangGraph agentic RAG example. Installs langchain-openai for embeddings and chat models, and numpy for numerical operations.
```bash
pip install langchain-openai numpy
```

--------------------------------

### Configure Deep Agent with Skills using StateBackend, StoreBackend, or FilesystemBackend (Python)

Source: https://docs.langchain.com/oss/python/deepagents/skills

This set of Python examples demonstrates how to initialize a `deepagents` agent and provide it with skills using three different backend configurations. It covers the default `StateBackend` where skills are passed during invocation, `StoreBackend` which uses an in-memory store, and `FilesystemBackend` for loading skills from a local directory. Each example shows the necessary imports and agent initialization with skill paths and backend-specific configurations.

```python
from urllib.request import urlopen

from deepagents import create_deep_agent
from deepagents.backends.utils import create_file_data
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()

skill_url = "https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/libs/cli/examples/skills/langgraph-docs/SKILL.md"
with urlopen(skill_url) as response:
    skill_content = response.read().decode('utf-8')

skills_files = {
    "/skills/langgraph-docs/SKILL.md": create_file_data(skill_content)
}

agent = create_deep_agent(
    skills=["./skills/"],
    checkpointer=checkpointer,
)

result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What is langgraph?"
            }
        ],
        "files": skills_files
    },
    config={"configurable": {"thread_id": "12345"}},
)
```

```python
from urllib.request import urlopen

from deepagents import create_deep_agent
from deepagents.backends import StoreBackend
from deepagents.backends.utils import create_file_data
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

skill_url = "https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/libs/cli/examples/skills/langgraph-docs/SKILL.md"
with urlopen(skill_url) as response:
    skill_content = response.read().decode('utf-8')

store.put(
    namespace=("filesystem",),
    key="/skills/langgraph-docs/SKILL.md",
    value=create_file_data(skill_content)
)

agent = create_deep_agent(
    backend=(lambda rt: StoreBackend(rt)),
    store=store,
    skills=["/skills/"]
)

result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What is langgraph?"
            }
        ]
    },
    config={"configurable": {"thread_id": "12345"}},
)
```

```python
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver
from deepagents.backends.filesystem import FilesystemBackend

# Checkpointer is REQUIRED for human-in-the-loop
checkpointer = MemorySaver()

agent = create_deep_agent(
    backend=FilesystemBackend(root_dir="/Users/user/{project}"),
    skills=["/Users/user/{project}/skills/"],
    interrupt_on={
        "write_file": True,   # Default: approve, edit, reject
        "read_file": False,   # No interrupts needed
        "edit_file": True     # Default: approve, edit, reject
    },
    checkpointer=checkpointer  # Required!
)

result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What is langgraph?"
            }
        ]
    },
    config={"configurable": {"thread_id": "12345"}},
)
```

--------------------------------

### Start SurrealDB Locally

Source: https://docs.langchain.com/oss/python/integrations/vectorstores/surrealdb

Start a SurrealDB instance locally using the surreal command-line tool with in-memory storage. Requires SurrealDB to be installed on your system.
```bash
surreal start -u root -p root
```

--------------------------------

### Define System Prompt for Weather Forecasting Agent

Source: https://docs.langchain.com/oss/python/langchain/quickstart

Creates a system prompt that defines the agent's role as a weather forecaster with access to specific tools. The prompt instructs the agent on how to use available tools to retrieve weather and user location information. This serves as the foundational instruction set for agent behavior.

```python
SYSTEM_PROMPT = """You are an expert weather forecaster, who speaks in puns.

You have access to two tools:

- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location

If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location."""
```

--------------------------------

### Instantiate Framework-Specific ChatOCIModelDeploymentVLLM for OCI Model Deployment in Python

Source: https://docs.langchain.com/oss/python/integrations/chat/oci_data_science

This example illustrates how to instantiate `ChatOCIModelDeploymentVLLM`, a framework-specific class tailored for OCI Model Deployment. This approach is suitable when working with particular frameworks (e.g., vLLM), enabling direct parameter passing through the constructor for a more streamlined setup process.

```python
from langchain_community.chat_models import ChatOCIModelDeploymentVLLM

# Create an instance of OCI Model Deployment Endpoint
# Replace the endpoint uri with your own
# Using framework specific class as entry point, you will
# be able to pass model parameters in constructor.
chat = ChatOCIModelDeploymentVLLM(
    endpoint="https://modeldeployment..oci.customer-oci.com//predict"
)
```

--------------------------------

### Start Docker Container and Install ManticoreSearch Dependencies

Source: https://docs.langchain.com/oss/python/integrations/vectorstores/manticore_search

IPython script to start a ManticoreSearch Docker container, wait for initialization, and install the `manticore-columnar-lib` package. This setup is required for vector search functionality in ManticoreSearch dev versions. The script uses Docker commands to manage the container and install system packages as root. (The original queried the container ID before starting the container, which fails on a fresh machine; the ID is now re-queried after startup.)

```python
import time

# Start container (if not already running)
containers = !docker ps --filter "name=langchain-manticoresearch-server" -q
if len(containers) == 0:
    !docker run -d -p 9308:9308 --name langchain-manticoresearch-server manticoresearch/manticore:dev
    time.sleep(20)  # Wait for the container to start up
    # Re-query so the ID of the freshly started container is picked up
    containers = !docker ps --filter "name=langchain-manticoresearch-server" -q

# Get ID of container
container_id = containers[0]

# Install manticore-columnar-lib package as root user
!docker exec -it --user 0 {container_id} apt-get update
!docker exec -it --user 0 {container_id} apt-get install -y manticore-columnar-lib
```

--------------------------------

### End-to-End Agentic RAG with LangChain Vector Store and LangGraph

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

Complete example demonstrating vector store setup with sample HR and benefits documents, a filtered retrieval tool that accepts category parameters, agent creation with Claude Haiku, and streaming agent invocation. Requires langchain-openai and numpy packages, and the OPENAI_API_KEY environment variable.
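The category filter at the heart of the retrieval tool below is ordinary Python. As a standalone illustration, using plain dicts instead of `Document` objects and data abbreviated from the example:

```python
# Standalone version of the metadata filter used in the retrieval tool:
# keep only documents whose "category" metadata matches the request.
docs = [
    {"page_content": "Submit a leave request form.", "metadata": {"category": "HR Policy"}},
    {"page_content": "Requests reviewed in 3 business days.", "metadata": {"category": "HR Policy"}},
    {"page_content": "20 paid vacation days per year.", "metadata": {"category": "Benefits Policy"}},
]

def filter_by_category(docs, category):
    return [d for d in docs if d["metadata"].get("category") == category]

print(len(filter_by_category(docs, "HR Policy")))        # 2
print(len(filter_by_category(docs, "Benefits Policy")))  # 1
```

In the full example, the same predicate is passed as `filter=` to `similarity_search`, so filtering happens before similarity ranking.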
```python
from typing import Literal

from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent

# Set up vector store
# Ensure you set your OPENAI_API_KEY environment variable
embeddings = init_embeddings("openai:text-embedding-3-small")
vector_store = InMemoryVectorStore(embeddings)

document_1 = Document(
    id="1",
    page_content=(
        "To request vacation days, submit a leave request form through the "
        "HR portal. Approval will be sent by email."
    ),
    metadata={
        "category": "HR Policy",
        "doc_title": "Leave Policy",
        "provenance": "Leave Policy - page 1",
    },
)

document_2 = Document(
    id="2",
    page_content="Managers will review vacation requests within 3 business days.",
    metadata={
        "category": "HR Policy",
        "doc_title": "Leave Policy",
        "provenance": "Leave Policy - page 2",
    },
)

document_3 = Document(
    id="3",
    page_content=(
        "Employees with over 6 months tenure are eligible for 20 paid vacation days "
        "per year."
    ),
    metadata={
        "category": "Benefits Policy",
        "doc_title": "Benefits Guide 2025",
        "provenance": "Benefits Policy - page 1",
    },
)

documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents)

# Define tool
async def retrieval_tool(
    query: str, category: Literal["HR Policy", "Benefits Policy"]
) -> list[dict]:
    """Access my knowledge base."""

    def _filter_function(doc: Document) -> bool:
        return doc.metadata.get("category") == category

    results = vector_store.similarity_search(
        query=query, k=2, filter=_filter_function
    )
    return [
        {
            "type": "search_result",
            "title": doc.metadata["doc_title"],
            "source": doc.metadata["provenance"],
            "citations": {"enabled": True},
            "content": [{"type": "text", "text": doc.page_content}],
        }
        for doc in results
    ]

# Create agent
model = init_chat_model("claude-haiku-4-5-20251001")
checkpointer = InMemorySaver()
agent = create_agent(model, [retrieval_tool], checkpointer=checkpointer)

# Invoke on a query
config = {"configurable": {"thread_id": "session_1"}}
input_message = {
    "role": "user",
    "content": "How do I request vacation days?",
}
async for step in agent.astream(
    {"messages": [input_message]},
    config,
    stream_mode="values",
):
    step["messages"][-1].pretty_print()
```

--------------------------------

### Initialize WebSearchTool for Real-time Web Search (Python)

Source: https://docs.langchain.com/oss/python/integrations/tools/writer

This example demonstrates the initialization of the `WebSearchTool` to provide real-time web search capabilities to the AI assistant. It includes optional configuration to specify domains for inclusion and exclusion, allowing fine-grained control over search results and access to current information.
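The include/exclude domain options below amount to URL filtering. A plain-Python sketch of what such filtering logic looks like (illustrative only; this is not the tool's actual implementation):

```python
from urllib.parse import urlparse

# Toy domain filter mirroring include_domains / exclude_domains:
# a result passes if its host matches an included domain and no
# excluded one. Subdomains of a listed domain also match.
def allowed(url, include_domains, exclude_domains):
    host = urlparse(url).netloc
    if any(host == d or host.endswith("." + d) for d in exclude_domains):
        return False
    return any(host == d or host.endswith("." + d) for d in include_domains)

print(allowed("https://en.wikipedia.org/wiki/LangChain", ["wikipedia.org"], ["quora.com"]))  # True
print(allowed("https://www.quora.com/some-question", ["wikipedia.org"], ["quora.com"]))      # False
```

The real tool applies this kind of constraint server-side, so only results from permitted domains reach the model.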
```python
from langchain_writer.tools import WebSearchTool

# Initialize the web search tool with optional configuration
web_search_tool = WebSearchTool(
    include_domains=["wikipedia.org", "github.com", "techcrunch.com"],
    exclude_domains=["quora.com"]
)
```

--------------------------------

### Create and Run a LangChain Agent in Python

Source: https://docs.langchain.com/oss/python/langchain/overview

This snippet demonstrates how to set up and execute a basic LangChain agent using Python. It covers installing required libraries, defining a custom tool, initializing an agent with a specific model and tools using `create_agent`, and invoking the agent with a user query to get a response.

```python
# pip install -qU langchain "langchain[anthropic]"
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
```

--------------------------------

### Invoke LangChain ChatAnthropic Model Synchronously in Python

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

This example demonstrates a synchronous invocation of the `ChatAnthropic` model. It prepares a list of messages (system and human) and uses the `model.invoke()` method to get a single, complete response from the model.

```python
messages = [
    (
        "system",
        "You are a helpful translator. Translate the user sentence to French.",
    ),
    (
        "human",
        "I love programming.",
    ),
]
ai_msg = model.invoke(messages)
```

```python
print(ai_msg.text)
```

--------------------------------

### Full Example: Implementing Filesystem-based Memory with Claude Agent in LangChain (Python)

Source: https://docs.langchain.com/oss/python/integrations/middleware/anthropic

This example demonstrates configuring a Claude agent with `FilesystemClaudeMemoryMiddleware` for persistent memory. It highlights the use of a temporary directory for demonstration purposes and the importance of a persistent directory in production environments for storing agent memory. (The original snippet was cut off after the imports; the agent setup below is reconstructed by analogy with the filesystem text-editor example, and the `root_path` parameter is an assumption.)

```python
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeMemoryMiddleware
from langchain.agents import create_agent

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="memory-workspace-")

# Reconstructed from here on; the original snippet was truncated.
agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[],
    middleware=[
        FilesystemClaudeMemoryMiddleware(root_path=workspace),
    ],
)
```

--------------------------------

### Create Basic Agent with LangChain and Claude

Source: https://docs.langchain.com/oss/python/langchain/quickstart

Demonstrates how to create a simple AI agent using LangChain's `create_agent` function with Claude Sonnet 4.5 as the language model. The agent includes a weather tool and system prompt, then invokes it with a user query. Requires the `ANTHROPIC_API_KEY` environment variable to be set.

```python
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
```

--------------------------------

### Define Agent Entrypoint with Tool Calling Loop

Source: https://docs.langchain.com/oss/python/langgraph/quickstart

Create an entrypoint-decorated function that orchestrates the agent workflow.
It calls the LLM, checks for tool calls, executes tools in parallel, and loops until the model stops requesting tools. Returns the final message list. (The `BaseMessage` import, missing from the original snippet, has been added.)

```python
from langgraph.func import entrypoint
from langgraph.graph import add_messages
from langchain.messages import BaseMessage, HumanMessage

@entrypoint()
def agent(messages: list[BaseMessage]):
    model_response = call_llm(messages).result()

    while True:
        if not model_response.tool_calls:
            break

        # Execute tools
        tool_result_futures = [
            call_tool(tool_call) for tool_call in model_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]
        messages = add_messages(messages, [model_response, *tool_results])
        model_response = call_llm(messages).result()

    messages = add_messages(messages, model_response)
    return messages

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
for chunk in agent.stream(messages, stream_mode="updates"):
    print(chunk)
    print("\n")
```

--------------------------------

### Start of End-to-End Example in LangChain Python

Source: https://docs.langchain.com/oss/python/integrations/document_loaders/docling

This snippet marks the beginning of an end-to-end example for Docling integration within a LangChain application. It starts by importing the `os` module, which is commonly used for interacting with the operating system, such as managing file paths or environment variables.

```python
import os
```

--------------------------------

### Example Output: Anthropic Memory Tool Call

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

This Python snippet shows an example of the structured output from a `ChatAnthropic` model after an initial human message. It illustrates how the model responds with both text and a `tool_call` to the `memory` tool, specifying the command and path for memory retrieval.
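Since the model's `content` is a list of typed blocks, individual block kinds can be pulled out with a plain list comprehension. A generic sketch over the block structure shown below (the data here is abbreviated and the id is a placeholder):

```python
# Generic sketch: given a content list of typed blocks (as in the
# example output below), pick out the tool_call blocks.
content = [
    {"type": "text", "text": "I'll check my memory."},
    {"type": "tool_call", "name": "memory",
     "args": {"command": "view", "path": "/memories"}, "id": "toolu_example"},
]

tool_calls = [b for b in content if b["type"] == "tool_call"]
print(tool_calls[0]["args"]["command"])  # view
```

The same pattern works for extracting `text` or `reasoning` blocks from responses with extended thinking enabled.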
```python [{'type': 'text', 'text': "I'll check my memory to see what information I have about your interests."}, {'type': 'tool_call', 'name': 'memory', 'args': {'command': 'view', 'path': '/memories'}, 'id': 'toolu_01XeP9sxx44rcZHFNqXSaKqh'}] ``` -------------------------------- ### Install LangChain Text Splitters for Anthropic Integration (Bash) Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic This command installs the `langchain-text-splitters` Python package, which is essential for using LangChain's text splitting functionalities. These splitters are used to prepare content into manageable chunks before sending them to Anthropic models as custom document types. ```bash pip install langchain-text-splitters ``` -------------------------------- ### Install required Python packages for OpenCLIP and SingleStore integration Source: https://docs.langchain.com/oss/python/integrations/vectorstores/singlestore This snippet provides the necessary `pip install` command to set up the Python environment. It installs `langchain`, `openai`, `langchain-singlestore`, and `langchain-experimental` to enable multi-modal embedding and vector store functionalities. ```python pip install -U langchain openai lanchain-singlestore langchain-experimental ``` -------------------------------- ### Create Tools with Runtime Context in Python Source: https://docs.langchain.com/oss/python/langchain/quickstart Defines two tools for the agent: `get_weather_for_location` to fetch weather data and `get_user_location` to retrieve user information using runtime context. The `get_user_location` tool demonstrates how to access custom runtime context through the `ToolRuntime` parameter, enabling context-aware tool behavior. ```python from dataclasses import dataclass from langchain.tools import tool, ToolRuntime @tool def get_weather_for_location(city: str) -> str: """Get weather for a given city.""" return f"It's always sunny in {city}!" 
@dataclass class Context: """Custom runtime context schema.""" user_id: str @tool def get_user_location(runtime: ToolRuntime[Context]) -> str: """Retrieve user information based on user ID.""" user_id = runtime.context.user_id return "Florida" if user_id == "1" else "SF" ``` -------------------------------- ### Example Output of Claude Extended Thinking (JSON) Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic This JSON snippet provides an example of the structured output received when Claude's extended thinking feature is enabled. It shows how the response is broken into `reasoning` and `text` content blocks, detailing the step-by-step thought process and the final answer. ```json [ { "type": "reasoning", "reasoning": "To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653.\n\nI can try to estimate this first. \n$3^3 = 27$\n$4^3 = 64$\n\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\n\nLet me try to compute this more precisely. I can use the cube root function:\n\ncube root of 50.653 = 50.653^(1/3)\n\nLet me calculate this:\n50.653^(1/3) \u2248 3.6998\n\nLet me verify:\n3.6998^3 \u2248 50.6533\n\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\n\nActually, let me compute this more precisely:\n50.653^(1/3) \u2248 3.69981\n\nLet me verify once more:\n3.69981^3 \u2248 50.652998\n\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.", "extras": {"signature": "ErUBCkYIBxgCIkB0UjV..."} }, { "type": "text", "text": "The cube root of 50.653 is approximately 3.6998.\n\nTo verify: 3.6998\u00b3 = 50.6530, which is very close to our original number." 
  }
]
```

--------------------------------

### Initialize SQL Agent with LangChain Python

Source: https://docs.langchain.com/oss/python/langchain/sql-agent

Create a sql_agent.py file that initializes a GPT-4.1 LLM, downloads a SQLite database, creates SQL tools via SQLDatabaseToolkit, and sets up an agent with a system prompt for safe SQL query generation and execution.

```python
# sql_agent.py for studio
import pathlib

from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
import requests

# Initialize an LLM
model = init_chat_model("gpt-4.1")

# Get the database, store it locally
url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("Chinook.db")

if local_path.exists():
    print(f"{local_path} already exists, skipping download.")
else:
    response = requests.get(url)
    if response.status_code == 200:
        local_path.write_bytes(response.content)
        print(f"File downloaded and saved as {local_path}")
    else:
        print(f"Failed to download the file. Status code: {response.status_code}")

db = SQLDatabase.from_uri("sqlite:///Chinook.db")

# Create the tools
toolkit = SQLDatabaseToolkit(db=db, llm=model)
tools = toolkit.get_tools()
for tool in tools:
    print(f"{tool.name}: {tool.description}\n")

# Use create_agent
system_prompt = """
You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer. Unless the user
specifies a specific number of examples they wish to obtain, always limit your
query to at most {top_k} results.

You can order the results by a relevant column to return the most interesting
examples in the database. Never query for all the columns from a specific table,
only ask for the relevant columns given the question.

You MUST double check your query before executing it. If you get an error while
executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the
database.

To start you should ALWAYS look at the tables in the database to see what you
can query. Do NOT skip this step.

Then you should query the schema of the most relevant tables.
""".format(
    dialect=db.dialect,
    top_k=5,
)

agent = create_agent(
    model,
    tools,
    system_prompt=system_prompt,
)
```

--------------------------------

### Example Output from ChatQwen Invocation

Source: https://docs.langchain.com/oss/python/integrations/chat/qwen

This text snippet shows an example of the `AIMessage` object returned by the `ChatQwen` model after an invocation. It includes the translated content, metadata about the response, and token usage statistics.

```text
AIMessage(content="J'adore la programmation.", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--40f2e75b-7d28-4a71-8f5f-561509ac2010-0', usage_metadata={'input_tokens': 32, 'output_tokens': 8, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})
```

--------------------------------

### Get language-specific separators for text splitting

Source: https://docs.langchain.com/oss/python/integrations/splitters/code_splitter

Retrieve the list of separators used by RecursiveCharacterTextSplitter for a specific language. The example shows Python separators, which include class definitions, function definitions, and various whitespace patterns.

```python
RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)
```

--------------------------------

### Set Up SQLDatabaseToolkit for Agent Interactions

Source: https://docs.langchain.com/oss/python/langchain/sql-agent

Initializes the SQLDatabaseToolkit with the database and language model, then retrieves and displays all available tools.
These tools enable the agent to query the database, inspect schemas, list tables, and validate SQL queries before execution.

```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit

toolkit = SQLDatabaseToolkit(db=db, llm=model)
tools = toolkit.get_tools()

for tool in tools:
    print(f"{tool.name}: {tool.description}\n")
```

--------------------------------

### Configure Code Execution Tool with Dictionary

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

Bind the code execution tool using a simple dictionary configuration instead of Anthropic type objects. This lightweight approach provides the same functionality with minimal setup.

```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
)

code_tool = {"type": "code_execution_20250825", "name": "code_execution"}
model_with_tools = model.bind_tools([code_tool])

response = model_with_tools.invoke(
    "Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
```

--------------------------------

### Configure LangChain PowerBI Toolkit with Few-Shot Prompts in Python

Source: https://docs.langchain.com/oss/python/integrations/tools/powerbi

Shows how to enhance the `PowerBIToolkit` by providing custom few-shot DAX examples, guiding the LLM to generate more accurate queries for specific question patterns. This re-initializes the toolkit and agent with the new examples.

```python
few_shots = """
Question: How many rows are in the table revenue?
DAX: EVALUATE ROW("Number of rows", COUNTROWS(revenue_details))
----
Question: How many rows are in the table revenue where year is not empty?
DAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(revenue_details, revenue_details[year] <> "")))
----
Question: What was the average of value in revenue in dollars?
DAX: EVALUATE ROW("Average", AVERAGE(revenue_details[dollar_value]))
----
"""

toolkit = PowerBIToolkit(
    powerbi=PowerBIDataset(
        dataset_id="",
        table_names=["table1", "table2"],
        credential=DefaultAzureCredential(),
    ),
    llm=smart_llm,
    examples=few_shots,
)

agent_executor = create_pbi_agent(
    llm=fast_llm,
    toolkit=toolkit,
    verbose=True,
)
```

--------------------------------

### Asynchronously Invoke LangChain ChatAnthropic Model in Python

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

This example provides various asynchronous invocation methods for the `ChatAnthropic` model. It includes `ainvoke` for single asynchronous calls, `astream` for asynchronous streaming, and `abatch` for processing multiple requests concurrently.

```python
await model.ainvoke(messages)

# stream (`astream` returns an async iterator, so it is not awaited itself)
async for chunk in model.astream(messages):
    ...

# batch
await model.abatch([messages])
```

--------------------------------

### Initialize SingleStoreVectorStore with embeddings

Source: https://docs.langchain.com/oss/python/integrations/vectorstores/singlestore

Sets up a SingleStoreVectorStore instance by configuring the database connection URL via environment variable and initializing the vector store with embeddings. The connection URL follows the format 'user:password@host:port/database'.

```python
import os

from langchain_singlestore.vectorstores import SingleStoreVectorStore

os.environ["SINGLESTOREDB_URL"] = "root:pass@localhost:3306/db"

vector_store = SingleStoreVectorStore(embeddings=embeddings)
```

--------------------------------

### Create and Prepare Documents for Vector Store

Source: https://docs.langchain.com/oss/python/integrations/vectorstores/qdrant

Demonstrates creating Document objects with page content and metadata, then preparing them with unique IDs for addition to the vector store. Shows typical document structure with source metadata.
```python
from uuid import uuid4

from langchain_core.documents import Document

document_1 = Document(
    page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
    metadata={"source": "tweet"},
)

document_2 = Document(
    page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees Fahrenheit.",
    metadata={"source": "news"},
)

document_3 = Document(
    page_content="Building an exciting new project with LangChain - come check it out!",
    metadata={"source": "tweet"},
)

document_4 = Document(
    page_content="Robbers broke into the city bank and stole $1 million in cash.",
    metadata={"source": "news"},
)

document_5 = Document(
    page_content="Wow! That was an amazing movie. I can't wait to see it again.",
    metadata={"source": "tweet"},
)

document_6 = Document(
    page_content="Is the new iPhone worth the price? Read this review to find out.",
    metadata={"source": "website"},
)

document_7 = Document(
    page_content="The top 10 soccer players in the world right now.",
    metadata={"source": "website"},
)

document_8 = Document(
    page_content="LangGraph is the best framework for building stateful, agentic applications!",
    metadata={"source": "tweet"},
)

document_9 = Document(
    page_content="The stock market is down 500 points today due to fears of a recession.",
    metadata={"source": "news"},
)

document_10 = Document(
    page_content="I have a bad feeling I am going to get deleted :(",
    metadata={"source": "tweet"},
)

documents = [
    document_1,
    document_2,
    document_3,
    document_4,
    document_5,
    document_6,
    document_7,
    document_8,
    document_9,
    document_10,
]
uuids = [str(uuid4()) for _ in range(len(documents))]
```

--------------------------------

### Example Anthropic Model Tool Call Output Structure

Source: https://docs.langchain.com/oss/python/integrations/chat/anthropic

This snippet provides an example of the structured output received from an Anthropic model when it decides to use a tool. It shows a list containing a text message and a `tool_call` object, detailing the tool's name (`computer`), the action (`screenshot`) it intends to perform, and a unique ID. This output needs to be parsed and executed by the application.

```python
[
    {'type': 'text', 'text': "I'll take a screenshot to see what's currently on the screen."},
    {'type': 'tool_call', 'name': 'computer', 'args': {'action': 'screenshot'}, 'id': 'toolu_01RNsqAE7dDZujELtacNeYv9'}
]
```

--------------------------------

### Quickstart LangChain Agent with Automatic LangSmith Tracing in Python

Source: https://docs.langchain.com/oss/python/langchain/observability

This Python example demonstrates how to create a simple LangChain agent with tools and invoke it. When LangSmith tracing is enabled via environment variables, all agent steps, tool calls, and model interactions are automatically traced without additional code.

```python
from langchain.agents import create_agent


def send_email(to: str, subject: str, body: str):
    """Send an email to a recipient."""
    # ... email sending logic
    return f"Email sent to {to}"


def search_web(query: str):
    """Search the web for information."""
    # ... web search logic
    return f"Search results for: {query}"


agent = create_agent(
    model="gpt-4.1",
    tools=[send_email, search_web],
    system_prompt="You are a helpful assistant that can send emails and search the web.",
)

# Run the agent - all steps will be traced automatically
response = agent.invoke({
    "messages": [{"role": "user", "content": "Search for the latest AI news and email a summary to john@example.com"}]
})
```

--------------------------------

### Integrate HuggingFace Chat Model with LangChain

Source: https://docs.langchain.com/oss/python/langchain/multi-agent/subagents-personal-assistant

This snippet explains how to install the LangChain HuggingFace integration, set the HuggingFace API token, and initialize a HuggingFace chat model.
It includes examples for `init_chat_model` with specific parameters and the start of a direct `ChatHuggingFace` class instantiation using `HuggingFaceEndpoint`.

```shell
pip install -U "langchain[huggingface]"
```

```python
import os

from langchain.chat_models import init_chat_model

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."

model = init_chat_model(
    "microsoft/Phi-3-mini-4k-instruct",
    model_provider="huggingface",
    temperature=0.7,
    max_tokens=1024,
)
```

```python
import os

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."

llm = HuggingFaceEndpoint(
```

--------------------------------

### Initialize DeepAgent with Daytona Sandbox

Source: https://docs.langchain.com/oss/python/deepagents/sandboxes

Demonstrates how to set up a `deepagents` agent using `DaytonaSandbox` as the backend. This involves creating a Daytona sandbox instance and then invoking the agent to perform a task, followed by stopping the sandbox.

```python
from daytona import Daytona
from langchain_anthropic import ChatAnthropic
from deepagents import create_deep_agent
from langchain_daytona import DaytonaSandbox

sandbox = Daytona().create()
backend = DaytonaSandbox(sandbox=sandbox)

agent = create_deep_agent(
    model=ChatAnthropic(model="claude-sonnet-4-20250514"),
    system_prompt="You are a Python coding assistant with sandbox access.",
    backend=backend,
)

result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Create a small Python package and run pytest",
            }
        ]
    }
)

sandbox.stop()
```

--------------------------------

### Initialize DeepAgent with Modal Sandbox

Source: https://docs.langchain.com/oss/python/deepagents/sandboxes

Demonstrates how to set up a `deepagents` agent using `ModalSandbox` as the backend. This involves looking up a Modal app, creating a sandbox instance, and then invoking the agent to perform a task, followed by sandbox termination.

```python
import modal
from langchain_anthropic import ChatAnthropic
from deepagents import create_deep_agent
from langchain_modal import ModalSandbox

app = modal.App.lookup("your-app")
modal_sandbox = modal.Sandbox.create(app=app)
backend = ModalSandbox(sandbox=modal_sandbox)

agent = create_deep_agent(
    model=ChatAnthropic(model="claude-sonnet-4-20250514"),
    system_prompt="You are a Python coding assistant with sandbox access.",
    backend=backend,
)

result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Create a small Python package and run pytest",
            }
        ]
    }
)

modal_sandbox.terminate()
```
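The Daytona and Modal examples above tear down their sandboxes with a trailing `sandbox.stop()` / `modal_sandbox.terminate()` call, which never runs if `agent.invoke` raises. A minimal sketch of a `try`/`finally` variant that guarantees teardown; `FakeSandbox` is a hypothetical stand-in (not part of either SDK) used only to keep the example self-contained:

```python
# Sketch: guarantee sandbox teardown even when the agent call fails.
# `FakeSandbox` is a hypothetical stand-in for a Daytona/Modal sandbox;
# in real code, create the sandbox with the provider's SDK as shown above.

class FakeSandbox:
    def __init__(self):
        self.terminated = False

    def terminate(self):
        self.terminated = True


def run_with_cleanup(sandbox, task):
    """Run `task()`, always terminating the sandbox afterwards."""
    try:
        return task()
    finally:
        sandbox.terminate()  # runs on success *and* on error


sandbox = FakeSandbox()
result = run_with_cleanup(sandbox, lambda: "ok")
print(result, sandbox.terminated)  # → ok True
```

The same pattern applies unchanged with a real backend: wrap `agent.invoke(...)` in the `task` callable and pass the provider's sandbox object.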