Source: https://github.com/langchain-ai/deepagents
# Deep Agents

Deep Agents is a batteries-included agent harness built on LangGraph that provides a fully functional AI agent out of the box. Instead of wiring up prompts, tools, and context management yourself, you get a working agent immediately with built-in planning (`write_todos`), filesystem operations (`read_file`, `write_file`, `edit_file`, `ls`, `glob`, `grep`), shell execution (`execute`), and sub-agent delegation (`task`). The harness includes smart defaults with prompts that teach the model how to use tools effectively, plus automatic context management with summarization for long conversations.

The project consists of two main components: the **deepagents SDK** (a Python library for building custom agents) and **deepagents-cli** (a pre-built terminal coding agent similar to Claude Code or Cursor). The SDK works with any LLM that supports tool calling, including Anthropic Claude, OpenAI GPT, Google Gemini, and open models. The CLI adds an interactive TUI, web search, remote sandboxes, persistent memory, custom skills, and human-in-the-loop approval workflows.

## create_deep_agent - Create a configured Deep Agent

The main entry point for constructing a fully configured Deep Agent with planning, filesystem, subagent, and summarization middleware. It returns a compiled LangGraph graph that supports streaming, checkpointers, and all other LangGraph features.
```python
from deepagents import create_deep_agent
from langchain.chat_models import init_chat_model

# Basic usage with default Anthropic Claude model
agent = create_deep_agent()
result = agent.invoke({"messages": [{"role": "user", "content": "Research LangGraph and write a summary"}]})

# Using a specific model provider
agent = create_deep_agent(
    model="openai:gpt-4o",
    system_prompt="You are a research assistant specializing in AI topics.",
)

# With custom tools and model
from langchain_core.tools import tool

@tool
def search_database(query: str) -> str:
    """Search the internal database for information."""
    return f"Results for: {query}"

model = init_chat_model("anthropic:claude-sonnet-4-6")
agent = create_deep_agent(
    model=model,
    tools=[search_database],
    system_prompt="You are a data analyst with database access.",
)

# Invoke the agent
result = agent.invoke({
    "messages": [{"role": "user", "content": "Find all users created in the last week"}]
})
print(result["messages"][-1].content)

# Stream responses
for chunk in agent.stream({"messages": [{"role": "user", "content": "Explain quantum computing"}]}):
    print(chunk, end="", flush=True)
```

## SubAgent - Define synchronous subagents for task delegation

SubAgents are declarative specs that define specialized agents for handling isolated, multi-step tasks. They inherit the parent agent's tools by default and receive their own middleware stack with planning, filesystem, and summarization capabilities.
```python
from deepagents import create_deep_agent, SubAgent
from langchain_core.tools import tool

@tool
def web_search(query: str) -> str:
    """Search the web for information."""
    return f"Search results for: {query}"

@tool
def analyze_sentiment(text: str) -> str:
    """Analyze sentiment of text."""
    return "positive" if "good" in text.lower() else "neutral"

# Define specialized subagents
researcher: SubAgent = {
    "name": "researcher",
    "description": "Research agent for gathering information from the web and analyzing topics in depth.",
    "system_prompt": "You are a research specialist. Gather comprehensive information and synthesize findings.",
    "tools": [web_search],
}

analyst: SubAgent = {
    "name": "sentiment-analyst",
    "description": "Analyzes text sentiment and emotional tone for content evaluation.",
    "system_prompt": "You analyze text for sentiment. Return detailed sentiment analysis.",
    "tools": [analyze_sentiment],
}

# Create agent with subagents
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    subagents=[researcher, analyst],
    system_prompt="You orchestrate research and analysis tasks by delegating to specialized subagents.",
)

# The agent can now use the `task` tool to delegate work
result = agent.invoke({
    "messages": [{"role": "user", "content": "Research AI safety and analyze the sentiment of recent publications"}]
})
```

## AsyncSubAgent - Define remote asynchronous subagents

AsyncSubAgents connect to Agent Protocol-compliant servers (LangGraph Platform or self-hosted) and run as background tasks. The main agent can monitor progress and send updates while the subagent works.
```python
from deepagents import create_deep_agent, AsyncSubAgent

# Define async subagent pointing to a remote deployment
remote_researcher: AsyncSubAgent = {
    "name": "deep-researcher",
    "description": "Performs intensive research tasks in the background on a dedicated server.",
    "graph_id": "research-agent",  # Graph name on the remote server
    "url": "https://my-langgraph-deployment.langchain.app",  # Optional: defaults to LangGraph SDK default
}

# Create agent with async subagent
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    subagents=[remote_researcher],
)

# The agent gains tools for managing async tasks:
# - launch_task: Start a background task
# - check_task: Check status of a running task
# - update_task: Send updates to a running task
# - cancel_task: Cancel a running task
# - list_tasks: List all active tasks

result = agent.invoke({
    "messages": [{"role": "user", "content": "Start a deep research task on quantum computing trends"}]
})
```

## FilesystemMiddleware - Add filesystem tools to agents

Provides `ls`, `read_file`, `write_file`, `edit_file`, `glob`, and `grep` tools. If the backend implements `SandboxBackendProtocol`, an `execute` tool is also added for shell commands.
```python
from deepagents.middleware.filesystem import FilesystemMiddleware
from deepagents.backends import StateBackend, CompositeBackend
from deepagents.backends.store import StoreBackend
from langchain.agents import create_agent

# Using StateBackend (ephemeral, in-memory storage)
middleware = FilesystemMiddleware(backend=StateBackend())

# Using CompositeBackend for hybrid storage
backend = CompositeBackend(
    default=StateBackend(),
    routes={"/memories/": StoreBackend()},  # Persistent storage for memories
)
middleware = FilesystemMiddleware(backend=backend)

# Create agent with filesystem middleware
agent = create_agent(
    model="anthropic:claude-sonnet-4-6",
    middleware=[middleware],
)

# The agent can now use filesystem tools:
# - ls(path="/workspace") - List directory contents
# - read_file(file_path="/workspace/main.py", offset=0, limit=100)
# - write_file(file_path="/workspace/output.txt", content="Hello")
# - edit_file(file_path="/workspace/main.py", old_string="foo", new_string="bar")
# - glob(pattern="**/*.py", path="/workspace")
# - grep(pattern="TODO", path="/workspace", glob="*.py")

result = agent.invoke({
    "messages": [{"role": "user", "content": "Read the main.py file and list all TODO comments"}],
    "files": {"/workspace/main.py": {"content": "# TODO: implement feature\ndef main():\n    pass", "encoding": "utf-8"}}
})
```

## FilesystemPermission - Control filesystem access

Define access rules for filesystem operations. Rules are evaluated in declaration order with first-match-wins semantics.
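The first-match-wins evaluation described above can be sketched in a few lines of plain Python. This is a hypothetical evaluator, not the library's matcher: it uses `fnmatch`, which treats `**` like `*`, and the default-deny fallback when no rule matches is an assumption.

```python
from fnmatch import fnmatch

def is_allowed(rules: list[dict], operation: str, path: str) -> bool:
    """Evaluate rules in declaration order; the first rule whose operation
    and path pattern both match decides the outcome (first-match-wins)."""
    for rule in rules:
        if operation in rule["operations"] and any(fnmatch(path, p) for p in rule["paths"]):
            return rule["mode"] == "allow"
    return False  # assumed default when no rule matches

# Rule dicts mirroring the FilesystemPermission fields below
rules = [
    {"operations": ["read"], "paths": ["/workspace/**"], "mode": "allow"},
    {"operations": ["write"], "paths": ["/workspace/output/**", "/tmp/**"], "mode": "allow"},
    {"operations": ["write"], "paths": ["/**"], "mode": "deny"},
]

print(is_allowed(rules, "read", "/workspace/src/app.py"))    # True
print(is_allowed(rules, "write", "/workspace/output/a.txt")) # True
print(is_allowed(rules, "write", "/workspace/src/app.py"))   # False (catch-all deny)
```

Because the catch-all deny comes last, reordering it before the write-allow rule would block all writes; declaration order is part of the policy.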
```python
from deepagents import create_deep_agent
from deepagents.middleware.permissions import FilesystemPermission

# Define permission rules
permissions = [
    # Allow reads anywhere under /workspace
    FilesystemPermission(
        operations=["read"],
        paths=["/workspace/**"],
        mode="allow",
    ),
    # Allow writes only to specific directories
    FilesystemPermission(
        operations=["write"],
        paths=["/workspace/output/**", "/tmp/**"],
        mode="allow",
    ),
    # Deny all other writes (catch-all)
    FilesystemPermission(
        operations=["write"],
        paths=["/**"],
        mode="deny",
    ),
]

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    permissions=permissions,
)

# Agent can read from /workspace but can only write to /workspace/output or /tmp
result = agent.invoke({
    "messages": [{"role": "user", "content": "Read config.yaml and save a backup to output/"}]
})
```

## MemoryMiddleware - Add persistent memory from AGENTS.md files

Loads memory files at agent startup and injects their contents into the system prompt. This enables persistent context across conversations.

```python
from deepagents import create_deep_agent

# Memory files are loaded from the backend and added to the system prompt
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    memory=["/workspace/AGENTS.md", "/memories/project-context.md"],
)

# When invoking, provide the memory files in the state
result = agent.invoke({
    "messages": [{"role": "user", "content": "What are the project conventions?"}],
    "files": {
        "/workspace/AGENTS.md": {
            "content": "# Project Guidelines\n\n- Use TypeScript for all new code\n- Follow the existing naming conventions\n- Write tests for all new features",
            "encoding": "utf-8"
        }
    }
})
```

## Human-in-the-Loop with interrupt_on

Configure tool calls that require human approval before execution. Requires a checkpointer for state persistence.
```python
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver

# Create agent with human-in-the-loop for write operations
agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    interrupt_on={
        "write_file": True,           # Interrupt before any file write
        "edit_file": True,            # Interrupt before any file edit
        "execute": {"always": True},  # Always interrupt before shell commands
    },
    checkpointer=MemorySaver(),
)

# First invocation - will pause at tool calls requiring approval
config = {"configurable": {"thread_id": "my-session"}}
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a new Python script that prints hello world"}]},
    config=config,
)

# Check if interrupted
if result.get("__interrupt__"):
    print("Agent wants to execute:", result["__interrupt__"])
    # To approve and continue:
    # result = agent.invoke(None, config=config)
```

## BackendProtocol - Implement custom storage backends

Create custom backends for file storage by implementing the BackendProtocol interface. For execution support, implement SandboxBackendProtocol.
```python
from deepagents.backends.protocol import (
    BackendProtocol,
    SandboxBackendProtocol,
    ReadResult, WriteResult, EditResult,
    LsResult, GlobResult, GrepResult,
    ExecuteResponse, FileData, FileInfo,
)

class MyCloudBackend(BackendProtocol):
    """Custom backend storing files in cloud storage."""

    def __init__(self, bucket: str):
        self.bucket = bucket

    def ls(self, path: str) -> LsResult:
        # List files from cloud storage
        files = self._list_cloud_files(path)
        return LsResult(entries=[{"path": f} for f in files])

    def read(self, file_path: str, offset: int = 0, limit: int = 2000) -> ReadResult:
        content = self._download_from_cloud(file_path)
        if content is None:
            return ReadResult(error="file_not_found")
        return ReadResult(file_data={"content": content, "encoding": "utf-8"})

    def write(self, file_path: str, content: str) -> WriteResult:
        self._upload_to_cloud(file_path, content)
        return WriteResult(path=file_path)

    def edit(self, file_path: str, old_string: str, new_string: str, replace_all: bool = False) -> EditResult:
        content = self._download_from_cloud(file_path)
        if old_string not in content:
            return EditResult(error=f"String not found: {old_string}")
        if replace_all:
            new_content = content.replace(old_string, new_string)
            count = content.count(old_string)
        else:
            new_content = content.replace(old_string, new_string, 1)
            count = 1
        self._upload_to_cloud(file_path, new_content)
        return EditResult(path=file_path, occurrences=count)

    def glob(self, pattern: str, path: str = "/") -> GlobResult:
        matches = self._glob_cloud_files(pattern, path)
        return GlobResult(matches=[{"path": m} for m in matches])

    def grep(self, pattern: str, path: str = None, glob: str = None) -> GrepResult:
        matches = self._search_cloud_files(pattern, path, glob)
        return GrepResult(matches=matches)

# Use the custom backend
from deepagents import create_deep_agent

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    backend=MyCloudBackend(bucket="my-bucket"),
)
```

## CLI Installation and Usage

The Deep Agents CLI
provides a pre-built terminal coding agent with an interactive TUI.

```bash
# Quick install via script
curl -LsSf https://raw.githubusercontent.com/langchain-ai/deepagents/main/libs/cli/scripts/install.sh | bash

# Install with additional model providers
DEEPAGENTS_EXTRAS="nvidia,ollama" curl -LsSf https://raw.githubusercontent.com/langchain-ai/deepagents/main/libs/cli/scripts/install.sh | bash

# Or install directly with uv
uv tool install 'deepagents-cli[nvidia,ollama]'

# Run the interactive CLI
deepagents

# Run in headless mode for scripting
deepagents --headless "Create a Python script that fetches weather data"

# Resume a previous conversation
deepagents --resume

# Specify a different model
deepagents --model openai:gpt-4o
```

## Skills - Extend the CLI with custom slash commands

Skills are custom extensions loaded from the filesystem that add new capabilities to the agent.

```python
# File: /skills/my-skill/SKILL.md
"""
name: analyze-code
description: Analyze code for potential issues and improvements

When the user invokes /analyze-code, perform a comprehensive code review:
1. Read the specified file or directory
2. Check for common issues (unused imports, code smells, security concerns)
3. Suggest improvements with specific code examples
4. Output a structured report

Example usage: /analyze-code src/main.py
"""

# Use the skill in the CLI
# > /analyze-code src/main.py
```

```python
# Programmatic skill configuration
from deepagents import create_deep_agent

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    skills=["/skills/user/", "/skills/project/"],  # Skill source paths
)
```

## Research Agent Example with Custom Subagents

A complete example showing a multi-step research agent with web search and strategic thinking tools.
```python
from datetime import datetime

from langchain.chat_models import init_chat_model
from langchain_core.tools import tool

from deepagents import create_deep_agent

@tool
def tavily_search(query: str) -> str:
    """Search the web using Tavily API for current information."""
    # In production, use actual Tavily client
    return f"Search results for: {query}"

@tool
def think(thought: str) -> str:
    """Record a strategic thought or reflection about the research process."""
    return f"Recorded thought: {thought}"

current_date = datetime.now().strftime("%Y-%m-%d")

# Research subagent for deep topic exploration
research_sub_agent = {
    "name": "research-agent",
    "description": "Delegate research to this agent. Give it one topic at a time for thorough investigation.",
    "system_prompt": f"""You are a research specialist. Today's date is {current_date}.

Your job is to thoroughly research the given topic using web searches and strategic thinking.
- Use tavily_search to gather current information
- Use think to record insights and plan next steps
- Synthesize findings into a clear, comprehensive summary
- Include citations and sources where available""",
    "tools": [tavily_search, think],
}

# Main orchestrator agent
ORCHESTRATOR_PROMPT = """You are a research orchestrator managing complex research projects.

When given a research task:
1. Break it down into specific research questions
2. Delegate each question to the research-agent subagent
3. Launch multiple research agents in parallel when questions are independent
4. Synthesize all findings into a comprehensive final report

Use the task tool to delegate research work. Maximum 3 concurrent research tasks."""

model = init_chat_model("anthropic:claude-sonnet-4-6", temperature=0.0)

agent = create_deep_agent(
    model=model,
    tools=[tavily_search, think],
    system_prompt=ORCHESTRATOR_PROMPT,
    subagents=[research_sub_agent],
)

# Run a research task
result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Research the current state of large language models, focusing on recent advances in reasoning capabilities and multimodal understanding."
    }]
})
print(result["messages"][-1].content)
```

## Summary

Deep Agents excels at building autonomous coding assistants, research agents, and multi-step task automation systems. The primary use cases include interactive coding assistance (via the CLI), automated code review and refactoring, research and information synthesis with web search, document processing and analysis, and orchestrating complex multi-agent workflows. The framework is particularly well-suited for scenarios requiring file system operations, shell command execution, and hierarchical task delegation.

Integration patterns focus on the LangGraph ecosystem, allowing Deep Agents to work seamlessly with LangSmith for observability, checkpointers for state persistence, and LangGraph Platform for deployment. Custom integrations are supported through the BackendProtocol interface for storage, custom tools via standard LangChain tool decorators, and middleware for cross-cutting concerns. The modular architecture allows starting with `create_deep_agent()` for quick prototypes and progressively customizing models, tools, subagents, permissions, and backends as requirements evolve.