Deep Agents UI
https://github.com/langchain-ai/deep-agents-ui
A user interface designed to work with LangChain's 'deep-agents' package, enabling users to interact
...
Tokens: 4,177 · Snippets: 45 · Trust Score: 9.2 · Updated: 1 week ago
Context Summary (auto-generated)
# Deep Agents UI

Deep Agents UI is a React/Next.js web application that provides a chat interface for interacting with LangGraph-powered AI agents. It connects to deployed LangGraph services and enables real-time streaming conversations with agents that can execute tasks, delegate to sub-agents, access files, and maintain conversation threads with full state management.

The application features a resizable panel layout with thread history management, task tracking with todo lists, file state viewing, tool call visualization, and interrupt handling for human-in-the-loop workflows. It integrates with LangSmith for authentication and uses the LangGraph SDK for all agent communication, supporting both local development (via graph names) and deployed production environments (via UUIDs).

## Configuration Management

The `StandaloneConfig` interface and configuration utilities manage deployment settings stored in browser localStorage, handling the deployment URL, assistant ID, and optional LangSmith API key.

```typescript
import { getConfig, saveConfig, StandaloneConfig } from "@/lib/config";

// Configuration interface
interface StandaloneConfig {
  deploymentUrl: string;    // e.g., "http://127.0.0.1:2024" or "https://my-deployment.langchain.com"
  assistantId: string;      // UUID for deployed graphs or graph name for local dev (e.g., "research")
  langsmithApiKey?: string; // Optional: "lsv2_pt_..." format
}

// Retrieve saved configuration from localStorage
const config = getConfig();
if (config) {
  console.log("Deployment URL:", config.deploymentUrl);
  console.log("Assistant ID:", config.assistantId);
}

// Save new configuration
saveConfig({
  deploymentUrl: "http://127.0.0.1:2024",
  assistantId: "research",
  langsmithApiKey: "lsv2_pt_abc123..."
});
```

## ClientProvider - LangGraph SDK Client Setup

The `ClientProvider` creates and provides a configured LangGraph SDK client throughout the application, handling API authentication and base URL configuration.
```typescript
import { ClientProvider, useClient } from "@/providers/ClientProvider";
import { Client } from "@langchain/langgraph-sdk";

// Wrap your app with ClientProvider
function App() {
  return (
    <ClientProvider
      deploymentUrl="http://127.0.0.1:2024"
      apiKey="lsv2_pt_..."
    >
      <MyComponent />
    </ClientProvider>
  );
}

// Access the client in child components
function MyComponent() {
  const client = useClient();

  // Use the client for LangGraph operations
  const fetchAssistants = async () => {
    const assistants = await client.assistants.search({
      graphId: "research",
      limit: 100
    });
    return assistants;
  };

  // Get a specific assistant by UUID
  const getAssistant = async (id: string) => {
    const assistant = await client.assistants.get(id);
    return assistant;
  };
}
```

## ChatProvider and useChat Hook

The `ChatProvider` and `useChat` hook manage chat state, message streaming, and agent interactions using the LangGraph SDK's streaming capabilities.

```typescript
import { ChatProvider, useChatContext } from "@/providers/ChatProvider";
import { useChat, StateType } from "@/app/hooks/useChat";
import { Assistant } from "@langchain/langgraph-sdk";

// StateType defines the agent's state structure
interface StateType {
  messages: Message[];
  todos: TodoItem[];
  files: Record<string, string>;
  email?: { id?: string; subject?: string; page_content?: string };
  ui?: any;
}

// Wrap chat interface with provider
function ChatWrapper({ assistant }: { assistant: Assistant }) {
  return (
    <ChatProvider
      activeAssistant={assistant}
      onHistoryRevalidate={() => console.log("Thread list updated")}
    >
      <ChatInterface />
    </ChatProvider>
  );
}

// Use chat context in components
function ChatInterface() {
  const {
    messages,        // Current conversation messages
    todos,           // Agent's task list
    files,           // Files in agent state
    isLoading,       // True while streaming
    isThreadLoading, // True while loading thread history
    interrupt,       // Current interrupt data (for human-in-the-loop)
    sendMessage,     // Send a new user message
    stopStream,      // Stop the current stream
    continueStream,  // Continue after interrupt
    resumeInterrupt, // Resume with a specific value
    markCurrentThreadAsResolved,
    setFiles,        // Update files in state
  } = useChatContext();

  // Send a message to the agent
  const handleSend = () => {
    sendMessage("Analyze the sales data and create a report");
  };

  // Stop the running agent
  const handleStop = () => {
    stopStream();
  };

  // Resume from an interrupt with approval
  const handleApprove = () => {
    resumeInterrupt({ approved: true, feedback: "Looks good!" });
  };

  // Continue streaming after tool execution
  const handleContinue = () => {
    continueStream(/* hasTaskToolCall */ false);
  };
}
```

## useThreads Hook - Thread History Management

The `useThreads` hook provides paginated thread history with infinite scroll support, filtering by status, and automatic thread grouping.

```typescript
import { useThreads, ThreadItem } from "@/app/hooks/useThreads";

// ThreadItem structure
interface ThreadItem {
  id: string;
  updatedAt: Date;
  status: "idle" | "busy" | "interrupted" | "error";
  title: string;       // First human message (truncated)
  description: string; // First AI response (truncated)
  assistantId?: string;
}

function ThreadListComponent() {
  // Fetch threads with optional status filter
  const { data, error, isLoading, size, setSize, mutate } = useThreads({
    status: "interrupted", // Filter: "idle" | "busy" | "interrupted" | "error" | undefined
    limit: 20              // Page size
  });

  // Flatten paginated data
  const threads = data?.flat() ?? [];

  // Load more pages
  const loadMore = () => setSize(size + 1);

  // Force refresh thread list
  const refresh = () => mutate();

  // Group threads by time
  const groupedByDate = {
    today: threads.filter(t => isToday(t.updatedAt)),
    yesterday: threads.filter(t => isYesterday(t.updatedAt)),
    older: threads.filter(t => isOlder(t.updatedAt))
  };

  return (
    <div>
      {threads.map(thread => (
        <div key={thread.id} onClick={() => selectThread(thread.id)}>
          <h3>{thread.title}</h3>
          <p>{thread.description}</p>
          <span className={`status-${thread.status}`}>{thread.status}</span>
        </div>
      ))}
      <button onClick={loadMore}>Load More</button>
    </div>
  );
}
```

## ChatMessage Component - Message Rendering

The `ChatMessage` component renders conversation messages with support for tool calls, sub-agent delegation, markdown content, and interrupt handling.

```typescript
import { ChatMessage } from "@/app/components/ChatMessage";
import { ToolCall, ActionRequest, ReviewConfig } from "@/app/types/types";

// ToolCall structure
interface ToolCall {
  id: string;
  name: string;
  args: Record<string, unknown>;
  result?: string;
  status: "pending" | "completed" | "error" | "interrupted";
}

// ActionRequest for tool approval interrupts
interface ActionRequest {
  name: string;
  args: Record<string, unknown>;
  description?: string;
}

// Usage in message list
function MessageList({ messages, processedToolCalls }) {
  return (
    <>
      {messages.map((message, index) => (
        <ChatMessage
          key={message.id}
          message={message}
          toolCalls={processedToolCalls[message.id] || []}
          isLoading={isLoading}
          actionRequestsMap={index === messages.length - 1 ? actionRequests : undefined}
          reviewConfigsMap={index === messages.length - 1 ? reviewConfigs : undefined}
          ui={customUIComponents}
          stream={streamInstance}
          onResumeInterrupt={(value) => resumeInterrupt(value)}
          graphId="research"
        />
      ))}
    </>
  );
}
```

## ConfigDialog Component - Settings UI

The `ConfigDialog` component provides a modal dialog for configuring the LangGraph deployment connection settings.
```typescript
import { ConfigDialog } from "@/app/components/ConfigDialog";
import { StandaloneConfig } from "@/lib/config";

function SettingsPage() {
  const [isOpen, setIsOpen] = useState(false);
  const [config, setConfig] = useState<StandaloneConfig | null>(null);

  const handleSave = (newConfig: StandaloneConfig) => {
    setConfig(newConfig);
    // Config is saved to localStorage automatically
    console.log("Saved config:", newConfig);
  };

  return (
    <>
      <button onClick={() => setIsOpen(true)}>Settings</button>
      <ConfigDialog
        open={isOpen}
        onOpenChange={setIsOpen}
        onSave={handleSave}
        initialConfig={config || undefined}
      />
    </>
  );
}
```

## ThreadList Component - Thread Navigation

The `ThreadList` component displays conversation threads grouped by status and time, with filtering capabilities and interrupt badges.

```typescript
import { ThreadList } from "@/app/components/ThreadList";

function Sidebar() {
  const [interruptCount, setInterruptCount] = useState(0);

  return (
    <ThreadList
      onThreadSelect={(threadId) => {
        // Navigate to selected thread
        setQueryParam("threadId", threadId);
      }}
      onMutateReady={(mutateFn) => {
        // Store mutate function for manual refresh
        threadMutateRef.current = mutateFn;
      }}
      onClose={() => {
        // Hide sidebar
        setSidebarOpen(false);
      }}
      onInterruptCountChange={(count) => {
        // Update badge showing threads needing attention
        setInterruptCount(count);
      }}
    />
  );
}
```

## Utility Functions - Message Processing

Utility functions for extracting and formatting message content from various LangGraph message formats.

```typescript
import {
  extractStringFromMessageContent,
  extractSubAgentContent,
  formatMessageForLLM,
  formatConversationForLLM,
  isPreparingToCallTaskTool
} from "@/app/utils/utils";
import { Message } from "@langchain/langgraph-sdk";

// Extract text content from a message (handles string and array formats)
const message: Message = {
  id: "msg_123",
  type: "ai",
  content: [
    { type: "text", text: "Here is my analysis:" },
    { type: "text", text: "The data shows..." }
  ]
};
const text = extractStringFromMessageContent(message);
// Output: "Here is my analysis:The data shows..."

// Extract content from sub-agent data (tries description, prompt, result fields)
const subAgentInput = { description: "Analyze sales data", prompt: "...", other: "data" };
const content = extractSubAgentContent(subAgentInput);
// Output: "Analyze sales data"

// Format a single message for display
const formatted = formatMessageForLLM(message);
// Output: "Assistant (msg_123): Here is my analysis:The data shows..."

// Format entire conversation
const conversation = formatConversationForLLM(messages);
// Output: "Human (id): Hello\n\n---\n\nAssistant (id): Hi there!"

// Check if last message contains a task tool call
const hasTaskCall = isPreparingToCallTaskTool(messages);
// Output: true if last AI message has tool_calls with name "task"
```

## Type Definitions

Core TypeScript interfaces used throughout the application for type-safe agent interactions.

```typescript
// Tool call representation
interface ToolCall {
  id: string;
  name: string;
  args: Record<string, unknown>;
  result?: string;
  status: "pending" | "completed" | "error" | "interrupted";
}

// Sub-agent delegation
interface SubAgent {
  id: string;
  name: string;
  subAgentName: string;
  input: Record<string, unknown>;
  output?: Record<string, unknown>;
  status: "pending" | "active" | "completed" | "error";
}

// File in agent state
interface FileItem {
  path: string;
  content: string;
}

// Task in todo list
interface TodoItem {
  id: string;
  content: string;
  status: "pending" | "in_progress" | "completed";
  updatedAt?: Date;
}

// Thread metadata
interface Thread {
  id: string;
  title: string;
  createdAt: Date;
  updatedAt: Date;
}

// Interrupt data for human-in-the-loop
interface InterruptData {
  value: any;
  ns?: string[];
  scope?: string;
}

// Tool approval request
interface ActionRequest {
  name: string;
  args: Record<string, unknown>;
  description?: string;
}

// Review configuration for approvals
interface ReviewConfig {
  actionName: string;
  allowedDecisions?: string[];
}

// Tool approval interrupt structure
interface ToolApprovalInterruptData {
  action_requests: ActionRequest[];
  review_configs?: ReviewConfig[];
}
```

## Summary

Deep Agents UI is designed for developers and teams building agentic AI applications with LangGraph. The primary use cases include interactive agent development and debugging through the chat interface, monitoring agent execution with real-time task tracking and file state visibility, managing conversation threads with status-based filtering, and implementing human-in-the-loop workflows through interrupt handling and tool approval mechanisms.

Integration with LangGraph deployments follows a straightforward pattern: configure the deployment URL and assistant ID (either a UUID for cloud deployments or a graph name for local development), wrap your application with `ClientProvider`, and use the provided hooks (`useChat`, `useThreads`) and components (`ChatInterface`, `ThreadList`) to build your interface. The application supports both local development with `langgraph dev` and production deployments, with optional LangSmith authentication for secured environments.
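The UUID-versus-graph-name distinction for `assistantId` mentioned above can be checked programmatically. A minimal sketch: `isDeployedAssistantId` is a hypothetical helper written for illustration (it is not part of `deep-agents-ui` or the LangGraph SDK), and it simply tests the standard 8-4-4-4-12 hexadecimal UUID layout.

```typescript
// Hypothetical helper (not part of deep-agents-ui): distinguish a deployed
// assistant ID (a UUID) from a local graph name such as "research".
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isDeployedAssistantId(assistantId: string): boolean {
  // A UUID indicates a cloud deployment; anything else is treated
  // as a graph name for local development with `langgraph dev`.
  return UUID_RE.test(assistantId);
}

isDeployedAssistantId("123e4567-e89b-12d3-a456-426614174000"); // → true
isDeployedAssistantId("research");                             // → false
```

A check like this could, for example, drive validation feedback in a settings form such as `ConfigDialog` before the value is passed to the SDK client.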