Agent Chat UI
https://github.com/langchain-ai/agent-chat-ui
🦜💬 Web app for interacting with any LangGraph agent (PY & TS) via a chat interface.
Tokens: 7,399 · Snippets: 50 · Trust Score: 9.2 · Updated: 3 months ago
# Agent Chat UI

Agent Chat UI is a Next.js application that provides a chat interface for interacting with any LangGraph server. It enables real-time streaming conversations with AI agents, supports multimodal inputs (images and PDFs), and provides human-in-the-loop (HITL) interrupt handling for agent actions that require user approval. The application connects to LangGraph deployments via API and manages conversation threads with full state history.

Core functionality includes streaming message handling, artifact rendering in side panels, thread management with branching, and file uploads. The app supports both local development and production deployments through an API passthrough system or custom authentication. The UI automatically handles tool calls, renders markdown content, and provides controls for regenerating responses and editing messages.

## StreamProvider and useStreamContext

The `StreamProvider` component wraps the application and manages the connection to a LangGraph server, handling authentication, thread state, and message streaming. The `useStreamContext` hook exposes the current stream state, the message list, and methods for submitting new messages.
```tsx
import { StreamProvider, useStreamContext } from "@/providers/Stream";
import { Message } from "@langchain/langgraph-sdk";

// Wrap your app with StreamProvider
function App() {
  return (
    <StreamProvider>
      <ChatInterface />
    </StreamProvider>
  );
}

// Use the stream context in child components
function ChatInterface() {
  const stream = useStreamContext();

  // Access current messages
  const messages = stream.messages;

  // Check loading state
  const isLoading = stream.isLoading;

  // Submit a new message
  const handleSend = (text: string) => {
    const newMessage: Message = {
      id: crypto.randomUUID(),
      type: "human",
      content: [{ type: "text", text }],
    };
    stream.submit(
      { messages: [newMessage] },
      {
        streamMode: ["values"],
        streamSubgraphs: true,
        streamResumable: true,
        optimisticValues: (prev) => ({
          ...prev,
          messages: [...(prev.messages ?? []), newMessage],
        }),
      }
    );
  };

  // Stop an ongoing stream
  const handleStop = () => stream.stop();

  // Switch to a different branch in conversation history
  const switchBranch = (branch: string) => stream.setBranch(branch);

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>
          {msg.type}: {JSON.stringify(msg.content)}
        </div>
      ))}
      <button onClick={() => handleSend("Hello!")}>Send</button>
      {isLoading && <button onClick={handleStop}>Cancel</button>}
    </div>
  );
}
```

## ThreadProvider and useThreads

The `ThreadProvider` manages conversation threads, allowing users to switch between different chat sessions. The `useThreads` hook provides access to thread listing and management functionality.
```tsx
import { ThreadProvider, useThreads } from "@/providers/Thread";
import { useEffect } from "react";

// Wrap your app with ThreadProvider (usually inside StreamProvider)
function App() {
  return (
    <ThreadProvider>
      <ThreadList />
    </ThreadProvider>
  );
}

function ThreadList() {
  const { threads, setThreads, getThreads, threadsLoading, setThreadsLoading } =
    useThreads();

  // Fetch threads on mount
  useEffect(() => {
    setThreadsLoading(true);
    getThreads()
      .then(setThreads)
      .catch(console.error)
      .finally(() => setThreadsLoading(false));
  }, []);

  if (threadsLoading) return <div>Loading threads...</div>;

  return (
    <ul>
      {threads.map((thread) => (
        <li key={thread.thread_id}>
          Thread: {thread.thread_id}
          <br />
          Created: {new Date(thread.created_at).toLocaleDateString()}
        </li>
      ))}
    </ul>
  );
}
```

## createClient

The `createClient` function creates a LangGraph SDK client instance for direct API interactions with the LangGraph server.

```tsx
import { createClient } from "@/providers/client";

// Create a client for LangGraph API interactions
const client = createClient(
  "http://localhost:2024", // API URL
  "lsv2_pt_..."            // Optional API key (required for deployed servers)
);

// Search for threads
const threads = await client.threads.search({
  metadata: { graph_id: "agent" },
  limit: 100,
});

// Get thread state
const state = await client.threads.getState(threadId);

// Create a new run
const run = await client.runs.create(threadId, assistantId, {
  input: { messages: [{ type: "human", content: "Hello" }] },
});
```

## useArtifact Hook

The `useArtifact` hook enables rendering content in a side-panel artifact view. This is useful for displaying generated content, code, or other rich media alongside the chat.
```tsx
import { useArtifact } from "@/components/thread/artifact";

function WriterComponent({
  title,
  content,
  description,
}: {
  title?: string;
  content?: string;
  description?: string;
}) {
  const [Artifact, { open, setOpen, context, setContext }] = useArtifact();

  return (
    <>
      {/* Clickable card to toggle artifact */}
      <div
        onClick={() => setOpen(!open)}
        className="cursor-pointer rounded-lg border p-4"
      >
        <p className="font-medium">{title}</p>
        <p className="text-sm text-gray-500">{description}</p>
      </div>

      {/* Artifact content rendered in side panel */}
      <Artifact title={title}>
        <div className="p-4 whitespace-pre-wrap">{content}</div>
      </Artifact>
    </>
  );
}

// Using artifact context for state sharing
function ArtifactWithContext() {
  const [Artifact, { setContext }] = useArtifact();

  // Set context that will be passed to the next LangGraph run
  const handleUpdate = (data: Record<string, unknown>) => {
    setContext(data);
  };

  return (
    <Artifact title="Editor">
      <button onClick={() => handleUpdate({ modified: true })}>
        Save Changes
      </button>
    </Artifact>
  );
}
```

## useFileUpload Hook

The `useFileUpload` hook manages file uploads for multimodal chat messages, supporting images (JPEG, PNG, GIF, WEBP) and PDFs through drag-and-drop, paste, or a file input.

```tsx
import { useFileUpload, SUPPORTED_FILE_TYPES } from "@/hooks/use-file-upload";

function ChatInput() {
  const {
    contentBlocks,    // Array of uploaded file content blocks
    setContentBlocks, // Setter for content blocks
    handleFileUpload, // Handler for <input type="file">
    dropRef,          // Ref for drag-and-drop zone
    removeBlock,      // Remove a specific block by index
    resetBlocks,      // Clear all uploaded blocks
    dragOver,         // Boolean indicating drag state
    handlePaste,      // Handler for paste events
  } = useFileUpload();

  return (
    <div
      ref={dropRef}
      className={dragOver ? "border-2 border-dashed border-blue-500" : "border"}
    >
      {/* Preview uploaded files */}
      {contentBlocks.map((block, idx) => (
        <div key={idx} className="flex items-center gap-2">
          <span>
            {block.type === "image"
              ? block.metadata?.name
              : block.metadata?.filename}
          </span>
          <button onClick={() => removeBlock(idx)}>Remove</button>
        </div>
      ))}

      {/* File input */}
      <input
        type="file"
        onChange={handleFileUpload}
        multiple
        accept={SUPPORTED_FILE_TYPES.join(",")}
      />

      {/* Text input with paste support */}
      <textarea onPaste={handlePaste} placeholder="Type or paste images..." />
    </div>
  );
}
```

## fileToContentBlock Utility

The `fileToContentBlock` function converts uploaded files to LangChain-compatible content blocks for multimodal messages.

```tsx
import {
  fileToContentBlock,
  isBase64ContentBlock,
} from "@/lib/multimodal-utils";
import { Message } from "@langchain/langgraph-sdk";

// Convert a file to a content block
async function processUpload(file: File) {
  const contentBlock = await fileToContentBlock(file);
  // For images: { type: "image", mimeType: "image/png", data: "base64...", metadata: { name: "photo.png" } }
  // For PDFs:   { type: "file", mimeType: "application/pdf", data: "base64...", metadata: { filename: "doc.pdf" } }
  return contentBlock;
}

// Create a multimodal message with text and images
async function createMultimodalMessage(
  text: string,
  files: File[]
): Promise<Message> {
  const contentBlocks = await Promise.all(files.map(fileToContentBlock));
  return {
    id: crypto.randomUUID(),
    type: "human",
    content: [{ type: "text", text }, ...contentBlocks],
  };
}

// Type guard to check if a content block is a base64 file/image
function renderContent(blocks: unknown[]) {
  return blocks.map((block, idx) => {
    if (isBase64ContentBlock(block)) {
      if (block.type === "image") {
        return (
          <img key={idx} src={`data:${block.mimeType};base64,${block.data}`} />
        );
      }
      return <span key={idx}>PDF: {block.metadata?.filename}</span>;
    }
    return null;
  });
}
```

## Human-in-the-Loop (HITL) Interrupt Handling

The Agent Inbox components handle human-in-the-loop interrupts when the LangGraph agent requires user approval for actions.

```tsx
import { isAgentInboxInterruptSchema } from "@/lib/agent-inbox-interrupt";
import { ThreadView } from "@/components/thread/agent-inbox";
import { HITLRequest, Decision } from "@/components/thread/agent-inbox/types";
import { useStreamContext } from "@/providers/Stream";

// Check if an interrupt matches the HITL schema
function handleInterrupt(interrupt: unknown) {
  if (isAgentInboxInterruptSchema(interrupt)) {
    // Valid HITL interrupt with action_requests and review_configs
    const hitlValue = interrupt.value as HITLRequest;
    console.log("Action requests:", hitlValue.action_requests);
    // [{ name: "send_email", args: { to: "user@example.com", subject: "..." }, description: "..." }]
    console.log("Review configs:", hitlValue.review_configs);
    // [{ action_name: "send_email", allowed_decisions: ["approve", "edit", "reject"] }]
    return true;
  }
  return false;
}

// Submit a decision for an interrupt
function submitDecision(
  stream: ReturnType<typeof useStreamContext>,
  decision: Decision
) {
  // Approve action
  const approveDecision: Decision = { type: "approve" };

  // Reject action with optional message
  const rejectDecision: Decision = {
    type: "reject",
    message: "Not appropriate",
  };

  // Edit action before approval
  const editDecision: Decision = {
    type: "edit",
    edited_action: {
      name: "send_email",
      args: { to: "different@example.com", subject: "Modified subject" },
    },
  };

  // Resume the graph with the decision
  stream.submit({ decision }, { streamMode: ["values"] });
}

// Render HITL interrupt in your component
function InterruptHandler({ interrupt }: { interrupt: unknown }) {
  if (!isAgentInboxInterruptSchema(interrupt)) {
    return <div>Generic interrupt: {JSON.stringify(interrupt)}</div>;
  }
  return <ThreadView interrupt={interrupt} />;
}
```

## Environment Variables Configuration

Configure the application using environment variables to bypass the setup form and connect to LangGraph servers.
```bash
# .env file for local development
NEXT_PUBLIC_API_URL=http://localhost:2024
NEXT_PUBLIC_ASSISTANT_ID=agent

# Production configuration with API passthrough
NEXT_PUBLIC_API_URL=https://my-website.com/api
NEXT_PUBLIC_ASSISTANT_ID=my-agent
LANGGRAPH_API_URL=https://my-agent.default.us.langgraph.app
LANGSMITH_API_KEY=lsv2_pt_your_api_key_here
```

```tsx
// Using environment variables in your app
const apiUrl = process.env.NEXT_PUBLIC_API_URL || "http://localhost:2024";
const assistantId = process.env.NEXT_PUBLIC_ASSISTANT_ID || "agent";

// The StreamProvider automatically uses these values;
// no setup form is shown when both are configured
```

## Hiding Messages and Tool Calls

Control message visibility in the chat UI by prefixing message IDs or using configuration tags on the LangGraph server side.

```python
# Python: Hide messages from streaming
from langchain_anthropic import ChatAnthropic

# Prevent live streaming (message still appears after completion)
model = ChatAnthropic().with_config(
    config={"tags": ["langsmith:nostream"]}
)

# Hide messages permanently (prefix ID before saving to state)
result = model.invoke([messages])
result.id = f"do-not-render-{result.id}"
return {"messages": [result]}
```

```typescript
// TypeScript: Hide messages from streaming
import { ChatAnthropic } from "@langchain/anthropic";

// Prevent live streaming
const model = new ChatAnthropic()
  .withConfig({ tags: ["langsmith:nostream"] });

// Hide messages permanently
const result = await model.invoke([messages]);
result.id = `do-not-render-${result.id}`;
return { messages: [result] };
```

```tsx
// Client-side: The UI automatically filters messages
import { DO_NOT_RENDER_ID_PREFIX } from "@/lib/ensure-tool-responses";
import { useQueryState, parseAsBoolean } from "nuqs";

// Messages with this prefix are hidden
// (DO_NOT_RENDER_ID_PREFIX = "do-not-render-")
const shouldRender = !message.id?.startsWith(DO_NOT_RENDER_ID_PREFIX);

// Toggle tool call visibility in the UI
const [hideToolCalls, setHideToolCalls] = useQueryState(
  "hideToolCalls",
  parseAsBoolean.withDefault(false)
);
```

## Summary

Agent Chat UI is a complete frontend for LangGraph-powered AI agents: real-time chat with streaming responses, multimodal support for images and PDFs, and human-in-the-loop approval workflows. The main use cases are customer-facing chat interfaces for AI agents, internal tools that require human approval of agent actions, and development or testing environments for LangGraph deployments.

Integration with a LangGraph server is straightforward: configure environment variables for your deployment URL and assistant ID, wrap your app with `StreamProvider` and `ThreadProvider`, and use the provided hooks (`useStreamContext`, `useThreads`, `useArtifact`, `useFileUpload`) to build custom chat experiences. For production deployments, the API passthrough pattern or custom authentication can be implemented to secure connections without exposing API keys to clients.
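The passthrough pattern can be illustrated with a small helper: the browser only calls the app's own `/api/*` routes, and server-side code rewrites each path to the upstream LangGraph deployment and attaches the API key, so the key never ships to the client. This is a hedged sketch, not the project's actual implementation: `buildUpstreamRequest` is a hypothetical helper, and the `x-api-key` header name is an assumption.

```typescript
// Illustrative sketch of the API passthrough idea (hypothetical helper,
// not the project's actual implementation). It maps an incoming local
// "/api/*" path to the upstream LangGraph URL and attaches the
// server-held API key so it stays out of the browser.
function buildUpstreamRequest(
  incomingPath: string, // e.g. "/api/threads/search"
  upstreamBase: string, // e.g. process.env.LANGGRAPH_API_URL
  apiKey?: string       // e.g. process.env.LANGSMITH_API_KEY
): { url: string; headers: Record<string, string> } {
  const path = incomingPath.replace(/^\/api/, ""); // strip the local prefix
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (apiKey) {
    headers["x-api-key"] = apiKey; // assumed header name; stays server-side
  }
  return { url: `${upstreamBase}${path}`, headers };
}

// Example: forward a thread-search call to the deployment
const upstream = buildUpstreamRequest(
  "/api/threads/search",
  "https://my-agent.default.us.langgraph.app",
  process.env.LANGSMITH_API_KEY
);
// upstream.url === "https://my-agent.default.us.langgraph.app/threads/search"
```

A server-side route handler would then `fetch(upstream.url, { headers: upstream.headers, ... })` and stream the response back to the browser.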