AI Elements
https://github.com/vercel/ai-elements
Tokens: 82,984 · Snippets: 274 · Trust Score: 10 · Updated: 3 days ago
Context Summary (auto-generated)
# AI Elements

AI Elements is a React component library built on top of shadcn/ui, designed specifically for building AI-native applications. It provides pre-built, customizable components for conversations, messages, code blocks, reasoning displays, tool execution, and more. The library integrates seamlessly with the Vercel AI SDK and follows the composable component pattern established by shadcn/ui.

The project includes a CLI tool for easy installation, a comprehensive set of React components organized into categories (chatbot, code, voice, workflow, utilities), and full TypeScript support. Components are designed to be copied into your project for maximum customization while maintaining consistent styling through Tailwind CSS and CSS Variables mode.

## Installation

### CLI Installation

Install all components or specific ones using the AI Elements CLI.

```bash
# Install all components
npx ai-elements@latest

# Install a specific component
npx ai-elements@latest add message

# Install multiple components
npx ai-elements@latest add conversation prompt-input code-block

# Alternative: use the shadcn CLI directly
npx shadcn@latest add https://elements.ai-sdk.dev/api/registry/all.json

# Install a specific component via shadcn
npx shadcn@latest add https://elements.ai-sdk.dev/api/registry/message.json
```

## Conversation Component

The Conversation component wraps messages and provides automatic scrolling, empty states, and download functionality.
```tsx
"use client";

import { useChat } from "@ai-sdk/react";
import {
  Conversation,
  ConversationContent,
  ConversationDownload,
  ConversationEmptyState,
  ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
  Message,
  MessageContent,
  MessageResponse,
} from "@/components/ai-elements/message";
import { MessageSquare } from "lucide-react";

export default function ChatPage() {
  const { messages, sendMessage, status } = useChat();

  return (
    <div className="flex flex-col h-[600px] border rounded-lg">
      <Conversation>
        <ConversationContent>
          {messages.length === 0 ? (
            <ConversationEmptyState
              icon={<MessageSquare className="size-12" />}
              title="Start a conversation"
              description="Type a message below to begin chatting"
            />
          ) : (
            messages.map((message) => (
              <Message from={message.role} key={message.id}>
                <MessageContent>
                  {message.parts.map((part, i) => {
                    if (part.type === "text") {
                      return (
                        <MessageResponse key={`${message.id}-${i}`}>
                          {part.text}
                        </MessageResponse>
                      );
                    }
                    return null;
                  })}
                </MessageContent>
              </Message>
            ))
          )}
        </ConversationContent>
        <ConversationDownload messages={messages} filename="chat.md" />
        <ConversationScrollButton />
      </Conversation>
    </div>
  );
}
```

## Message Component

The Message component displays user and assistant messages with support for branching, actions, and markdown rendering via Streamdown.
```tsx
"use client";

import {
  Message,
  MessageContent,
  MessageResponse,
  MessageActions,
  MessageAction,
  MessageBranch,
  MessageBranchContent,
  MessageBranchSelector,
  MessageBranchPrevious,
  MessageBranchPage,
  MessageBranchNext,
  MessageToolbar,
} from "@/components/ai-elements/message";
import { CopyIcon, ThumbsUpIcon, ThumbsDownIcon, RefreshCwIcon } from "lucide-react";

// Basic message usage
export function BasicMessage() {
  return (
    <Message from="assistant">
      <MessageContent>
        <MessageResponse>
          {`# Hello World

Here's some **markdown** content with code:

\`\`\`javascript
const greeting = "Hello, AI Elements!";
console.log(greeting);
\`\`\`
`}
        </MessageResponse>
      </MessageContent>
      <MessageToolbar>
        <MessageActions>
          <MessageAction tooltip="Copy" onClick={() => navigator.clipboard.writeText("content")}>
            <CopyIcon className="size-4" />
          </MessageAction>
          <MessageAction tooltip="Like">
            <ThumbsUpIcon className="size-4" />
          </MessageAction>
          <MessageAction tooltip="Dislike">
            <ThumbsDownIcon className="size-4" />
          </MessageAction>
        </MessageActions>
      </MessageToolbar>
    </Message>
  );
}

// Message with branching (multiple versions)
export function BranchedMessage({ versions }: { versions: { id: string; content: string }[] }) {
  return (
    <MessageBranch defaultBranch={0} onBranchChange={(index) => console.log("Branch:", index)}>
      <MessageBranchContent>
        {versions.map((version) => (
          <Message from="assistant" key={version.id}>
            <MessageContent>
              <MessageResponse>{version.content}</MessageResponse>
            </MessageContent>
          </Message>
        ))}
      </MessageBranchContent>
      {versions.length > 1 && (
        <MessageBranchSelector>
          <MessageBranchPrevious />
          <MessageBranchPage />
          <MessageBranchNext />
        </MessageBranchSelector>
      )}
    </MessageBranch>
  );
}
```

## PromptInput Component

The PromptInput component provides a rich text input with file attachments, action menus, model selection, and submit handling.
```tsx
"use client";

import { useState, useCallback } from "react";
import {
  PromptInput,
  PromptInputProvider,
  PromptInputTextarea,
  PromptInputBody,
  PromptInputFooter,
  PromptInputTools,
  PromptInputSubmit,
  PromptInputButton,
  PromptInputActionMenu,
  PromptInputActionMenuTrigger,
  PromptInputActionMenuContent,
  PromptInputActionAddAttachments,
  PromptInputActionAddScreenshot,
  usePromptInputAttachments,
  type PromptInputMessage,
} from "@/components/ai-elements/prompt-input";
import {
  Attachments,
  Attachment,
  AttachmentPreview,
  AttachmentRemove,
} from "@/components/ai-elements/attachments";
import { GlobeIcon } from "lucide-react";

// Display attachments within PromptInput
function AttachmentsDisplay() {
  const attachments = usePromptInputAttachments();
  if (attachments.files.length === 0) return null;

  return (
    <Attachments variant="inline">
      {attachments.files.map((file) => (
        <Attachment
          key={file.id}
          data={file}
          onRemove={() => attachments.remove(file.id)}
        >
          <AttachmentPreview />
          <AttachmentRemove />
        </Attachment>
      ))}
    </Attachments>
  );
}

export function ChatInput() {
  const [status, setStatus] = useState<"ready" | "submitted" | "streaming">("ready");

  const handleSubmit = useCallback(async (message: PromptInputMessage) => {
    if (!message.text && !message.files?.length) return;
    setStatus("submitted");
    console.log("Sending:", message.text, "Files:", message.files?.length ?? 0);
    // Simulate API call
    setTimeout(() => setStatus("streaming"), 200);
    setTimeout(() => setStatus("ready"), 2000);
  }, []);

  const handleStop = useCallback(() => {
    setStatus("ready");
    console.log("Generation stopped");
  }, []);

  return (
    <PromptInputProvider>
      <PromptInput
        onSubmit={handleSubmit}
        accept="image/*"
        multiple
        globalDrop
        maxFiles={5}
        maxFileSize={10 * 1024 * 1024} // 10MB
        onError={(err) => console.error(err.code, err.message)}
      >
        <AttachmentsDisplay />
        <PromptInputBody>
          <PromptInputTextarea placeholder="What would you like to know?" />
        </PromptInputBody>
        <PromptInputFooter>
          <PromptInputTools>
            <PromptInputActionMenu>
              <PromptInputActionMenuTrigger tooltip="Add attachment" />
              <PromptInputActionMenuContent>
                <PromptInputActionAddAttachments label="Upload files" />
                <PromptInputActionAddScreenshot label="Take screenshot" />
              </PromptInputActionMenuContent>
            </PromptInputActionMenu>
            <PromptInputButton tooltip="Web search">
              <GlobeIcon className="size-4" />
              <span>Search</span>
            </PromptInputButton>
          </PromptInputTools>
          <PromptInputSubmit status={status} onStop={handleStop} />
        </PromptInputFooter>
      </PromptInput>
    </PromptInputProvider>
  );
}
```

## CodeBlock Component

The CodeBlock component provides syntax-highlighted code display with Shiki, copy functionality, and language selection.

```tsx
"use client";

import { useState, useCallback } from "react";
import {
  CodeBlock,
  CodeBlockHeader,
  CodeBlockTitle,
  CodeBlockFilename,
  CodeBlockActions,
  CodeBlockCopyButton,
  CodeBlockLanguageSelector,
  CodeBlockLanguageSelectorTrigger,
  CodeBlockLanguageSelectorValue,
  CodeBlockLanguageSelectorContent,
  CodeBlockLanguageSelectorItem,
} from "@/components/ai-elements/code-block";
import { FileIcon } from "lucide-react";
import type { BundledLanguage } from "shiki";

const codeExamples = {
  typescript: {
    code: `interface User {
  id: string;
  name: string;
  email: string;
}

async function fetchUser(id: string): Promise<User> {
  const response = await fetch(\`/api/users/\${id}\`);
  if (!response.ok) throw new Error("User not found");
  return response.json();
}`,
    filename: "user.ts",
  },
  python: {
    code: `from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: str
    name: str
    email: str

async def fetch_user(user_id: str) -> Optional[User]:
    response = await client.get(f"/api/users/{user_id}")
    return User(**response.json()) if response.ok else None`,
    filename: "user.py",
  },
};

export function CodeBlockExample() {
  const [language, setLanguage] = useState<"typescript" | "python">("typescript");
  const { code, filename } = codeExamples[language];

  return (
    <CodeBlock code={code} language={language as BundledLanguage} showLineNumbers>
      <CodeBlockHeader>
        <CodeBlockTitle>
          <FileIcon size={14} />
          <CodeBlockFilename>{filename}</CodeBlockFilename>
        </CodeBlockTitle>
        <CodeBlockActions>
          <CodeBlockLanguageSelector value={language} onValueChange={setLanguage}>
            <CodeBlockLanguageSelectorTrigger>
              <CodeBlockLanguageSelectorValue />
            </CodeBlockLanguageSelectorTrigger>
            <CodeBlockLanguageSelectorContent>
              <CodeBlockLanguageSelectorItem value="typescript">TypeScript</CodeBlockLanguageSelectorItem>
              <CodeBlockLanguageSelectorItem value="python">Python</CodeBlockLanguageSelectorItem>
            </CodeBlockLanguageSelectorContent>
          </CodeBlockLanguageSelector>
          <CodeBlockCopyButton
            onCopy={() => console.log("Copied!")}
            onError={(err) => console.error("Copy failed:", err)}
          />
        </CodeBlockActions>
      </CodeBlockHeader>
    </CodeBlock>
  );
}
```

## Reasoning Component

The Reasoning component displays AI thinking/reasoning content with auto-expand/collapse behavior during streaming.

```tsx
"use client";

import { useState, useEffect } from "react";
import {
  Reasoning,
  ReasoningTrigger,
  ReasoningContent,
} from "@/components/ai-elements/reasoning";

export function ReasoningExample() {
  const [content, setContent] = useState("");
  const [isStreaming, setIsStreaming] = useState(true);

  useEffect(() => {
    // Simulate streaming reasoning content
    const steps = [
      "Let me analyze this problem step by step.\n\n",
      "First, I need to understand the requirements...\n\n",
      "The solution involves three main components:\n",
      "1. Data validation\n",
      "2. Processing logic\n",
      "3. Output formatting",
    ];

    let currentText = "";
    let stepIndex = 0;
    const interval = setInterval(() => {
      if (stepIndex < steps.length) {
        currentText += steps[stepIndex];
        setContent(currentText);
        stepIndex++;
      } else {
        setIsStreaming(false);
        clearInterval(interval);
      }
    }, 500);

    return () => clearInterval(interval);
  }, []);

  return (
    <Reasoning
      isStreaming={isStreaming}
      defaultOpen={true}
      duration={isStreaming ? undefined : 5}
    >
      <ReasoningTrigger
        getThinkingMessage={(streaming, duration) =>
          streaming ? "Thinking..." : `Thought for ${duration} seconds`
        }
      />
      <ReasoningContent>{content}</ReasoningContent>
    </Reasoning>
  );
}

// Controlled reasoning component
export function ControlledReasoning({ reasoning }: { reasoning: { content: string; duration: number } }) {
  const [isOpen, setIsOpen] = useState(false);

  return (
    <Reasoning
      open={isOpen}
      onOpenChange={setIsOpen}
      duration={reasoning.duration}
    >
      <ReasoningTrigger />
      <ReasoningContent>{reasoning.content}</ReasoningContent>
    </Reasoning>
  );
}
```

## Tool Component

The Tool component displays AI tool invocations with their parameters, status, and results.
```tsx
"use client";

import {
  Tool,
  ToolHeader,
  ToolContent,
  ToolInput,
  ToolOutput,
  getStatusBadge,
} from "@/components/ai-elements/tool";
import type { ToolUIPart } from "ai";

interface ToolCallProps {
  name: string;
  state: ToolUIPart["state"];
  input: Record<string, unknown>;
  output?: unknown;
  errorText?: string;
}

export function ToolCall({ name, state, input, output, errorText }: ToolCallProps) {
  return (
    <Tool defaultOpen={state !== "output-available"}>
      <ToolHeader type="tool-call" state={state} title={`Tool: ${name}`} />
      <ToolContent>
        <ToolInput input={input} />
        <ToolOutput output={output} errorText={errorText} />
      </ToolContent>
    </Tool>
  );
}

// Example usage with search tool
export function SearchToolExample() {
  return (
    <ToolCall
      name="web_search"
      state="output-available"
      input={{
        query: "React hooks best practices",
        source: "documentation",
        limit: 5,
      }}
      output={{
        results: [
          { title: "Rules of Hooks", url: "https://react.dev/warnings/invalid-hook-call-warning" },
          { title: "useState Hook", url: "https://react.dev/reference/react/useState" },
        ],
        totalResults: 2,
      }}
    />
  );
}

// Tool with error state
export function FailedToolExample() {
  return (
    <ToolCall
      name="api_call"
      state="output-error"
      input={{ endpoint: "/api/data", method: "GET" }}
      errorText="Connection timeout after 30 seconds"
    />
  );
}
```

## Sources Component

The Sources component displays citation sources used by the AI in generating responses.
```tsx
"use client";

import {
  Sources,
  SourcesTrigger,
  SourcesContent,
  Source,
} from "@/components/ai-elements/sources";

interface SourceData {
  href: string;
  title: string;
}

export function SourcesExample({ sources }: { sources: SourceData[] }) {
  if (sources.length === 0) return null;

  return (
    <Sources>
      <SourcesTrigger count={sources.length} />
      <SourcesContent>
        {sources.map((source) => (
          <Source key={source.href} href={source.href} title={source.title} />
        ))}
      </SourcesContent>
    </Sources>
  );
}

// Example with React documentation sources
export function DocumentationSources() {
  const sources = [
    { href: "https://react.dev/reference/react", title: "React Documentation" },
    { href: "https://react.dev/reference/react-dom", title: "React DOM Reference" },
    { href: "https://react.dev/learn", title: "React Tutorial" },
  ];

  return <SourcesExample sources={sources} />;
}
```

## Suggestion Component

The Suggestion component displays clickable prompt suggestions for users.

```tsx
"use client";

import { useCallback } from "react";
import { Suggestions, Suggestion } from "@/components/ai-elements/suggestion";

const prompts = [
  "What are the latest trends in AI?",
  "How does machine learning work?",
  "Explain quantum computing",
  "Best practices for React development",
];

export function SuggestionsList({ onSelect }: { onSelect: (text: string) => void }) {
  const handleClick = useCallback((suggestion: string) => {
    onSelect(suggestion);
  }, [onSelect]);

  return (
    <Suggestions className="px-4">
      {prompts.map((prompt) => (
        <Suggestion key={prompt} suggestion={prompt} onClick={handleClick} />
      ))}
    </Suggestions>
  );
}
```

## ModelSelector Component

The ModelSelector component provides a searchable dialog for selecting AI models from various providers.
```tsx
"use client";

import { useState, useCallback } from "react";
import {
  ModelSelector,
  ModelSelectorTrigger,
  ModelSelectorContent,
  ModelSelectorInput,
  ModelSelectorList,
  ModelSelectorEmpty,
  ModelSelectorGroup,
  ModelSelectorItem,
  ModelSelectorLogo,
  ModelSelectorLogoGroup,
  ModelSelectorName,
} from "@/components/ai-elements/model-selector";
import { Button } from "@/components/ui/button";
import { CheckIcon } from "lucide-react";

const models = [
  { id: "gpt-4o", name: "GPT-4o", provider: "openai", providers: ["openai", "azure"] },
  { id: "claude-sonnet-4", name: "Claude 4 Sonnet", provider: "anthropic", providers: ["anthropic", "google"] },
  { id: "gemini-2.0-flash", name: "Gemini 2.0 Flash", provider: "google", providers: ["google"] },
];

export function ModelSelectorExample() {
  const [selectedModel, setSelectedModel] = useState(models[0].id);
  const [open, setOpen] = useState(false);
  const selected = models.find((m) => m.id === selectedModel);

  const handleSelect = useCallback((modelId: string) => {
    setSelectedModel(modelId);
    setOpen(false);
  }, []);

  return (
    <ModelSelector open={open} onOpenChange={setOpen}>
      <ModelSelectorTrigger asChild>
        <Button variant="outline">
          {selected && <ModelSelectorLogo provider={selected.provider} />}
          <ModelSelectorName>{selected?.name ?? "Select model"}</ModelSelectorName>
        </Button>
      </ModelSelectorTrigger>
      <ModelSelectorContent>
        <ModelSelectorInput placeholder="Search models..." />
        <ModelSelectorList>
          <ModelSelectorEmpty>No models found.</ModelSelectorEmpty>
          {["openai", "anthropic", "google"].map((provider) => (
            <ModelSelectorGroup key={provider} heading={provider.charAt(0).toUpperCase() + provider.slice(1)}>
              {models.filter((m) => m.provider === provider).map((model) => (
                <ModelSelectorItem
                  key={model.id}
                  value={model.id}
                  onSelect={() => handleSelect(model.id)}
                >
                  <ModelSelectorLogo provider={model.provider} />
                  <ModelSelectorName>{model.name}</ModelSelectorName>
                  <ModelSelectorLogoGroup>
                    {model.providers.map((p) => (
                      <ModelSelectorLogo key={p} provider={p} />
                    ))}
                  </ModelSelectorLogoGroup>
                  {selectedModel === model.id && <CheckIcon className="ml-auto size-4" />}
                </ModelSelectorItem>
              ))}
            </ModelSelectorGroup>
          ))}
        </ModelSelectorList>
      </ModelSelectorContent>
    </ModelSelector>
  );
}
```

## FileTree Component

The FileTree component displays hierarchical file and folder structures with expand/collapse and selection support.

```tsx
"use client";

import { useState, useCallback } from "react";
import {
  FileTree,
  FileTreeFolder,
  FileTreeFile,
  FileTreeIcon,
  FileTreeName,
} from "@/components/ai-elements/file-tree";
import { FileCodeIcon, FileJsonIcon } from "lucide-react";

export function FileTreeExample() {
  const [selectedPath, setSelectedPath] = useState<string | undefined>();
  const [expanded, setExpanded] = useState(new Set(["src", "src/components"]));

  const handleSelect = useCallback((path: string) => {
    setSelectedPath(path);
    console.log("Selected:", path);
  }, []);

  return (
    <FileTree
      selectedPath={selectedPath}
      onSelect={handleSelect}
      expanded={expanded}
      onExpandedChange={setExpanded}
    >
      <FileTreeFolder path="src" name="src">
        <FileTreeFolder path="src/components" name="components">
          <FileTreeFile
            path="src/components/Button.tsx"
            name="Button.tsx"
            icon={<FileCodeIcon className="size-4 text-blue-500" />}
          />
          <FileTreeFile
            path="src/components/Input.tsx"
            name="Input.tsx"
            icon={<FileCodeIcon className="size-4 text-blue-500" />}
          />
        </FileTreeFolder>
        <FileTreeFile path="src/index.ts" name="index.ts" />
      </FileTreeFolder>
      <FileTreeFile
        path="package.json"
        name="package.json"
        icon={<FileJsonIcon className="size-4 text-yellow-500" />}
      />
    </FileTree>
  );
}
```

## Attachments Component

The Attachments component displays file attachments with preview, removal, and multiple layout variants.

```tsx
"use client";

import {
  Attachments,
  Attachment,
  AttachmentPreview,
  AttachmentInfo,
  AttachmentRemove,
  AttachmentHoverCard,
  AttachmentHoverCardTrigger,
  AttachmentHoverCardContent,
  getMediaCategory,
} from "@/components/ai-elements/attachments";
import type { FileUIPart } from "ai";

interface AttachmentFile extends FileUIPart {
  id: string;
}

export function AttachmentsGrid({ files, onRemove }: {
  files: AttachmentFile[];
  onRemove: (id: string) => void;
}) {
  return (
    <Attachments variant="grid">
      {files.map((file) => (
        <Attachment key={file.id} data={file} onRemove={() => onRemove(file.id)}>
          <AttachmentPreview />
          <AttachmentRemove />
        </Attachment>
      ))}
    </Attachments>
  );
}

export function AttachmentsList({ files, onRemove }: {
  files: AttachmentFile[];
  onRemove: (id: string) => void;
}) {
  return (
    <Attachments variant="list">
      {files.map((file) => (
        <Attachment key={file.id} data={file} onRemove={() => onRemove(file.id)}>
          <AttachmentPreview />
          <AttachmentInfo showMediaType />
          <AttachmentRemove />
        </Attachment>
      ))}
    </Attachments>
  );
}

// Inline variant with hover preview
export function AttachmentsInline({ files }: { files: AttachmentFile[] }) {
  return (
    <Attachments variant="inline">
      {files.map((file) => (
        <AttachmentHoverCard key={file.id}>
          <AttachmentHoverCardTrigger asChild>
            <Attachment data={file}>
              <AttachmentPreview />
              <AttachmentInfo />
            </Attachment>
          </AttachmentHoverCardTrigger>
          <AttachmentHoverCardContent>
            <img src={file.url} alt={file.filename} className="max-w-xs rounded" />
          </AttachmentHoverCardContent>
        </AttachmentHoverCard>
      ))}
    </Attachments>
  );
}
```

## Backend API Route Example

Create a Next.js API route to handle chat
requests with the AI SDK.

```tsx
// app/api/chat/route.ts
import { streamText, UIMessage, convertToModelMessages } from "ai";
import { openai } from "@ai-sdk/openai";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: convertToModelMessages(messages),
    system: "You are a helpful assistant.",
  });

  return result.toUIMessageStreamResponse();
}
```

## Complete Chat Application Example

A full-featured chat application combining multiple AI Elements components.

```tsx
"use client";

import { useChat } from "@ai-sdk/react";
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent, MessageResponse } from "@/components/ai-elements/message";
import { Reasoning, ReasoningTrigger, ReasoningContent } from "@/components/ai-elements/reasoning";
import { Sources, SourcesTrigger, SourcesContent, Source } from "@/components/ai-elements/sources";
import { Tool, ToolHeader, ToolContent, ToolInput, ToolOutput } from "@/components/ai-elements/tool";
import {
  PromptInput,
  PromptInputTextarea,
  PromptInputFooter,
  PromptInputSubmit,
  type PromptInputMessage,
} from "@/components/ai-elements/prompt-input";
import { Suggestions, Suggestion } from "@/components/ai-elements/suggestion";
import type { ToolUIPart } from "ai";

export default function ChatApp() {
  const { messages, sendMessage, status, stop } = useChat();

  const handleSubmit = async (message: PromptInputMessage) => {
    if (!message.text?.trim()) return;
    await sendMessage({ text: message.text });
  };

  const handleSuggestion = (text: string) => {
    sendMessage({ text });
  };

  return (
    <div className="flex flex-col h-screen max-w-4xl mx-auto p-4">
      <Conversation className="flex-1">
        <ConversationContent>
          {messages.map((msg) => (
            <Message key={msg.id} from={msg.role}>
              <MessageContent>
                {msg.parts.map((part, i) => {
                  if (part.type === "text") {
                    return <MessageResponse key={i}>{part.text}</MessageResponse>;
                  }
                  if (part.type === "reasoning") {
                    return (
                      <Reasoning key={i} isStreaming={status === "streaming"}>
                        <ReasoningTrigger />
                        <ReasoningContent>{part.text}</ReasoningContent>
                      </Reasoning>
                    );
                  }
                  if (part.type === "source-url") {
                    return (
                      <Sources key={i}>
                        <SourcesTrigger count={1} />
                        <SourcesContent>
                          <Source href={part.url} title={part.title} />
                        </SourcesContent>
                      </Sources>
                    );
                  }
                  if (part.type.startsWith("tool-")) {
                    const tool = part as ToolUIPart;
                    return (
                      <Tool key={i}>
                        <ToolHeader type={tool.type} state={tool.state} />
                        <ToolContent>
                          <ToolInput input={tool.input} />
                          <ToolOutput output={tool.output} errorText={tool.errorText} />
                        </ToolContent>
                      </Tool>
                    );
                  }
                  return null;
                })}
              </MessageContent>
            </Message>
          ))}
        </ConversationContent>
        <ConversationScrollButton />
      </Conversation>
      {messages.length === 0 && (
        <Suggestions className="mb-4">
          {["Explain React hooks", "Write a TypeScript function", "Debug my code"].map((s) => (
            <Suggestion key={s} suggestion={s} onClick={handleSuggestion} />
          ))}
        </Suggestions>
      )}
      <PromptInput onSubmit={handleSubmit}>
        <PromptInputTextarea placeholder="Ask anything..." />
        <PromptInputFooter>
          <PromptInputSubmit status={status} onStop={stop} />
        </PromptInputFooter>
      </PromptInput>
    </div>
  );
}
```

## Summary

AI Elements provides a comprehensive set of React components for building AI-powered chat interfaces, code editors, and workflow visualizations. The library's composable architecture allows developers to mix and match components to create custom experiences while maintaining consistent styling through Tailwind CSS.

Key integration patterns include using the Vercel AI SDK's `useChat` hook for state management, streaming responses with `streamText`, and handling tool invocations through the Tool component. The library excels at rapid prototyping of AI applications while remaining flexible enough for production use.
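The per-part dispatch at the heart of these chat UIs, which maps AI SDK message parts to AI Elements components, can be isolated as a plain function. A minimal sketch: the simplified `Part` type and the `componentFor` helper below are illustrative stand-ins, not part of the library.

```typescript
// Simplified stand-in for the AI SDK's UIMessage part union (illustrative only).
type Part =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string }
  | { type: "source-url"; url: string; title?: string }
  | { type: `tool-${string}`; state: string };

// Map each part to the AI Elements component that would render it.
function componentFor(part: Part): string {
  if (part.type === "text") return "MessageResponse";
  if (part.type === "reasoning") return "Reasoning";
  if (part.type === "source-url") return "Sources";
  if (part.type.startsWith("tool-")) return "Tool";
  return "null";
}
```

Keeping this dispatch in one place makes it easy to add handling for new part types (files, custom data parts) without touching the surrounding JSX.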
Components like Conversation, Message, and PromptInput handle common chat UI patterns, while specialized components like CodeBlock, Reasoning, and Tool display AI-specific content. The CLI tool simplifies installation by automatically adding components to your shadcn/ui setup, and since all components are copied into your project, you have full control over customization without dependency lock-in.
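Because the components live in your project, their internals are yours to adapt. As one example of what they encapsulate, PromptInput's `accept`, `maxFiles`, and `maxFileSize` props imply a file-validation step roughly like the following — a hypothetical re-implementation for illustration (the error codes are invented, not the library's actual ones):

```typescript
interface FileMeta {
  name: string;
  size: number;      // bytes
  mediaType: string; // e.g. "image/png"
}

interface Constraints {
  accept?: string;      // e.g. "image/*"
  maxFiles?: number;
  maxFileSize?: number; // bytes
}

// Return a hypothetical error code (mirroring the onError callback) or null if the files pass.
function validateFiles(files: FileMeta[], c: Constraints): string | null {
  if (c.maxFiles !== undefined && files.length > c.maxFiles) return "max_files";
  for (const f of files) {
    if (c.maxFileSize !== undefined && f.size > c.maxFileSize) return "max_file_size";
    if (c.accept) {
      // Support simple "type/subtype" and wildcard "type/*" patterns.
      const [type, subtype] = c.accept.split("/");
      const [fileType, fileSubtype] = f.mediaType.split("/");
      if (type !== fileType || (subtype !== "*" && subtype !== fileSubtype)) return "accept";
    }
  }
  return null;
}
```

Copied-in components mean this kind of logic can be tweaked directly, e.g. to allow multiple `accept` patterns, without forking a dependency.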