Laravel AI SDK
https://github.com/laravel/ai
Context Summary (auto-generated)
# Laravel AI SDK

The Laravel AI SDK provides a unified, expressive API for interacting with AI providers such as OpenAI, Anthropic, Gemini, Azure OpenAI, Mistral, Groq, DeepSeek, Ollama, and more. The SDK enables building intelligent agents with tools and structured output, generating images, synthesizing and transcribing audio, creating vector embeddings, and performing document reranking, all through a consistent, Laravel-friendly interface.

The SDK follows Laravel's conventions with facades, service providers, and Artisan commands for scaffolding. It supports multiple AI providers with automatic failover, streaming responses, queued operations, conversation memory, and comprehensive testing utilities through fake responses. The architecture uses a gateway pattern to abstract provider-specific implementations while exposing a clean, fluent API.

## Creating an Agent

Agents are the primary way to interact with AI models. Create a custom agent class by implementing the `Agent` interface with `instructions()`, `messages()`, and `tools()` methods. Use the `Promptable` trait to gain prompt, stream, queue, and broadcast capabilities.

```php
<?php

namespace App\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\Conversational;
use Laravel\Ai\Contracts\HasTools;
use Laravel\Ai\Contracts\Tool;
use Laravel\Ai\Messages\Message;
use Laravel\Ai\Promptable;
use Stringable;

class CustomerSupportAgent implements Agent, Conversational, HasTools
{
    use Promptable;

    public function instructions(): Stringable|string
    {
        return 'You are a helpful customer support assistant for an e-commerce platform. '
            .'Help users with order tracking, returns, and product questions.';
    }

    /**
     * @return Message[]
     */
    public function messages(): iterable
    {
        return [];
    }

    /**
     * @return Tool[]
     */
    public function tools(): iterable
    {
        return [
            new \App\Tools\OrderLookupTool(),
            new \App\Tools\ProductSearchTool(),
        ];
    }
}

// Usage
$response = CustomerSupportAgent::make()->prompt('Where is my order #12345?');

echo $response->text;
// Output: "I found your order #12345. It was shipped on January 15th and is currently in transit..."
```

## Using the agent() Helper Function

The `agent()` helper function creates anonymous agents for quick, ad-hoc AI interactions without defining a full agent class. It accepts instructions, conversation messages, tools, and an optional schema for structured output.

```php
<?php

use function Laravel\Ai\agent;

// Simple text generation
$response = agent('You are a helpful writing assistant.')
    ->prompt('Write a haiku about Laravel');

echo $response->text;
// Output: "Elegant framework / Artisan commands bring joy / Code flows like a stream"

// With conversation history
$response = agent(
    instructions: 'You are a math tutor.',
    messages: [
        ['role' => 'user', 'content' => 'What is 2 + 2?'],
        ['role' => 'assistant', 'content' => 'The answer is 4.'],
    ]
)->prompt('And what if we multiply that by 3?');

echo $response->text;
// Output: "If we take 4 and multiply it by 3, we get 12."
```

## Structured Output with Schema

Create agents that return structured JSON data by implementing `HasStructuredOutput` and defining a schema. The AI model will return data matching your specified structure.
```php
<?php

namespace App\Agents;

use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\Conversational;
use Laravel\Ai\Contracts\HasStructuredOutput;
use Laravel\Ai\Contracts\HasTools;
use Laravel\Ai\Promptable;

class SentimentAnalyzer implements Agent, Conversational, HasStructuredOutput, HasTools
{
    use Promptable;

    public function instructions(): string
    {
        return 'Analyze the sentiment of the provided text.';
    }

    public function messages(): iterable
    {
        return [];
    }

    public function tools(): iterable
    {
        return [];
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'sentiment' => $schema->string()->enum(['positive', 'negative', 'neutral'])->required(),
            'confidence' => $schema->number()->minimum(0)->maximum(1)->required(),
            'keywords' => $schema->array($schema->string())->required(),
        ];
    }
}

// Usage
$response = SentimentAnalyzer::make()->prompt('I absolutely love this product! Best purchase ever!');

$data = json_decode($response->text, true);
// $data = [
//     'sentiment' => 'positive',
//     'confidence' => 0.95,
//     'keywords' => ['love', 'best', 'purchase'],
// ]
```

## Creating Tools for Agents

Tools extend agent capabilities by allowing them to execute actions. Implement the `Tool` interface with `description()`, `schema()`, and `handle()` methods. Tools receive a `Request` object containing the AI-provided arguments.
```php
<?php

namespace App\Tools;

use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Tool;
use Laravel\Ai\Tools\Request;
use App\Models\Order;
use Stringable;

class OrderLookupTool implements Tool
{
    public function description(): Stringable|string
    {
        return 'Look up order details by order ID or customer email.';
    }

    public function handle(Request $request): Stringable|string
    {
        $orderId = $request->get('order_id');
        $email = $request->get('email');

        $order = Order::where('id', $orderId)
            ->orWhere('customer_email', $email)
            ->first();

        if (! $order) {
            return 'Order not found.';
        }

        return json_encode([
            'order_id' => $order->id,
            'status' => $order->status,
            'shipped_at' => $order->shipped_at?->toDateString(),
            'items' => $order->items->pluck('name')->toArray(),
        ]);
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'order_id' => $schema->string()->description('The order ID to look up'),
            'email' => $schema->string()->format('email')->description('Customer email address'),
        ];
    }
}

// The agent automatically calls this tool when needed
$response = CustomerSupportAgent::make()
    ->prompt('Can you check the status of order #ORD-2024-001?');
```

## Streaming Responses

Stream AI responses in real-time for a better user experience. Use the `stream()` method to receive incremental text deltas as they're generated. Streaming responses implement `IteratorAggregate` and `Responsable` for easy integration.
```php
<?php

use function Laravel\Ai\agent;

// Stream to console/logs
$stream = agent('You are a storyteller.')
    ->stream('Tell me a short story about a brave knight.');

foreach ($stream as $event) {
    echo $event; // Each event contains a text delta
}

// Stream as HTTP response (in a controller)
public function chat(Request $request)
{
    return agent('You are a helpful assistant.')
        ->stream($request->input('message'));
}

// Use Vercel AI SDK protocol for frontend integration
public function streamWithVercel(Request $request)
{
    return agent('You are a helpful assistant.')
        ->stream($request->input('message'))
        ->usingVercelDataProtocol();
}

// Execute callback when streaming completes
agent('You are a helpful assistant.')
    ->stream('Explain quantum computing')
    ->then(function ($response) {
        // $response contains full text, usage stats, and metadata
        Log::info('Completed stream', [
            'tokens' => $response->usage->totalTokens,
        ]);
    })
    ->each(fn ($event) => broadcast(new StreamEvent($event)));
```

## Broadcasting Streamed Responses

Broadcast AI responses over WebSockets in real-time using Laravel's broadcasting system. Perfect for chat applications and real-time AI interactions.
```php
<?php

use App\Agents\ChatAgent;
use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\PrivateChannel;

// Broadcast to a public channel
$response = ChatAgent::make()
    ->broadcast(
        prompt: 'What are the top 5 programming languages?',
        channels: new Channel('chat-room'),
    );

// Broadcast to a private channel (e.g., user-specific)
$response = ChatAgent::make()
    ->broadcastNow(
        prompt: $userMessage,
        channels: new PrivateChannel('chat.'.$user->id),
    );

// Queue the broadcast for background processing
$response = ChatAgent::make()
    ->broadcastOnQueue(
        prompt: 'Generate a detailed report on sales data.',
        channels: [
            new PrivateChannel('reports.'.$user->id),
            new Channel('admin-notifications'),
        ],
    );
```

## Conversation Memory

Enable conversation memory using the `RemembersConversations` trait to maintain context across multiple interactions. Conversations are automatically stored and retrieved.

```php
<?php

namespace App\Agents;

use Laravel\Ai\Concerns\RemembersConversations;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\Conversational;
use Laravel\Ai\Contracts\HasTools;
use Laravel\Ai\Promptable;

class ConversationalAssistant implements Agent, Conversational, HasTools
{
    use Promptable, RemembersConversations;

    public function instructions(): string
    {
        return 'You are a helpful assistant that remembers previous conversations.';
    }

    public function tools(): iterable
    {
        return [];
    }

    protected function maxConversationMessages(): int
    {
        return 50; // Limit context window
    }
}

// Start a new conversation for a user
$response = ConversationalAssistant::make()
    ->forUser($user)
    ->prompt('My name is John and I like Laravel.');

// Continue the same conversation later
$response = ConversationalAssistant::make()
    ->continue($response->conversationId, as: $user)
    ->prompt('What is my name?');

echo $response->text; // "Your name is John..."

// Continue the user's most recent conversation
$response = ConversationalAssistant::make()
    ->continueLastConversation(as: $user)
    ->prompt('What framework do I like?');

echo $response->text; // "You mentioned you like Laravel..."
```

## Specifying Providers and Models

Configure which AI provider and model to use for each request. Multiple providers can be listed for automatic failover when one fails.

```php
<?php

use function Laravel\Ai\agent;
use Laravel\Ai\Attributes\Provider;
use Laravel\Ai\Attributes\Model;
use Laravel\Ai\Attributes\UseSmartestModel;
use Laravel\Ai\Attributes\UseCheapestModel;

// Specify provider and model inline
$response = agent('You are a helpful assistant.')
    ->prompt(
        prompt: 'Explain machine learning',
        provider: 'anthropic',
        model: 'claude-sonnet-4-20250514'
    );

// Use provider failover (tries each in order)
$response = agent('You are a helpful assistant.')
    ->prompt(
        prompt: 'Explain machine learning',
        provider: [
            'openai' => 'gpt-4o',
            'anthropic' => 'claude-sonnet-4-20250514',
            'gemini' => 'gemini-2.0-flash',
        ]
    );

// Use attributes on agent classes
#[Provider('anthropic')]
#[Model('claude-sonnet-4-20250514')]
class ClaudeAgent implements Agent, Conversational, HasTools
{
    use Promptable;

    // ...
}

// Auto-select smartest or cheapest model
#[UseSmartestModel]
class ComplexReasoningAgent implements Agent, Conversational, HasTools
{
    use Promptable;

    // ...
}

#[UseCheapestModel]
class SimpleSummaryAgent implements Agent, Conversational, HasTools
{
    use Promptable;

    // ...
}
```

## Image Generation

Generate images using AI providers that support image generation. Configure size and quality, and provide reference images for editing.
```php
<?php

use Laravel\Ai\PendingResponses\PendingImageGeneration;
use Laravel\Ai\Files\Image;

// Basic image generation
$response = (new PendingImageGeneration('A futuristic cityscape at sunset with flying cars'))
    ->landscape()
    ->quality('high')
    ->generate();

// Save the generated image
Storage::put('images/cityscape.png', $response->content);

// With a specific provider
$response = (new PendingImageGeneration('A cute robot mascot for a tech company'))
    ->square()
    ->generate(provider: 'openai', model: 'dall-e-3');

// Image editing with reference images
$response = (new PendingImageGeneration('Add a rainbow in the sky'))
    ->attachments([
        Image::fromStorage('images/landscape.jpg'),
    ])
    ->generate();

// Different aspect ratios
$portrait = (new PendingImageGeneration('Professional headshot portrait'))
    ->portrait()
    ->quality('high')
    ->generate();

// Queue image generation for background processing
$queuedResponse = (new PendingImageGeneration('Complex detailed artwork'))
    ->landscape()
    ->quality('high')
    ->queue();
```

## Audio Generation (Text-to-Speech)

Convert text to speech using AI providers. Customize the voice and provide instructions for tone and style.

```php
<?php

use Laravel\Ai\Audio;

// Basic audio generation
$response = Audio::of('Welcome to our application! We hope you enjoy your experience.')
    ->generate();

Storage::put('audio/welcome.mp3', $response->content);

// Specify voice gender
$response = Audio::of('The weather today will be sunny with a high of 75 degrees.')
    ->male()
    ->generate();

// Custom voice and instructions
$response = Audio::of('Breaking news: Scientists discover new exoplanet!')
    ->voice('news-anchor')
    ->instructions('Speak with urgency and excitement, like a news anchor delivering breaking news.')
    ->timeout(60)
    ->generate(provider: 'eleven');

// Queue for background processing
$queuedResponse = Audio::of($longArticleText)
    ->female()
    ->queue();
```

## Audio Transcription (Speech-to-Text)

Transcribe audio files to text with support for language detection, speaker diarization, and various audio sources.

```php
<?php

use Laravel\Ai\PendingResponses\PendingTranscriptionGeneration;
use Laravel\Ai\Files\Audio;

// Transcribe from a local file
$audio = Audio::fromPath('/path/to/recording.mp3');

$response = (new PendingTranscriptionGeneration($audio))
    ->generate();

echo $response->text;

// Transcribe from storage
$audio = Audio::fromStorage('recordings/meeting.wav', 's3');

$response = (new PendingTranscriptionGeneration($audio))
    ->language('en')
    ->generate();

// Enable speaker diarization (identify different speakers)
$response = (new PendingTranscriptionGeneration(Audio::fromPath('meeting.mp3')))
    ->diarize()
    ->timeout(120)
    ->generate();

// Transcribe from a URL
$audio = Audio::fromUrl('https://example.com/podcast.mp3');

$response = (new PendingTranscriptionGeneration($audio))
    ->generate(provider: 'openai', model: 'whisper-1');

// Queue long transcriptions
$queuedResponse = (new PendingTranscriptionGeneration($audio))
    ->language('es')
    ->queue();
```

## Embeddings Generation

Generate vector embeddings for semantic search, similarity comparisons, and RAG (Retrieval Augmented Generation) applications.
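The embedding vectors the SDK returns are plain arrays of floats, so similarity comparisons need no SDK support at all. As a minimal sketch, here is a cosine-similarity helper in plain PHP (the `cosineSimilarity` function is our own illustration, not part of the SDK):

```php
<?php

// Hypothetical helper -- not part of the Laravel AI SDK. It assumes two
// equal-length float vectors, such as those produced by an embeddings call.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];       // accumulate the dot product
        $normA += $value * $value;     // squared magnitude of $a
        $normB += $b[$i] * $b[$i];     // squared magnitude of $b
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Identical vectors score 1.0; orthogonal vectors score 0.0.
echo cosineSimilarity([1.0, 0.0], [1.0, 0.0]), "\n"; // 1
echo cosineSimilarity([1.0, 0.0], [0.0, 1.0]), "\n"; // 0
```

In a semantic-search setup you would embed the query, compute this score against each stored document vector, and sort descending.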
```php
<?php

use Laravel\Ai\Embeddings;

// Generate embeddings for multiple inputs
$response = Embeddings::for([
    'Laravel is a PHP web framework',
    'Vue.js is a JavaScript framework',
    'React is a JavaScript library for building UIs',
])->generate();

// Access the embeddings (array of float vectors)
foreach ($response->embeddings as $index => $vector) {
    echo "Document $index: ".count($vector)." dimensions\n";
}

// Specify dimensions
$response = Embeddings::for(['Hello world'])
    ->dimensions(1536)
    ->generate(provider: 'openai', model: 'text-embedding-3-small');

// Enable caching for repeated queries
$response = Embeddings::for(['Frequently searched query'])
    ->cache(seconds: 86400) // Cache for 24 hours
    ->generate();

// Generate with a specific provider
$response = Embeddings::for($documents)
    ->timeout(60)
    ->generate(provider: 'voyageai', model: 'voyage-3');

// Generate fake embeddings for testing
$fakeVector = Embeddings::fakeEmbedding(dimensions: 1536);
```

## Document Reranking

Rerank documents based on relevance to a query. Useful for improving search results or RAG retrieval quality.

```php
<?php

use Laravel\Ai\Reranking;

// Basic reranking
$documents = [
    'Laravel is a PHP framework for web development.',
    'Python is a versatile programming language.',
    'PHP was created by Rasmus Lerdorf.',
    'Web frameworks help build applications faster.',
];

$response = Reranking::of($documents)
    ->rerank(query: 'Tell me about PHP web development');

// Results are sorted by relevance score
foreach ($response->results as $result) {
    echo "Score: {$result->score} - {$result->document}\n";
}

// Output:
// Score: 0.95 - Laravel is a PHP framework for web development.
// Score: 0.82 - Web frameworks help build applications faster.
// Score: 0.71 - PHP was created by Rasmus Lerdorf.
// Score: 0.23 - Python is a versatile programming language.

// Limit results
$response = Reranking::of($documents)
    ->limit(3)
    ->rerank('PHP frameworks', provider: 'cohere');
```

## Agent Middleware

Create middleware to intercept and modify agent prompts and responses. Useful for logging, caching, rate limiting, or transforming data.

```php
<?php

namespace App\Middleware;

use Closure;
use Laravel\Ai\Prompts\AgentPrompt;
use Laravel\Ai\Responses\AgentResponse;
use Illuminate\Support\Facades\Log;

class LogAgentInteractions
{
    public function handle(AgentPrompt $prompt, Closure $next)
    {
        Log::info('Agent prompt', [
            'agent' => get_class($prompt->agent),
            'prompt' => $prompt->prompt,
        ]);

        return $next($prompt)->then(function (AgentResponse $response) use ($prompt) {
            Log::info('Agent response', [
                'agent' => get_class($prompt->agent),
                'tokens' => $response->usage->totalTokens,
            ]);
        });
    }
}

// Apply middleware to an agent
use Laravel\Ai\Contracts\HasMiddleware;

class LoggedAgent implements Agent, Conversational, HasTools, HasMiddleware
{
    use Promptable;

    public function middleware(): array
    {
        return [
            LogAgentInteractions::class,
            RateLimitMiddleware::class,
        ];
    }

    public function instructions(): string
    {
        return 'You are a helpful assistant.';
    }

    public function messages(): iterable
    {
        return [];
    }

    public function tools(): iterable
    {
        return [];
    }
}
```

## Queueing Agent Operations

Queue agent prompts for background processing. Perfect for long-running AI tasks that shouldn't block the request.
```php
<?php

use App\Agents\ReportGenerator;
use App\Agents\DataAnalyzer;

// Queue a prompt
$queuedResponse = ReportGenerator::make()
    ->queue(
        prompt: 'Generate a comprehensive sales report for Q4 2024',
        provider: 'anthropic',
        model: 'claude-sonnet-4-20250514'
    );

// Configure the queued job
$queuedResponse = DataAnalyzer::make()
    ->queue('Analyze customer churn patterns')
    ->onQueue('ai-tasks')
    ->onConnection('redis')
    ->delay(now()->addMinutes(5));

// Handle completion via events or job chaining.
// The response will be available when the job completes.
```

## Attachments (Images in Prompts)

Attach images to agent prompts for vision/multimodal AI capabilities. Supported image sources include uploads, storage, URLs, and base64 data.

```php
<?php

use function Laravel\Ai\agent;
use Laravel\Ai\Files\Image;

// Attach an image from a URL
$response = agent('You are an image analysis assistant.')
    ->prompt(
        prompt: 'Describe what you see in this image.',
        attachments: [
            Image::fromUrl('https://example.com/photo.jpg'),
        ]
    );

// Attach an image from storage
$response = agent('You are a product description writer.')
    ->prompt(
        prompt: 'Write a compelling product description.',
        attachments: [
            Image::fromStorage('products/shoe.png', 's3'),
        ]
    );

// Attach an uploaded file (in a controller)
public function analyze(Request $request)
{
    $response = agent('Analyze the uploaded image.')
        ->prompt(
            prompt: $request->input('question', 'What is in this image?'),
            attachments: [
                Image::fromUpload($request->file('image')),
            ]
        );

    return response()->json(['analysis' => $response->text]);
}

// Multiple attachments from different sources
$response = agent('Compare these images.')
    ->prompt(
        prompt: 'What are the differences between these two images?',
        attachments: [
            Image::fromPath('/local/image1.png'),
            Image::fromBase64($base64Data, 'image/png'),
        ]
    );
```

## Testing with Fakes

Use fake responses to test AI interactions without making actual API calls, and assert that prompts were sent with the expected content.
```php
<?php

use Laravel\Ai\Ai;
use Laravel\Ai\Audio;
use Laravel\Ai\Embeddings;
use App\Agents\CustomerSupportAgent;

// Fake agent responses
CustomerSupportAgent::fake([
    'Your order #12345 is on its way!',
    'I can help you with returns.',
]);

$response = CustomerSupportAgent::make()->prompt('Where is my order?');
$this->assertEquals('Your order #12345 is on its way!', $response->text);

// Assert prompts were made
CustomerSupportAgent::assertPrompted('order');
CustomerSupportAgent::assertPrompted(fn ($prompt) => str_contains($prompt, 'order'));
CustomerSupportAgent::assertNotPrompted('refund');

// Fake with closures for dynamic responses
CustomerSupportAgent::fake([
    fn ($prompt) => "You asked: {$prompt}",
]);

// Fake audio generation
Audio::fake(['fake-audio-content']);
$response = Audio::of('Hello world')->generate();
Audio::assertGenerated(fn ($text, $voice) => $text === 'Hello world');

// Fake embeddings
Embeddings::fake([
    [Embeddings::fakeEmbedding(1536)],
]);
$response = Embeddings::for(['test'])->generate();
Embeddings::assertGenerated(fn ($inputs) => $inputs === ['test']);

// Fake images
use Laravel\Ai\PendingResponses\PendingImageGeneration;

Ai::fakeImages(['fake-image-data']);
$response = (new PendingImageGeneration('A cat'))->generate();
Ai::assertImageGenerated(fn ($prompt) => str_contains($prompt, 'cat'));
```

## Configuration

Configure AI providers, default settings, and caching in the `config/ai.php` file. Each provider requires its own API key and optional settings.
```php
<?php

// config/ai.php
return [
    // Default providers for different operations
    'default' => 'openai',
    'default_for_images' => 'gemini',
    'default_for_audio' => 'openai',
    'default_for_transcription' => 'openai',
    'default_for_embeddings' => 'openai',
    'default_for_reranking' => 'cohere',

    // Embedding cache settings
    'caching' => [
        'embeddings' => [
            'cache' => false,
            'store' => env('CACHE_STORE', 'database'),
        ],
    ],

    // Provider configurations
    'providers' => [
        'openai' => [
            'driver' => 'openai',
            'key' => env('OPENAI_API_KEY'),
            'url' => env('OPENAI_URL', 'https://api.openai.com/v1'),
        ],
        'anthropic' => [
            'driver' => 'anthropic',
            'key' => env('ANTHROPIC_API_KEY'),
        ],
        'gemini' => [
            'driver' => 'gemini',
            'key' => env('GEMINI_API_KEY'),
        ],
        'azure' => [
            'driver' => 'azure',
            'key' => env('AZURE_OPENAI_API_KEY'),
            'url' => env('AZURE_OPENAI_URL'),
            'api_version' => env('AZURE_OPENAI_API_VERSION', '2024-10-21'),
            'deployment' => env('AZURE_OPENAI_DEPLOYMENT', 'gpt-4o'),
        ],
        'ollama' => [
            'driver' => 'ollama',
            'key' => env('OLLAMA_API_KEY', ''),
            'url' => env('OLLAMA_BASE_URL', 'http://localhost:11434'),
        ],
        'cohere' => [
            'driver' => 'cohere',
            'key' => env('COHERE_API_KEY'),
        ],
    ],
];

// .env configuration
// OPENAI_API_KEY=sk-...
// ANTHROPIC_API_KEY=sk-ant-...
// GEMINI_API_KEY=...
```

## Artisan Commands

Use Artisan commands to quickly scaffold agents, tools, and middleware. Generated files follow Laravel conventions and implement the required interfaces.
```bash
# Create a new agent
php artisan make:agent CustomerSupportAgent

# Create an agent with structured output
php artisan make:agent SentimentAnalyzer --structured

# Create a tool
php artisan make:tool WeatherLookupTool

# Create agent middleware
php artisan make:agent-middleware LoggingMiddleware

# Interactive chat in the terminal
php artisan ai:chat
php artisan ai:chat --provider=anthropic --model=claude-sonnet-4-20250514
```

The Laravel AI SDK simplifies building AI-powered applications by providing a unified interface across multiple providers, handling the complexities of API integration, streaming, queuing, and conversation management. The fluent API design makes it natural to chain operations and configure behavior, while the testing utilities enable confident development with comprehensive fakes and assertions.

Integration patterns typically involve creating dedicated agent classes for specific use cases (customer support, content generation, data analysis), registering tools that connect to your application's domain logic, and leveraging middleware for cross-cutting concerns like logging and rate limiting. The SDK's support for queued operations and broadcasting makes it well-suited for both synchronous chat interfaces and background AI processing pipelines.
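The provider failover described earlier ("tries each in order") can be sketched conceptually in plain PHP. This is a minimal illustration of the gateway idea, not the SDK's actual internals; the `promptWithFailover` function and the fake providers below are hypothetical:

```php
<?php

// Conceptual sketch of provider failover -- NOT the SDK's real implementation.
// Each "provider" is modeled as a callable that either returns a response
// string or throws; the gateway tries each in order until one succeeds.
function promptWithFailover(array $providers, string $prompt): string
{
    $lastError = null;

    foreach ($providers as $name => $provider) {
        try {
            return $provider($prompt);
        } catch (RuntimeException $e) {
            $lastError = $e; // remember the failure and fall through to the next provider
        }
    }

    throw new RuntimeException('All providers failed', previous: $lastError);
}

$providers = [
    'openai' => fn (string $p) => throw new RuntimeException('rate limited'),
    'anthropic' => fn (string $p) => "anthropic: {$p}",
];

echo promptWithFailover($providers, 'Explain machine learning');
// anthropic: Explain machine learning
```

The real SDK layers retries, streaming, and per-provider drivers on top of this basic idea, but the ordered try/catch loop is the essence of the failover array shown in the "Specifying Providers and Models" section.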