# LLM Tornado

LLM Tornado is a comprehensive .NET toolkit for building AI agents and workflows with unified access to 100+ API providers and vector databases. The library provides a single, consistent interface for models from OpenAI, Anthropic, Google, Cohere, Groq, DeepSeek, Mistral, and many others, along with self-hosted solutions such as Ollama and vLLM. It eliminates vendor lock-in: developers write code once and switch between providers by changing a single model parameter.

The framework includes powerful abstractions for multi-agent systems, streaming responses, multimodal inputs (text, images, video, audio), function calling, embeddings, and vector databases. It supports cutting-edge protocols such as the Model Context Protocol (MCP) for tool integration and Agent-to-Agent (A2A) for cross-platform agent collaboration. With built-in support for Microsoft.Extensions.AI, LLM Tornado integrates seamlessly with Semantic Kernel and other .NET AI frameworks, making it production-ready for enterprise applications processing billions of tokens monthly.

## API Initialization

### Initialize with multiple providers

```csharp
using LlmTornado;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi([
    new ProviderAuthentication(LLmProviders.OpenAi, "OPEN_AI_KEY"),
    new ProviderAuthentication(LLmProviders.Anthropic, "ANTHROPIC_KEY"),
    new ProviderAuthentication(LLmProviders.Cohere, "COHERE_KEY"),
    new ProviderAuthentication(LLmProviders.Google, "GOOGLE_KEY"),
    new ProviderAuthentication(LLmProviders.Groq, "GROQ_KEY"),
    new ProviderAuthentication(LLmProviders.DeepSeek, "DEEP_SEEK_KEY"),
    new ProviderAuthentication(LLmProviders.Mistral, "MISTRAL_KEY"),
    new ProviderAuthentication(LLmProviders.XAi, "XAI_KEY"),
    new ProviderAuthentication(LLmProviders.Perplexity, "PERPLEXITY_KEY")
]);

// Use any model - the correct API key is automatically selected
string? response = await api.Chat.CreateConversation(ChatModel.OpenAi.Gpt4.O)
    .AppendSystemMessage("You are a helpful assistant.")
    .AppendUserInput("What is the capital of France?")
    .GetResponse();

Console.WriteLine(response);
```

### Initialize with self-hosted providers

```csharp
using LlmTornado;
using LlmTornado.Chat.Models;

// For Ollama (default port 11434)
TornadoApi api = new TornadoApi(new Uri("http://localhost:11434"));

await api.Chat.CreateConversation(new ChatModel("llama3.2:3b"))
    .AppendUserInput("Explain quantum computing in simple terms")
    .StreamResponse(Console.Write);
```

## Chat Completion

### Basic chat completion with multiple providers

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi(LLmProviders.OpenAi, "YOUR_API_KEY");

ChatResult? result = await api.Chat.CreateChatCompletion(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.Turbo,
    ResponseFormat = ChatRequestResponseFormats.Json,
    Messages = [
        new ChatMessage(ChatMessageRoles.System, "Solve the math problem given by user, respond in JSON format."),
        new ChatMessage(ChatMessageRoles.User, "2+2=?")
    ]
});

Console.WriteLine(result?.Choices?[0].Message?.Content ??
    "no response");
```

### Streaming chat responses

```csharp
using LlmTornado;
using LlmTornado.Chat.Models;

TornadoApi api = new TornadoApi("YOUR_API_KEY");

await api.Chat.CreateConversation(ChatModel.Anthropic.Claude3.Sonnet)
    .AppendSystemMessage("You are a fortune teller.")
    .AppendUserInput("What will my future bring?")
    .StreamResponse(Console.Write);
```

### Structured JSON output

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi("YOUR_API_KEY");

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O240806,
    ResponseFormat = ChatRequestResponseFormats.StructuredJson("get_weather", new
    {
        type = "object",
        properties = new
        {
            city = new { type = "string" },
            temperature = new { type = "number" }
        },
        required = new List<string> { "city" },
        additionalProperties = false
    })
});

chat.AppendUserInput("What is the weather in Prague?");

ChatRichResponse response = await chat.GetResponseRich();
Console.WriteLine(response);
```

## Function Calling (Tools)

### Basic tool calling with automatic resolution

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.ChatFunctions;
using LlmTornado.Code;

TornadoApi api = new TornadoApi("YOUR_API_KEY");

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O,
    Tools = [
        new Tool(new ToolFunction("get_weather", "gets the current weather", new
        {
            type = "object",
            properties = new
            {
                location = new
                {
                    type = "string",
                    description = "The location for which the weather information is required."
                }
            },
            required = new List<string> { "location" }
        }))
    ]
})
.AppendSystemMessage("You are a helpful assistant")
.AppendUserInput("What is the weather like today in Prague?");

ChatStreamEventHandler handler = new ChatStreamEventHandler
{
    MessageTokenHandler = (x) =>
    {
        Console.Write(x);
        return Task.CompletedTask;
    },
    FunctionCallHandler = (calls) =>
    {
        calls.ForEach(x => x.Result = new FunctionResult(x, "A mild rain is expected around noon.", null));
        return Task.CompletedTask;
    },
    AfterFunctionCallsResolvedHandler = async (results, handler) =>
    {
        await chat.StreamResponseRich(handler);
    }
};

await chat.StreamResponseRich(handler);
```

### Required tool calling (forced extraction)

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.ChatFunctions;
using LlmTornado.Code;

TornadoApi api = new TornadoApi("YOUR_API_KEY");

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O241120,
    Tools = [new Tool(new ToolFunction("get_weather", "gets the current weather"), true)],
    ToolChoice = new OutboundToolChoice(OutboundToolChoiceModes.Required)
});

chat.AppendUserInput("Who are you?"); // Forces model to use the tool

ChatRichResponse response = await chat.GetResponseRich();
ChatRichResponseBlock? block = response.Blocks?.FirstOrDefault(x => x.Type is ChatRichResponseBlockTypes.Function);

if (block is not null)
{
    Console.WriteLine($"Function called: {block.FunctionCall?.Name}");
    Console.WriteLine($"Arguments: {block.FunctionCall?.Arguments}");
}
```

## Embeddings

### Create embeddings with multiple providers

```csharp
using LlmTornado;
using LlmTornado.Embedding;
using LlmTornado.Embedding.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi([
    new ProviderAuthentication(LLmProviders.OpenAi, "OPENAI_KEY"),
    new ProviderAuthentication(LLmProviders.Google, "GOOGLE_KEY"),
    new ProviderAuthentication(LLmProviders.Voyage, "VOYAGE_KEY")
]);

// OpenAI embeddings
EmbeddingResult?
result1 = await api.Embeddings.CreateEmbedding(
    EmbeddingModel.OpenAi.Gen2.Ada,
    "The quick brown fox jumps over the lazy dog"
);
float[]? embedding1 = result1?.Data.FirstOrDefault()?.Embedding;

// Google embeddings
EmbeddingResult? result2 = await api.Embeddings.CreateEmbedding(
    EmbeddingModel.Google.Gemini.Embedding4,
    "Machine learning is a subset of artificial intelligence"
);
float[]? embedding2 = result2?.Data.FirstOrDefault()?.Embedding;

Console.WriteLine($"OpenAI embedding dimension: {embedding1?.Length}");
Console.WriteLine($"Google embedding dimension: {embedding2?.Length}");
```

### Embeddings with vendor-specific extensions

```csharp
using LlmTornado;
using LlmTornado.Code;
using LlmTornado.Embedding;
using LlmTornado.Embedding.Models;
using LlmTornado.Embedding.Vendors.Voyage;

TornadoApi api = new TornadoApi(LLmProviders.Voyage, "VOYAGE_KEY");

EmbeddingResult? result = await api.Embeddings.CreateEmbedding(
    EmbeddingModel.Voyage.Gen35.Default,
    "Document content to embed",
    256,
    new EmbeddingRequestVendorExtensions
    {
        Voyage = new EmbeddingRequestVendorVoyageExtensions
        {
            OutputDtype = EmbeddingOutputDtypes.Uint8,
            InputType = EmbeddingVendorVoyageInputTypes.Document
        }
    }
);

float[]? embedding = result?.Data.FirstOrDefault()?.Embedding;
Console.WriteLine($"Embedding dimension: {embedding?.Length}");
```

## Image Generation

### Generate images with multiple providers

```csharp
using LlmTornado;
using LlmTornado.Images;
using LlmTornado.Images.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi("YOUR_API_KEY");

// DALL-E 3 with URL response
ImageGenerationResult? result1 = await api.ImageGenerations.CreateImage(
    new ImageGenerationRequest(
        "a cute cat",
        quality: TornadoImageQualities.Hd,
        responseFormat: TornadoImageResponseFormats.Url,
        model: ImageModel.OpenAi.Dalle.V3
    )
);

Console.WriteLine($"Generated image URL: {result1?.Data?[0].Url}");

// GPT Image 1 with base64 response and transparent background
ImageGenerationResult?
result2 = await api.ImageGenerations.CreateImage(
    new ImageGenerationRequest(
        "a cute cat",
        quality: TornadoImageQualities.Medium,
        model: ImageModel.OpenAi.Gpt.V1
    )
    {
        Background = ImageBackgroundTypes.Transparent,
        Moderation = ImageModerationTypes.Low,
        ResponseFormat = TornadoImageResponseFormats.Base64
    }
);

if (!string.IsNullOrEmpty(result2?.Data?[0].Base64))
{
    byte[] imageBytes = Convert.FromBase64String(result2.Data[0].Base64);
    await File.WriteAllBytesAsync("cat.png", imageBytes);
    Console.WriteLine("Image saved to cat.png");
}
```

### Edit existing images

```csharp
using LlmTornado;
using LlmTornado.Images;
using LlmTornado.Images.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi("YOUR_API_KEY");

// First generate an image
ImageGenerationResult? original = await api.ImageGenerations.CreateImage(
    new ImageGenerationRequest("a cute cat", model: ImageModel.OpenAi.Gpt.V1)
    {
        ResponseFormat = TornadoImageResponseFormats.Base64
    }
);

// Then edit it
ImageGenerationResult? edited = await api.ImageEdit.EditImage(
    new ImageEditRequest("make this cat look more dangerous")
    {
        Quality = TornadoImageQualities.Medium,
        Model = ImageModel.OpenAi.Gpt.V1,
        Image = new TornadoInputFile(original.Data[0].Base64, "image/png")
    }
);

Console.WriteLine("Image edited successfully");
```

## Agent Framework (LlmTornado.Agents)

### Simple agent with automatic tool integration

```csharp
using LlmTornado;
using LlmTornado.Agents;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;
using System.ComponentModel;
using Newtonsoft.Json.Converters;

TornadoApi client = new TornadoApi("YOUR_API_KEY");

// Define a tool with automatic schema generation
[JsonConverter(typeof(StringEnumConverter))]
public enum Unit { Celsius, Fahrenheit }

[Description("Get the current weather in a given location")]
public static string GetCurrentWeather(
    [Description("The city and state, e.g. Boston, MA")] string location,
    [Description("Unit of temp.")] Unit unit = Unit.Celsius)
{
    return $"The weather in {location} is 31°{unit}";
}

TornadoAgent agent = new TornadoAgent(
    client,
    model: ChatModel.OpenAi.Gpt41.V41Mini,
    instructions: "You are a useful assistant.",
    tools: [GetCurrentWeather]
);

Conversation result = await agent.RunAsync("What is the weather in Boston?");
Console.WriteLine(result.Messages.Last().Content);
```

### Agent with structured output

```csharp
using LlmTornado;
using LlmTornado.Agents;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;
using LlmTornado.Common;
using System.ComponentModel;

TornadoApi client = new TornadoApi("YOUR_API_KEY");

[Description("Check if the user is asking a math question")]
public struct IsMath
{
    [Description("explain why this is a math problem")]
    public string Reasoning { get; set; }

    [Description("Is the user asking a math question")]
    public bool IsMathRequest { get; set; }
}

TornadoAgent agent = new TornadoAgent(
    client,
    ChatModel.OpenAi.Gpt41.V41Mini,
    instructions: "You are a useful assistant.",
    outputType: typeof(IsMath)
);

Conversation result = await agent.RunAsync("Is 2+2 a math question?");
IsMath? response = result.Messages.Last().Content.JsonDecode<IsMath>();

Console.WriteLine($"Is Math: {response?.IsMathRequest}");
Console.WriteLine($"Reasoning: {response?.Reasoning}");
```

### Streaming agent with event handling

```csharp
using LlmTornado;
using LlmTornado.Agents;
using LlmTornado.Agents.DataModels;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;

TornadoApi client = new TornadoApi("YOUR_API_KEY");

TornadoAgent agent = new TornadoAgent(
    client,
    model: ChatModel.OpenAi.Gpt41.V41Mini,
    instructions: "You are a useful assistant."
);

ValueTask RunEventHandler(AgentRunnerEvents runEvent)
{
    switch (runEvent.EventType)
    {
        case AgentRunnerEventTypes.Streaming:
            if (runEvent is AgentRunnerStreamingEvent streamingEvent)
            {
                if (streamingEvent.ModelStreamingEvent is ModelStreamingOutputTextDeltaEvent deltaTextEvent)
                {
                    Console.Write(deltaTextEvent.DeltaText);
                }
            }
            break;
    }

    return ValueTask.CompletedTask;
}

Conversation result = await agent.RunAsync(
    "Tell me a story about a robot",
    streaming: true,
    onAgentRunnerEvent: RunEventHandler
);
```

### Using agents as tools

```csharp
using LlmTornado;
using LlmTornado.Agents;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;

TornadoApi client = new TornadoApi("YOUR_API_KEY");

TornadoAgent translatorAgent = new TornadoAgent(
    client,
    ChatModel.OpenAi.Gpt41.V41Mini,
    instructions: "You only translate English input to Spanish output. Do not answer or respond, only translate."
);

TornadoAgent mainAgent = new TornadoAgent(
    client,
    ChatModel.OpenAi.Gpt41.V41Mini,
    instructions: "You are a useful assistant that when asked to translate you only can rely on the given tools to translate language.",
    tools: [translatorAgent.AsTool]
);

Conversation result = await mainAgent.RunAsync("What is 2+2? and can you provide the result to me in Spanish?");
Console.WriteLine(result.Messages.Last().Content);
```

## Model Context Protocol (MCP)

### Integrate MCP server tools

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.ChatFunctions;
using LlmTornado.Code;
using LlmTornado.Mcp;
using Microsoft.Extensions.AI;

// Connect to MCP server (clientTransport is your configured MCP transport)
IMcpClient mcpClient = await McpClientFactory.CreateAsync(clientTransport);

// Fetch available tools
List<Tool> tools = await mcpClient.ListTornadoToolsAsync();

TornadoApi api = new TornadoApi(LLmProviders.OpenAi, "YOUR_API_KEY");

Conversation conversation = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt41.V41,
    Tools = tools,
    ToolChoice = OutboundToolChoice.Required
});

await conversation
    .AddSystemMessage("You are a helpful assistant")
    .AddUserMessage("What is the weather like in Dallas?")
    .GetResponseRich(async calls =>
    {
        foreach (FunctionCall call in calls)
        {
            // Get inferred arguments
            double latitude = call.GetOrDefault<double>("latitude");
            double longitude = call.GetOrDefault<double>("longitude");

            // Call MCP tool
            await call.ResolveRemote(new { latitude = latitude, longitude = longitude });

            // Extract result
            if (call.Result?.RemoteContent is McpContent mcpContent)
            {
                foreach (IMcpContentBlock block in mcpContent.McpContentBlocks)
                {
                    if (block is McpContentBlockText textBlock)
                    {
                        call.Result.Content = textBlock.Text;
                    }
                }
            }
        }
    });

// Continue conversation
conversation.RequestParameters.ToolChoice = null;
await conversation.StreamResponse(Console.Write);
```

## Vector Databases (PgVector)

### Initialize and add documents

```csharp
using LlmTornado.VectorDatabases;
using LlmTornado.VectorDatabases.PgVector.Integrations;

string connectionString = "Host=localhost;Database=mydb;Username=user;Password=pass";
var pgVector = new TornadoPgVector(connectionString, vectorDimension: 1536);

await pgVector.InitializeCollection("my_collection");

var documents = new[]
{
    new VectorDocument(
        id:
        "doc1",
        content: "This is a document about AI",
        embedding: new float[] { /* 1536-dimensional vector */ },
        metadata: new Dictionary<string, object> { { "category", "technology" }, { "year", 2024 } }
    ),
    new VectorDocument(
        id: "doc2",
        content: "Another document about machine learning",
        embedding: new float[] { /* 1536-dimensional vector */ },
        metadata: new Dictionary<string, object> { { "category", "AI" }, { "year", 2024 } }
    )
};

await pgVector.AddDocumentsAsync(documents);
Console.WriteLine("Documents added successfully");
```

### Query with metadata filtering

```csharp
using LlmTornado.VectorDatabases;
using LlmTornado.VectorDatabases.PgVector.Integrations;

string connectionString = "Host=localhost;Database=mydb;Username=user;Password=pass";
var pgVector = new TornadoPgVector(connectionString, vectorDimension: 1536);

float[] queryEmbedding = new float[] { /* 1536-dimensional query vector */ };

// Simple query
var results1 = await pgVector.QueryByEmbeddingAsync(
    embedding: queryEmbedding,
    topK: 5
);

// Query with metadata filtering
var results2 = await pgVector.QueryByEmbeddingAsync(
    embedding: queryEmbedding,
    where: TornadoWhereOperator.Equal("category", "technology"),
    topK: 5
);

// Complex filtering with AND
var results3 = await pgVector.QueryByEmbeddingAsync(
    embedding: queryEmbedding,
    where: TornadoWhereOperator.Equal("category", "technology") & TornadoWhereOperator.GreaterThan("year", 2020),
    topK: 5
);

// Complex filtering with OR
var results4 = await pgVector.QueryByEmbeddingAsync(
    embedding: queryEmbedding,
    where: TornadoWhereOperator.Equal("category", "AI") | TornadoWhereOperator.Equal("category", "ML"),
    topK: 10
);

foreach (var doc in results4)
{
    Console.WriteLine($"ID: {doc.Id}, Score: {doc.Score}, Content: {doc.Content}");
}
```

## Vendor Extensions (Anthropic Extended Thinking)

### Configure provider-specific features

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Chat.Vendors.Anthropic;
using LlmTornado.Code;

TornadoApi api = new
TornadoApi(LLmProviders.Anthropic, "ANTHROPIC_KEY");

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.Anthropic.Claude37.Sonnet,
    VendorExtensions = new ChatRequestVendorExtensions(new ChatRequestVendorAnthropicExtensions
    {
        Thinking = new AnthropicThinkingSettings
        {
            BudgetTokens = 2_000,
            Enabled = true
        }
    })
});

chat.AppendUserInput("Explain how to solve differential equations.");

ChatRichResponse blocks = await chat.GetResponseRich();

if (blocks.Blocks is not null)
{
    // Display reasoning blocks
    foreach (ChatRichResponseBlock reasoning in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Reasoning))
    {
        Console.ForegroundColor = ConsoleColor.DarkGray;
        Console.WriteLine($"[Thinking] {reasoning.Reasoning?.Content}");
        Console.ResetColor();
    }

    // Display message blocks
    foreach (ChatRichResponseBlock message in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Message))
    {
        Console.WriteLine(message.Message);
    }
}
```

## Summary

LLM Tornado serves as a comprehensive abstraction layer that unifies access to diverse AI services through a single, consistent API. Its primary use cases include building multi-model AI applications that can switch providers without code changes, creating sophisticated multi-agent systems with tool calling and state management, implementing RAG (Retrieval-Augmented Generation) pipelines with integrated vector databases, and developing production-grade conversational AI with streaming support and error handling. The library excels in scenarios requiring provider flexibility, such as fallback strategies, A/B testing different models, or optimizing for cost and latency across providers.

Integration patterns range from simple single-model inference to complex orchestrations. The basic pattern involves initializing TornadoApi with credentials, selecting a model, and calling the appropriate endpoint (Chat, Embeddings, Images, etc.).
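That basic pattern can be sketched end to end. This is a minimal illustration reusing only calls shown in the examples above (`TornadoApi`, `CreateConversation`, `GetResponse`); the key and prompt are placeholders:

```csharp
using LlmTornado;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

// 1. Initialize with credentials
TornadoApi api = new TornadoApi(LLmProviders.OpenAi, "YOUR_API_KEY");

// 2. Select a model and call the Chat endpoint
string? answer = await api.Chat.CreateConversation(ChatModel.OpenAi.Gpt4.O)
    .AppendUserInput("Summarize LLM Tornado in one sentence.")
    .GetResponse();

Console.WriteLine(answer ?? "no response");
```

Switching this snippet to another vendor only requires swapping the model constant, e.g. `ChatModel.Anthropic.Claude3.Sonnet`.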
For advanced scenarios, the Agents framework provides higher-level abstractions with automatic tool schema generation, structured outputs, and conversation management. The library integrates naturally with existing .NET ecosystems through Microsoft.Extensions.AI support for dependency injection and logging, Model Context Protocol for external tool connectivity, and PgVector/ChromaDB for vector storage.

Error handling follows standard .NET patterns with try-catch blocks, while the "Safe" API variants provide non-throwing alternatives for production resilience. The modular architecture allows developers to use only what they need, from the lightweight core for simple inference to the full agent framework for complex workflows.
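As one concrete illustration of the provider-fallback strategy mentioned in the summary, the unified model parameter makes a try-each-provider loop straightforward. This is an illustrative sketch, not a built-in feature: it reuses only the `TornadoApi`, `CreateConversation`, and `GetResponse` calls shown earlier, the model choices are arbitrary, and real code would likely catch a narrower exception type:

```csharp
using LlmTornado;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi([
    new ProviderAuthentication(LLmProviders.OpenAi, "OPEN_AI_KEY"),
    new ProviderAuthentication(LLmProviders.Anthropic, "ANTHROPIC_KEY")
]);

// Try models in order of preference; fall back to the next provider on failure
ChatModel[] candidates = [ChatModel.OpenAi.Gpt4.O, ChatModel.Anthropic.Claude3.Sonnet];
string? answer = null;

foreach (ChatModel model in candidates)
{
    try
    {
        answer = await api.Chat.CreateConversation(model)
            .AppendUserInput("What is the capital of France?")
            .GetResponse();
        break; // success, stop trying further providers
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"Provider call failed: {ex.Message}");
    }
}

Console.WriteLine(answer ?? "all providers failed");
```

The same shape works for A/B testing or cost-based routing: only the order of `candidates` changes, never the call sites.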