AI Samples for .NET
https://github.com/dotnet/ai-samples
A collection of .NET samples demonstrating how to use AI services, language models, and embeddings
# AI Samples for .NET

AI Samples for .NET is the official Microsoft repository of sample applications demonstrating how to integrate AI services into .NET applications. It covers the full spectrum of AI development patterns—from basic chat completion to retrieval-augmented generation (RAG), evaluation, and production-ready Web APIs—using `Microsoft.Extensions.AI`, the Azure OpenAI SDK, the OpenAI SDK, Semantic Kernel, and Ollama.

The core of the repository is built around `Microsoft.Extensions.AI`, a unified abstraction layer that allows developers to swap AI providers (Azure OpenAI, OpenAI, Ollama, Azure AI Inference) without changing application code. Samples are organized by category: quickstarts, chat applications, LLM evaluation, vector search, and conference tutorials from Build 2024. All patterns use modern .NET 9 conventions, including dependency injection, `IAsyncEnumerable` streaming, and the middleware builder pattern.

---

## Basic Chat Completion

`IChatClient.GetResponseAsync` sends a prompt to any supported AI provider and returns a complete response. The same interface works across Azure OpenAI, OpenAI, and Ollama by swapping the underlying client.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;
using OpenAI;

// Azure OpenAI
IChatClient client = new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new DefaultAzureCredential())
    .AsChatClient(modelId: "gpt-4o-mini");

Console.WriteLine(await client.GetResponseAsync("What is AI?"));
// Output: "AI stands for Artificial Intelligence..."

// OpenAI
IChatClient openAIClient =
    new OpenAIClient(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
        .AsChatClient("gpt-4o-mini");

Console.WriteLine(await openAIClient.GetResponseAsync("What is AI?"));

// Ollama (local models)
IChatClient ollamaClient = new OllamaChatClient("http://localhost:11434/", modelId: "llama3.1");

Console.WriteLine(await ollamaClient.GetResponseAsync("What is AI?"));
```

---

## Streaming Responses

`IChatClient.GetStreamingResponseAsync` returns an `IAsyncEnumerable<ChatResponseUpdate>`, so tokens are printed as they arrive—suitable for interactive CLI or web applications.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;

IChatClient client = new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new DefaultAzureCredential())
    .AsChatClient("gpt-4o-mini");

await foreach (var update in client.GetStreamingResponseAsync("Explain the water cycle in detail."))
{
    Console.Write(update.Text);
}
Console.WriteLine();
// Output streams token by token: "The water cycle, also known as..."
```

---

## Conversation History (Multi-Turn Chat)

Maintain stateful conversations by building a `List<ChatMessage>` that accumulates both user and assistant turns, passing it to each call.
```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;

IChatClient chatClient = new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new DefaultAzureCredential())
    .AsChatClient("gpt-4o-mini");

List<ChatMessage> chatHistory =
[
    new(ChatRole.System, "You are a friendly hiking enthusiast who helps people discover fun hikes.")
];

while (true)
{
    Console.Write("You: ");
    var userInput = Console.ReadLine()!;
    chatHistory.Add(new ChatMessage(ChatRole.User, userInput));

    Console.Write("AI: ");
    var response = "";
    await foreach (var item in chatClient.GetStreamingResponseAsync(chatHistory))
    {
        Console.Write(item.Text);
        response += item.Text;
    }
    chatHistory.Add(new ChatMessage(ChatRole.Assistant, response));
    Console.WriteLine();
}
```

---

## Tool Calling / Function Invocation

`AIFunctionFactory.Create` wraps a C# delegate as a tool, and `.UseFunctionInvocation()` in the builder pipeline automatically invokes local functions when the model requests them.

```csharp
using System.ComponentModel;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;

// Define a tool as a local function with a description attribute
[Description("Gets the current weather for a given location")]
string GetCurrentWeather(string location, string unit = "celsius") =>
    $"Weather in {location}: Periods of rain, 15°{char.ToUpper(unit[0])}";

var chatOptions = new ChatOptions
{
    Tools = [AIFunctionFactory.Create(GetCurrentWeather)]
};

IChatClient client = new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new DefaultAzureCredential())
    .AsChatClient("gpt-4o-mini")
    .AsBuilder()
    .UseFunctionInvocation() // middleware: auto-calls tools and re-submits
    .Build();

var response = await client.GetResponseAsync(
    "Do I need an umbrella in Montreal today?", chatOptions);

Console.WriteLine(response);
// Output: "Yes, you should bring an umbrella! Montreal is experiencing periods of rain..."
```

---

## Middleware Pipeline (Builder Pattern)

`.AsBuilder()` composes reusable middleware for function invocation, distributed caching, OpenTelemetry tracing, and logging in a declarative pipeline.

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;
using OpenTelemetry.Trace;

// Set up a distributed cache
IDistributedCache cache = new MemoryDistributedCache(
    Options.Create(new MemoryDistributedCacheOptions()));

// Set up OpenTelemetry
var sourceName = "MyApp.AI";
var tracerProvider = OpenTelemetry.Sdk.CreateTracerProviderBuilder()
    .AddSource(sourceName)
    .AddConsoleExporter()
    .Build();

[Description("Gets the weather")]
string GetWeather() => Random.Shared.NextDouble() > 0.5 ? "It's sunny" : "It's raining";

// SampleChatClient is the custom IChatClient implemented later in this document
IChatClient client = new SampleChatClient(new Uri("http://coolsite.ai"), "my-model")
    .AsBuilder()
    .UseFunctionInvocation()      // 1. resolve tool calls
    .UseOpenTelemetry(            // 2. trace with OTel
        sourceName: sourceName,
        configure: o => o.EnableSensitiveData = true)
    .UseDistributedCache(cache)   // 3. cache identical prompts
    .Build();

var chatOptions = new ChatOptions { Tools = [AIFunctionFactory.Create(GetWeather)] };

Console.WriteLine(await client.GetResponseAsync("Do I need an umbrella?", chatOptions));
```

---

## Dependency Injection Integration

Register `IChatClient` and `IEmbeddingGenerator` through the DI container using `AddChatClient`, and chain middleware directly on the service registration.
```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Register the Azure OpenAI client
builder.Services.AddSingleton(
    new AzureOpenAIClient(
        new Uri(builder.Configuration["AZURE_OPENAI_ENDPOINT"]!),
        new DefaultAzureCredential()));

// Register IChatClient with caching middleware
builder.Services.AddDistributedMemoryCache();
builder.Services.AddChatClient(services =>
        services.GetRequiredService<AzureOpenAIClient>().AsChatClient("gpt-4o-mini"))
    .UseDistributedCache();

// Register IEmbeddingGenerator
builder.Services.AddEmbeddingGenerator(services =>
    services.GetRequiredService<AzureOpenAIClient>()
        .AsEmbeddingGenerator("text-embedding-3-small"));

var app = builder.Build();

var chatClient = app.Services.GetRequiredService<IChatClient>();
Console.WriteLine(await chatClient.GetResponseAsync("What is .NET?"));
```

---

## ASP.NET Core Web API with AI

Expose `IChatClient` and `IEmbeddingGenerator` as HTTP endpoints in a minimal Web API, pulling configuration from `appsettings.json`.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSingleton(
    new AzureOpenAIClient(
        new Uri(builder.Configuration["AI:AzureOpenAI:Endpoint"]!),
        new DefaultAzureCredential()));

builder.Services.AddChatClient(services =>
    services.GetRequiredService<AzureOpenAIClient>()
        .AsChatClient(builder.Configuration["AI:AzureOpenAI:Chat:ModelId"] ?? "gpt-4o-mini"));

builder.Services.AddEmbeddingGenerator(services =>
    services.GetRequiredService<AzureOpenAIClient>()
        .AsEmbeddingGenerator(builder.Configuration["AI:AzureOpenAI:Embedding:ModelId"] ??
            "text-embedding-3-small"));

var app = builder.Build();

// POST /chat  body: "Your question here"
app.MapPost("/chat", async (IChatClient client, [FromBody] string message) =>
    await client.GetResponseAsync(message));

// POST /embedding  body: "Text to embed"
app.MapPost("/embedding", async (IEmbeddingGenerator<string, Embedding<float>> generator,
    [FromBody] string message) => await generator.GenerateEmbeddingAsync(message));

app.Run();

// curl -X POST http://localhost:5000/chat -H "Content-Type: application/json" -d '"What is .NET?"'
// curl -X POST http://localhost:5000/embedding -H "Content-Type: application/json" -d '"Hello world"'
```

---

## Text Embeddings

`IEmbeddingGenerator<string, Embedding<float>>` generates vector embeddings for semantic search, clustering, or RAG pipelines from Azure OpenAI or any other provider.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;

IEmbeddingGenerator<string, Embedding<float>> generator =
    new AzureOpenAIClient(
        new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
        new DefaultAzureCredential())
    .AsEmbeddingGenerator("text-embedding-3-small");

// Single embedding
var singleVector = await generator.GenerateEmbeddingVectorAsync("What is AI?");
Console.WriteLine($"Vector dimensions: {singleVector.Length}");
// Output: Vector dimensions: 1536

// Batch embeddings
var embeddings = await generator.GenerateAsync(["What is AI?", "What is .NET?", "Hello world"]);
foreach (var embedding in embeddings)
{
    Console.WriteLine(string.Join(", ", embedding.Vector.ToArray()[..5]) + "...");
}
// Output: -0.012, 0.034, 0.001, -0.067, 0.019 ...
```

---

## Custom IChatClient Implementation

Implement `IChatClient` to create custom AI backends, mock clients for testing, or proxy clients with custom logic—by extending `DelegatingChatClient` or implementing the interface directly.
```csharp
using System.Runtime.CompilerServices;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Logging;

// Full custom client (e.g., for testing or custom backends)
public class SampleChatClient : IChatClient
{
    private readonly Uri _endpoint;
    private readonly string _modelId;
    private readonly ChatClientMetadata _metadata;

    public SampleChatClient(Uri endpoint, string modelId)
    {
        _endpoint = endpoint;
        _modelId = modelId;
        _metadata = new ChatClientMetadata("SampleChatClient", endpoint, modelId);
    }

    public async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        await Task.Delay(100, cancellationToken); // simulate latency
        return new ChatResponse(new ChatMessage(ChatRole.Assistant, "This is a simulated response."));
    }

    public async IAsyncEnumerable<ChatResponseUpdate> GetStreamingResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        yield return new ChatResponseUpdate(ChatRole.Assistant, "Streaming ");
        await Task.Delay(100, cancellationToken);
        yield return new ChatResponseUpdate(ChatRole.Assistant, "response.");
    }

    public object? GetService(Type serviceType, object? key = null) =>
        serviceType == typeof(ChatClientMetadata) ? _metadata :
        serviceType?.IsInstanceOfType(this) is true ? this :
        null;

    public void Dispose() { }
}

// Delegating client for middleware (logging example)
public class LoggingChatClient(IChatClient innerClient, ILogger<LoggingChatClient> logger)
    : DelegatingChatClient(innerClient)
{
    public override async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        logger.LogInformation("Chat request: {Message}", messages.Last().Text);
        var response = await base.GetResponseAsync(messages, options, cancellationToken);
        logger.LogInformation("Chat response: {Response}", response.Message.Text);
        return response;
    }
}
```

---

## Semantic Kernel: Kernel Setup and Prompts

`Kernel.CreateBuilder()` bootstraps Semantic Kernel with an OpenAI or Azure OpenAI chat completion backend. Use `kernel.InvokePromptAsync` for zero-memory, single-turn completions.

```csharp
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o-mini",
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

// Single-turn prompt
var answer = await kernel.InvokePromptAsync("What is the capital of France?");
Console.WriteLine(answer);
// Output: The capital of France is Paris.

// Interactive loop (stateless)
while (true)
{
    Console.Write("Q: ");
    Console.WriteLine(await kernel.InvokePromptAsync(Console.ReadLine()!));
}
```

---

## Semantic Kernel: Chat History

Use `IChatCompletionService` with a `ChatHistory` list to maintain multi-turn context in Semantic Kernel-based applications.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

var chatService = kernel.GetRequiredService<IChatCompletionService>();

ChatHistory chatHistory = [];
chatHistory.AddSystemMessage("You are a helpful AI assistant specializing in .NET development.");

while (true)
{
    Console.Write("Q: ");
    chatHistory.AddUserMessage(Console.ReadLine()!);

    var response = await chatService.GetChatMessageContentAsync(chatHistory);
    Console.WriteLine($"A: {response}");
    chatHistory.Add(response);
}
```

---

## Semantic Kernel: Plugins and Auto Function Calling

`kernel.ImportPluginFromType<T>()` registers a class as a plugin. `ToolCallBehavior.AutoInvokeKernelFunctions` makes the kernel automatically call matching functions when the model requests them.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

// Register the plugin
kernel.ImportPluginFromType<WeatherPlugin>();

var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var chatService = kernel.GetRequiredService<IChatCompletionService>();
ChatHistory history = [];
history.AddUserMessage("What's the weather like in Seattle?");

var response = await chatService.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(response);
// Output: "The weather in Seattle is currently 15°C with partly cloudy skies."

// Plugin definition
public class WeatherPlugin
{
    [KernelFunction, Description("Gets current weather for a city")]
    public string GetWeather(
        [Description("The city name")] string city) => city switch
    {
        "Seattle" => "15°C, partly cloudy",
        "New York" => "22°C, sunny",
        _ => "18°C, clear"
    };
}
```

---

## Semantic Kernel: Function Invocation Filters

`IFunctionInvocationFilter` intercepts kernel function calls before and after execution—useful for permission checks, auditing, throttling, or result post-processing.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

// Register the filter via DI
builder.Services.AddSingleton<IFunctionInvocationFilter, AuditFilter>();

var kernel = builder.Build();
kernel.ImportPluginFromType<WeatherPlugin>(); // WeatherPlugin from the previous section

// Auto function calling still requires the execution settings shown above
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var result = await kernel.InvokePromptAsync(
    "What is the weather in Paris?", new KernelArguments(settings));
Console.WriteLine(result);
// Console also shows: "[AUDIT] Invoking: GetWeather at 2024-01-15 10:30:00"

public class AuditFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"[AUDIT] Invoking: {context.Function.Name} at {DateTime.UtcNow}");
        await next(context);
        Console.WriteLine($"[AUDIT] Completed: {context.Function.Name}");
    }
}
```

---

## Semantic Kernel in ASP.NET Core Web API

`builder.Services.AddKernel()` registers Semantic Kernel with the .NET DI container so `Kernel` can be injected directly into route handlers or controllers.
```csharp
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddKernel()
    .AddOpenAIChatCompletion("gpt-4o-mini", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

var app = builder.Build();

app.MapGet("/forecast", async (Kernel kernel) =>
{
    int tempC = Random.Shared.Next(-10, 40);
    var description = await kernel.InvokePromptAsync<string>(
        $"Give a one-sentence weather description for {tempC}°C.");
    return new WeatherForecast(DateOnly.FromDateTime(DateTime.Now), tempC, description!);
});

app.Run();

// curl http://localhost:5000/forecast
// {"date":"2024-01-15","temperatureC":18,"summary":"A pleasant mild day perfect for outdoor activities.","temperatureF":64}

record WeatherForecast(DateOnly Date, int TemperatureC, string Summary)
{
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}
```

---

## PDF Ingestion and Chunking for RAG

Use `TextChunker.SplitPlainTextParagraphs` to split PDF content into fixed-size chunks, then embed each chunk with `IEmbeddingGenerator` and persist the results to disk for later vector search.
```csharp
using System.Text.Json;
using Microsoft.Extensions.AI;
using Microsoft.SemanticKernel.Text;
using UglyToad.PdfPig;

public class ManualIngestor(IEmbeddingGenerator<string, Embedding<float>> embeddingGenerator)
{
    public async Task RunAsync(string sourceDir, string outputDir)
    {
        var chunks = new List<ManualChunk>();
        int paragraphIndex = 0;

        foreach (var file in Directory.GetFiles(sourceDir, "*.pdf"))
        {
            using var pdf = PdfDocument.Open(file);
            var docId = Path.GetFileNameWithoutExtension(file);

            foreach (var page in pdf.GetPages())
            {
                // Chunk the page into ~200-token paragraphs
                var paragraphs = TextChunker.SplitPlainTextParagraphs([page.Text], 200);

                // Batch-embed all paragraphs
                var withEmbeddings = await embeddingGenerator.GenerateAndZipAsync(paragraphs);

                chunks.AddRange(withEmbeddings.Select(p => new ManualChunk
                {
                    ProductId = docId,
                    PageNumber = page.Number,
                    ChunkId = ++paragraphIndex,
                    Text = p.Value,
                    Embedding = p.Embedding.Vector.ToArray()
                }));
            }
        }

        await File.WriteAllTextAsync(
            Path.Combine(outputDir, "manual-chunks.json"),
            JsonSerializer.Serialize(chunks));
        Console.WriteLine($"Ingested {chunks.Count} chunks from {sourceDir}");
    }
}

public record ManualChunk
{
    public string ProductId { get; set; } = "";
    public int PageNumber { get; set; }
    public int ChunkId { get; set; }
    public string Text { get; set; } = "";
    public float[] Embedding { get; set; } = [];
}
```

---

## Vector Store and Semantic Search

`IVectorStore` and `IVectorStoreRecordCollection<TKey, TRecord>` provide provider-agnostic vector search with filtering, used to retrieve relevant document chunks in RAG scenarios.
```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;

public class ProductManualService(
    IEmbeddingGenerator<string, Embedding<float>> embeddingGenerator,
    IVectorStore vectorStore)
{
    private readonly IVectorStoreRecordCollection<int, ManualChunk> _collection =
        vectorStore.GetCollection<int, ManualChunk>("manuals", GetRecordDefinition());

    public async Task<List<ManualChunk>> SearchAsync(string query, string? productId = null, int limit = 5)
    {
        // Embed the user query
        var queryVector = await embeddingGenerator.GenerateEmbeddingVectorAsync(query);

        var searchOptions = new VectorSearchOptions<ManualChunk>
        {
            Top = limit,
            Filter = productId != null
                ? new VectorSearchFilter().EqualTo(nameof(ManualChunk.ProductId), productId)
                : null
        };

        var results = await _collection.VectorizedSearchAsync(queryVector, searchOptions);

        var chunks = new List<ManualChunk>();
        await foreach (var result in results.Results)
            chunks.Add(result.Record);
        return chunks;
    }

    private static VectorStoreRecordDefinition GetRecordDefinition() => new()
    {
        Properties =
        [
            new VectorStoreRecordKeyProperty(nameof(ManualChunk.ChunkId), typeof(int)),
            new VectorStoreRecordDataProperty(nameof(ManualChunk.ProductId), typeof(string)) { IsFilterable = true },
            new VectorStoreRecordDataProperty(nameof(ManualChunk.Text), typeof(string)),
            new VectorStoreRecordVectorProperty(nameof(ManualChunk.Embedding), typeof(ReadOnlyMemory<float>))
            {
                Dimensions = 1536,
                DistanceFunction = DistanceFunction.CosineDistance
            }
        ]
    };
}
```

---

## LLM Quality Evaluation with Microsoft.Extensions.AI.Evaluation

`IEvaluator` implementations from `Microsoft.Extensions.AI.Evaluation.Quality` score LLM responses on coherence, groundedness, relevance, and fluency, using another LLM as a judge.
```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Configure which model acts as the evaluator/judge
// (evaluatorChatClient and productionChatClient are IChatClient instances configured elsewhere)
var chatConfig = new ChatConfiguration(evaluatorChatClient);

// Build messages and get a model response to evaluate
var messages = new List<ChatMessage>
{
    new(ChatRole.System, "You are a helpful assistant."),
    new(ChatRole.User, "What causes the Northern Lights?")
};
var response = await productionChatClient.GetResponseAsync(messages);

// Run the built-in evaluators
IEvaluator evaluator = new CompositeEvaluator(
    new CoherenceEvaluator(),
    new GroundednessEvaluator(),
    new RelevanceEvaluator(),
    new FluencyEvaluator());

EvaluationResult result = await evaluator.EvaluateAsync(messages, response, chatConfig);

// Inspect the metrics
var coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
var relevance = result.Get<NumericMetric>(RelevanceEvaluator.RelevanceMetricName);
Console.WriteLine($"Coherence : {coherence.Value}/5 ({coherence.Interpretation?.Rating})");
Console.WriteLine($"Relevance : {relevance.Value}/5 ({relevance.Interpretation?.Rating})");
// Output:
// Coherence : 4.5/5 (Exceptional)
// Relevance : 4.0/5 (Good)
```

---

## Custom Evaluators

Implement `IEvaluator` to create domain-specific quality metrics beyond the built-in ones, then compose them with `CompositeEvaluator`.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Custom evaluator: count words in the response
public class WordCountEvaluator : IEvaluator
{
    public const string WordCountMetricName = "WordCount";

    public IReadOnlyCollection<string> EvaluationMetricNames => [WordCountMetricName];

    public Task<EvaluationResult> EvaluateAsync(
        IEnumerable<ChatMessage> messages,
        ChatResponse response,
        ChatConfiguration? chatConfig = null,
        IEnumerable<EvaluationContext>? additionalContext = null,
        CancellationToken cancellationToken = default)
    {
        var wordCount = response.Message.Text?
            .Split(' ', StringSplitOptions.RemoveEmptyEntries).Length ?? 0;

        var metric = new NumericMetric(WordCountMetricName, wordCount);
        metric.Interpretation = wordCount switch
        {
            < 10 => new EvaluationMetricInterpretation(
                EvaluationRating.Poor, failed: true, reason: "Response too short"),
            < 50 => new EvaluationMetricInterpretation(
                EvaluationRating.Good, failed: false, reason: "Concise response"),
            < 200 => new EvaluationMetricInterpretation(
                EvaluationRating.Exceptional, failed: false, reason: "Detailed response"),
            _ => new EvaluationMetricInterpretation(
                EvaluationRating.Average, failed: false, reason: "Very long response")
        };

        return Task.FromResult(new EvaluationResult(metric));
    }
}

// Usage with a composite evaluator
IEvaluator combined = new CompositeEvaluator(
    new CoherenceEvaluator(),
    new WordCountEvaluator());

var result = await combined.EvaluateAsync(messages, response, chatConfig);
var wordCount = result.Get<NumericMetric>(WordCountEvaluator.WordCountMetricName);
Console.WriteLine($"Word count: {wordCount.Value} ({wordCount.Interpretation?.Rating})");
```

---

## Evaluation Reporting with Scenario Runs

`ScenarioRun` wraps an evaluation in a named test case and persists both the model response and evaluation results for cross-run comparison and reporting.
```csharp
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;

// Configure storage-backed reporting
var reportingConfig = new ReportingConfiguration(
    chatClient: evaluatorChatClient,
    evaluators: [new CoherenceEvaluator(), new RelevanceEvaluator()],
    chatResponseCache: new DiskBasedResponseCache("./eval-cache"),
    executionName: "nightly-eval-2024-01-15");

// Run a named scenario (responses are cached between runs)
await using ScenarioRun run = await reportingConfig.CreateScenarioRunAsync("NorthernLights.Explanation");

var messages = new List<ChatMessage> { new(ChatRole.User, "What causes the Northern Lights?") };
var response = await run.ChatConfiguration!.ChatClient.GetResponseAsync(messages);

EvaluationResult result = await run.EvaluateAsync(messages, response);

// Retrieve a pass/fail summary
foreach (var (name, metric) in result.Metrics)
{
    Console.WriteLine($"{name}: {metric.Interpretation?.Rating} - {metric.Interpretation?.Reason}");
}
// Output:
// Coherence: Exceptional - Response is highly coherent and well-structured
// Relevance: Good - Response directly addresses the question
```

---

## Safety Evaluation with Azure AI Foundry

`ViolenceEvaluator`, `HateAndUnfairnessEvaluator`, and `ProtectedMaterialEvaluator` use Azure AI Foundry's Content Safety service to detect harmful content in model responses.
```csharp
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Point to your Azure AI Foundry project
var safetyConfig = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    endpoint: new Uri(Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT")!));

var reportingConfig = new ReportingConfiguration(
    chatClient: productionChatClient,
    evaluators:
    [
        ContentSafetyEvaluator.Create<ViolenceEvaluator>(safetyConfig),
        ContentSafetyEvaluator.Create<HateAndUnfairnessEvaluator>(safetyConfig),
        ContentSafetyEvaluator.Create<ProtectedMaterialEvaluator>(safetyConfig),
    ],
    executionName: "safety-eval");

await using var run = await reportingConfig.CreateScenarioRunAsync("SafetyCheck.General");

var messages = new List<ChatMessage> { new(ChatRole.User, "Tell me about historical conflicts.") };
var response = await run.ChatConfiguration!.ChatClient.GetResponseAsync(messages);

var result = await run.EvaluateAsync(messages, response);

var violence = result.Get<StringMetric>(ViolenceEvaluator.ViolenceMetricName);
Console.WriteLine($"Violence rating: {violence.Value}");
// Output: Violence rating: Very Low
```

---

## NuGet Package References

The key packages used across the samples, with the stable or preview version numbers current at the time of writing.
```xml
<!-- Core AI abstractions -->
<PackageReference Include="Microsoft.Extensions.AI" Version="9.3.0-preview.1.25114.11" />
<PackageReference Include="Microsoft.Extensions.AI.OpenAI" Version="9.3.0-preview.1.25114.11" />
<PackageReference Include="Microsoft.Extensions.AI.Evaluation" Version="9.3.0-preview.1.25161.1" />
<PackageReference Include="Microsoft.Extensions.AI.Evaluation.Quality" Version="9.3.0-preview.1.25161.1" />
<PackageReference Include="Microsoft.Extensions.AI.Evaluation.Reporting" Version="9.3.0-preview.1.25161.1" />

<!-- AI providers -->
<PackageReference Include="Azure.AI.OpenAI" Version="2.1.0-beta.2" />
<PackageReference Include="Azure.AI.Inference" Version="1.0.0-beta.3" />
<PackageReference Include="OllamaSharp" Version="4.0.4" />

<!-- Semantic Kernel -->
<PackageReference Include="Microsoft.SemanticKernel" Version="1.30.0" />
<PackageReference Include="Microsoft.SemanticKernel.Connectors.OpenAI" Version="1.30.0" />

<!-- Infrastructure -->
<PackageReference Include="Azure.Identity" Version="1.13.1" />
<PackageReference Include="Microsoft.Extensions.Caching.Memory" Version="9.0.0" />
<PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.9.0" />

<!-- Vector / RAG -->
<PackageReference Include="Microsoft.Extensions.VectorData.Abstractions" Version="9.3.0-preview.1.25114.11" />
<PackageReference Include="UglyToad.PdfPig" Version="0.1.9" />
```

---

AI Samples for .NET serves as a comprehensive reference for production-grade AI application development across three primary use cases: intelligent chatbots and assistants (`IChatClient` with conversation history and tool calling), retrieval-augmented generation (PDF ingestion → chunking → embedding → vector search → grounded responses), and automated quality assurance (evaluating LLM outputs for coherence, groundedness, relevance, and safety using `Microsoft.Extensions.AI.Evaluation`).

Each pattern is provider-agnostic by design, letting teams switch between Azure OpenAI, OpenAI, Ollama, or custom backends by changing a single registration line. The repository's integration patterns follow a consistent layered architecture: AI providers are registered in the DI container via `AddChatClient`/`AddEmbeddingGenerator`, middleware (caching, tracing, function invocation) is composed with the builder pattern, and application logic consumes only the `IChatClient` or `IEmbeddingGenerator` interfaces. This makes the samples directly adaptable to Web APIs, background services, console tools, or Azure Functions: swap the host, keep the AI logic unchanged. The evaluation framework further supports CI/CD integration by caching model responses between runs and generating structured reports, enabling regression testing of AI quality alongside standard unit tests.
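As a closing illustration of the vector-search step in the RAG pipeline described above: under the hood, ranking chunks against a query reduces to cosine similarity between embedding vectors. The following sketch is not from the repository; it uses hard-coded vectors standing in for real `IEmbeddingGenerator` output, and the `CosineDemo` class name is hypothetical.

```csharp
using System;

public static class CosineDemo
{
    // Hypothetical helper (not part of dotnet/ai-samples): scores two embedding
    // vectors, e.g. from IEmbeddingGenerator, by cosine similarity in [-1, 1].
    public static float CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
    {
        float dot = 0f, magA = 0f, magB = 0f;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];    // accumulate dot product
            magA += a[i] * a[i];   // accumulate squared magnitudes
            magB += b[i] * b[i];
        }
        return dot / (MathF.Sqrt(magA) * MathF.Sqrt(magB));
    }

    public static void Main()
    {
        float[] query = [0.1f, 0.8f, 0.3f];
        float[] docA  = [0.1f, 0.7f, 0.4f]; // points in a similar direction
        float[] docB  = [0.9f, 0.0f, 0.1f]; // points elsewhere

        Console.WriteLine($"docA: {CosineSimilarity(query, docA):F3}");
        Console.WriteLine($"docB: {CosineSimilarity(query, docB):F3}");
        // docA scores higher, so it would rank first in a semantic search.
    }
}
```

Vector stores such as the `IVectorStore` collection shown earlier perform this computation (typically with indexing for scale) when `DistanceFunction.CosineDistance` is configured on the vector property.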