Autogen
https://github.com/microsoft/autogen
# AutoGen

AutoGen is a framework for creating multi-agent AI applications that can act autonomously or work alongside humans. Developed by Microsoft, it provides a layered architecture with the Core API for message passing and event-driven agents, the AgentChat API for rapid prototyping with common multi-agent patterns, and the Extensions API for ecosystem integrations with LLM providers like OpenAI, Anthropic, Azure, and Ollama. The framework enables building sophisticated AI agent systems, including single agents with tools, multi-agent teams with various coordination patterns (round-robin, selector-based, swarm), and human-in-the-loop workflows. AutoGen supports streaming responses, structured outputs, memory systems, Model Context Protocol (MCP) integration, and distributed agent runtimes. It is designed for Python 3.10+ and also provides .NET support.

## Installation

Install the core AgentChat package and the OpenAI model client extension.

```bash
pip install -U "autogen-agentchat" "autogen-ext[openai]"
```

## AssistantAgent - Creating a Basic AI Agent

The `AssistantAgent` is the primary agent class for building AI assistants that can use tools, generate responses, and participate in conversations. It wraps a model client and handles the conversation flow automatically.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create a model client (requires OPENAI_API_KEY environment variable)
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create an assistant agent
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful AI assistant. Reply with TERMINATE when done.",
    )

    # Run a task and get the result
    result = await agent.run(task="What is the capital of France?")
    print(result.messages[-1].content)
    # Output: The capital of France is Paris. TERMINATE

    await model_client.close()


asyncio.run(main())
```

## AssistantAgent with Tools - Adding Custom Functions

Agents can use tools (functions) to perform actions. Define Python functions with type hints and docstrings, and the agent will automatically generate schemas for the LLM to call them.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Simulated weather data
    return f"The weather in {city} is sunny, 72°F"


async def get_stock_price(ticker: str) -> str:
    """Get the current stock price for a ticker symbol."""
    # Simulated stock data
    prices = {"AAPL": 178.50, "GOOGL": 141.80, "MSFT": 378.90}
    price = prices.get(ticker.upper(), 100.00)
    return f"{ticker.upper()} is currently trading at ${price}"


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[get_weather, get_stock_price],  # Pass functions directly
        system_message="You are a helpful assistant with access to weather and stock data.",
    )

    # Stream the response to console
    await Console(agent.run_stream(task="What's the weather in Seattle and AAPL stock price?"))
    # Example output:
    # ---------- assistant ----------
    # [FunctionCall(id='call_xxx', arguments='{"city":"Seattle"}', name='get_weather')]
    # ---------- assistant ----------
    # [FunctionExecutionResult(content='The weather in Seattle is sunny, 72°F', ...)]
    # ---------- assistant ----------
    # The weather in Seattle is sunny at 72°F, and AAPL is trading at $178.50.


asyncio.run(main())
```

## OpenAIChatCompletionClient - Model Client Configuration

The `OpenAIChatCompletionClient` provides integration with OpenAI's chat completion API. It supports various models, streaming, parallel tool call configuration, and custom API endpoints.
```python
import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient, AzureOpenAIChatCompletionClient
from autogen_core.models import UserMessage


async def main() -> None:
    # Standard OpenAI client
    openai_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key="sk-...",  # Or set OPENAI_API_KEY env var
    )

    # Azure OpenAI client
    azure_client = AzureOpenAIChatCompletionClient(
        model="gpt-4o",
        azure_endpoint="https://your-resource.openai.azure.com/",
        api_version="2024-02-15-preview",
        # azure_ad_token_provider=get_bearer_token_provider(...),
    )

    # Direct model call
    response = await openai_client.create(
        messages=[UserMessage(content="Hello!", source="user")]
    )
    print(response.content)  # "Hello! How can I assist you today?"
    print(f"Tokens used: {response.usage.prompt_tokens} + {response.usage.completion_tokens}")

    # Streaming response
    async for chunk in openai_client.create_stream(
        messages=[UserMessage(content="Count to 5", source="user")]
    ):
        if isinstance(chunk, str):
            print(chunk, end="", flush=True)

    await openai_client.close()


asyncio.run(main())
```

## RoundRobinGroupChat - Multi-Agent Team with Turn Taking

`RoundRobinGroupChat` creates a team where agents take turns in a fixed order. This is useful for workflows where each agent has a specific role in a pipeline.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination, MaxMessageTermination
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create specialized agents
    writer = AssistantAgent(
        name="writer",
        model_client=model_client,
        system_message="You are a creative writer. Write content based on the task.",
    )
    reviewer = AssistantAgent(
        name="reviewer",
        model_client=model_client,
        system_message="You are an editor. Review the content and provide feedback. Say APPROVE when satisfied.",
    )

    # Create termination conditions (can combine with | operator)
    termination = TextMentionTermination("APPROVE") | MaxMessageTermination(max_messages=6)

    # Create the team
    team = RoundRobinGroupChat(
        participants=[writer, reviewer],
        termination_condition=termination,
    )

    # Run the team with streaming output
    await Console(team.run_stream(task="Write a haiku about coding."))
    # Example output:
    # ---------- writer ----------
    # Silent keystrokes flow
    # Logic blooms in midnight's glow
    # Bugs fade, code will grow
    # ---------- reviewer ----------
    # Beautiful haiku! The imagery captures the programming experience well. APPROVE


asyncio.run(main())
```

## SelectorGroupChat - Dynamic Speaker Selection

`SelectorGroupChat` uses an LLM to dynamically select the next speaker based on the conversation context. This enables more natural multi-agent conversations.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create domain expert agents
    travel_advisor = AssistantAgent(
        name="travel_advisor",
        model_client=model_client,
        description="Helps with overall travel planning and booking.",
        system_message="You coordinate travel plans. Say TERMINATE when booking is complete.",
    )
    hotel_agent = AssistantAgent(
        name="hotel_agent",
        model_client=model_client,
        description="Specializes in hotel recommendations and bookings.",
        system_message="You are a hotel specialist. Recommend hotels based on preferences.",
    )
    flight_agent = AssistantAgent(
        name="flight_agent",
        model_client=model_client,
        description="Handles flight searches and bookings.",
        system_message="You are a flight specialist. Find the best flight options.",
    )

    termination = TextMentionTermination("TERMINATE")

    # Create selector group chat - model decides who speaks next
    team = SelectorGroupChat(
        participants=[travel_advisor, hotel_agent, flight_agent],
        model_client=model_client,
        termination_condition=termination,
        allow_repeated_speaker=False,  # Prevent same agent speaking twice in a row
    )

    await Console(team.run_stream(task="Plan a 3-day trip to Tokyo with flights from NYC."))


asyncio.run(main())
```

## Swarm - Handoff-Based Agent Orchestration

`Swarm` enables agents to explicitly hand off control to other agents using `HandoffMessage`. This is ideal for customer service workflows with escalation paths.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import Swarm
from autogen_agentchat.conditions import HandoffTermination, MaxMessageTermination
from autogen_agentchat.messages import HandoffMessage
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # First-line support agent that can hand off to specialist or user
    support_agent = AssistantAgent(
        name="support",
        model_client=model_client,
        handoffs=["specialist", "user"],  # Can hand off to these targets
        system_message="""You are a support agent. Handle basic questions.
For complex technical issues, hand off to 'specialist'.
If you need more information from the customer, hand off to 'user'.""",
    )

    # Specialist agent for complex issues
    specialist_agent = AssistantAgent(
        name="specialist",
        model_client=model_client,
        handoffs=["support", "user"],
        system_message="You are a technical specialist. Solve complex problems.",
    )

    # Termination when handoff to user is requested
    termination = HandoffTermination(target="user") | MaxMessageTermination(10)

    # Create swarm - first agent is the starting point
    team = Swarm(
        participants=[support_agent, specialist_agent],
        termination_condition=termination,
    )

    # Initial conversation
    result = await Console(team.run_stream(
        task="My database queries are running very slowly."
    ))

    # If terminated due to handoff to user, resume with user input
    if "Handoff to user" in str(result.stop_reason):
        await Console(team.run_stream(
            task=HandoffMessage(
                source="user",
                target="specialist",  # Direct response to specialist
                content="The database has 10 million rows and no indexes on the query columns."
            )
        ))


asyncio.run(main())
```

## FunctionTool - Creating Custom Tools

`FunctionTool` wraps Python functions to create tools with explicit schemas. Use this when you need more control over tool configuration or when using structured output mode.

```python
import asyncio
from typing import Annotated

from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Define a function with detailed type annotations
async def calculate_mortgage(
    principal: Annotated[float, "Loan amount in dollars"],
    annual_rate: Annotated[float, "Annual interest rate as decimal (e.g., 0.05 for 5%)"],
    years: Annotated[int, "Loan term in years"],
) -> str:
    """Calculate monthly mortgage payment."""
    monthly_rate = annual_rate / 12
    num_payments = years * 12
    payment = principal * (monthly_rate * (1 + monthly_rate) ** num_payments) / ((1 + monthly_rate) ** num_payments - 1)
    return f"Monthly payment: ${payment:,.2f} for a ${principal:,.0f} loan at {annual_rate * 100}% over {years} years"


async def main() -> None:
    # Create tool with explicit configuration
    mortgage_tool = FunctionTool(
        func=calculate_mortgage,
        description="Calculate monthly mortgage payment based on loan details",
        name="mortgage_calculator",  # Custom name (optional)
        strict=True,  # Required for structured output mode
    )

    # Verify the tool schema
    print(f"Tool name: {mortgage_tool.name}")
    print(f"Schema: {mortgage_tool.schema}")

    # Use tool directly
    result = await mortgage_tool.run_json(
        {"principal": 400000, "annual_rate": 0.065, "years": 30},
        CancellationToken()
    )
    print(mortgage_tool.return_value_as_string(result))
    # Output: Monthly payment: $2,528.27 for a $400,000 loan at 6.5% over 30 years

    # Use tool with an agent
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="financial_advisor",
        model_client=model_client,
        tools=[mortgage_tool],
    )
    await Console(agent.run_stream(task="What's the monthly payment for a $500k house at 7% for 30 years?"))


asyncio.run(main())
```

## AgentTool - Using Agents as Tools

`AgentTool` allows you to use an agent as a tool for another agent, enabling hierarchical agent architectures where a main agent can delegate tasks to specialist sub-agents.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.tools import AgentTool
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create specialist agents
    math_expert = AssistantAgent(
        name="math_expert",
        model_client=model_client,
        description="A math expert that can solve mathematical problems.",
        system_message="Solve math problems step by step. Be concise.",
    )
    writing_expert = AssistantAgent(
        name="writing_expert",
        model_client=model_client,
        description="A writing expert for creative and technical writing.",
        system_message="Help with writing tasks. Be creative and clear.",
    )

    # Wrap agents as tools
    math_tool = AgentTool(agent=math_expert, return_value_as_last_message=True)
    writing_tool = AgentTool(agent=writing_expert, return_value_as_last_message=True)

    # Create main agent with sub-agents as tools
    # IMPORTANT: Disable parallel tool calls to avoid concurrency issues
    main_client = OpenAIChatCompletionClient(model="gpt-4o", parallel_tool_calls=False)
    orchestrator = AssistantAgent(
        name="orchestrator",
        model_client=main_client,
        tools=[math_tool, writing_tool],
        system_message="You are a helpful assistant. Use expert tools when needed.",
    )

    # The orchestrator will delegate to appropriate experts
    await Console(orchestrator.run_stream(
        task="Calculate the compound interest on $10,000 at 5% for 10 years, then write a short poem about saving money."
    ))


asyncio.run(main())
```

## McpWorkbench - Model Context Protocol Integration

`McpWorkbench` enables integration with MCP servers, allowing agents to use tools provided by external services like file systems, databases, or web browsers.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Configure an MCP server (example: the reference fetch server for web requests)
    # Requires uv to be installed so that uvx can run mcp-server-fetch
    server_params = StdioServerParams(
        command="uvx",
        args=["mcp-server-fetch"],
        read_timeout_seconds=60,
    )

    # Use McpWorkbench as context manager
    async with McpWorkbench(server_params=server_params) as workbench:
        # List available tools from the MCP server
        tools = await workbench.list_tools()
        print(f"Available tools: {[t['name'] for t in tools]}")

        # Create agent with MCP workbench
        agent = AssistantAgent(
            name="web_assistant",
            model_client=model_client,
            workbench=workbench,  # Pass workbench instead of tools
            reflect_on_tool_use=True,
            max_tool_iterations=5,
        )

        await Console(agent.run_stream(
            task="Fetch the content from https://httpbin.org/json and summarize it"
        ))


# For Playwright MCP server (web browsing):
# npx @playwright/mcp@latest
async def playwright_example() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    server_params = StdioServerParams(
        command="npx",
        args=["@playwright/mcp@latest", "--headless"],
    )

    async with McpWorkbench(server_params=server_params) as mcp:
        agent = AssistantAgent(
            name="browser_agent",
            model_client=model_client,
            workbench=mcp,
            max_tool_iterations=10,
        )
        await Console(agent.run_stream(
            task="Go to github.com/microsoft/autogen and tell me how many stars it has"
        ))


asyncio.run(main())
```

## Termination Conditions - Controlling Conversation Flow

AutoGen provides various termination conditions to control when conversations end. They can be combined using `|` (OR) and `&` (AND) operators.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import (
    TextMentionTermination,
    MaxMessageTermination,
    TokenUsageTermination,
    TimeoutTermination,
    HandoffTermination,
    StopMessageTermination,
    SourceMatchTermination,
)
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # TextMentionTermination - stop when specific text appears
    text_term = TextMentionTermination("DONE")

    # MaxMessageTermination - stop after N messages
    max_msg_term = MaxMessageTermination(max_messages=10)

    # TokenUsageTermination - stop when token limit reached
    token_term = TokenUsageTermination(
        max_total_token=4000,
        max_prompt_token=2000,
        max_completion_token=2000,
    )

    # TimeoutTermination - stop after duration
    timeout_term = TimeoutTermination(timeout_seconds=60.0)

    # HandoffTermination - stop when handoff to specific target
    handoff_term = HandoffTermination(target="user")

    # SourceMatchTermination - stop when specific agent responds
    source_term = SourceMatchTermination(sources=["final_reviewer"])

    # Combine conditions with OR (|) - stops when ANY condition is met
    combined_or = text_term | max_msg_term | timeout_term

    # Combine conditions with AND (&) - stops when ALL conditions have been met
    # (less common, but useful for complex scenarios)
    combined_and = handoff_term & source_term

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="Help with tasks. Say DONE when complete.",
    )
    team = RoundRobinGroupChat(
        participants=[agent],
        termination_condition=combined_or,
    )

    result = await team.run(task="Count to 5")
    print(f"Stopped because: {result.stop_reason}")

    # Reset termination conditions for reuse
    await combined_or.reset()


asyncio.run(main())
```

## Console - Streaming Output to Terminal

The `Console` function renders agent messages and team outputs to the terminal with formatting, including support for streaming chunks and inline images in iTerm2.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_client_stream=True,  # Enable streaming
    )
    team = RoundRobinGroupChat(
        participants=[agent],
        termination_condition=MaxMessageTermination(3),
    )

    # Basic console output
    result = await Console(team.run_stream(task="Write a short story about a robot."))
    print(f"\nFinal result type: {type(result)}")

    # Console with statistics
    result = await Console(
        agent.run_stream(task="Explain quantum computing briefly."),
        output_stats=True,  # Show token usage and timing
    )
    # Example output with stats:
    # ---------- Summary ----------
    # Number of inner messages: 0
    # Total prompt tokens: 25
    # Total completion tokens: 150
    # Duration: 2.34 seconds


asyncio.run(main())
```

## Structured Output - Type-Safe Agent Responses

Configure agents to return responses in a specific Pydantic model format, enabling type-safe structured data extraction.
```python
import asyncio
from typing import Literal

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Define the output structure
class SentimentAnalysis(BaseModel):
    text: str
    sentiment: Literal["positive", "negative", "neutral"]
    confidence: float
    key_phrases: list[str]


class MovieReview(BaseModel):
    title: str
    rating: float
    summary: str
    pros: list[str]
    cons: list[str]


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create agent with structured output
    analyst = AssistantAgent(
        name="sentiment_analyst",
        model_client=model_client,
        system_message="Analyze the sentiment of the provided text.",
        output_content_type=SentimentAnalysis,  # Specify output type
        # reflect_on_tool_use is automatically True for structured output
    )

    stream = analyst.run_stream(task="Analyze: 'The new update is amazing! Much faster than before.'")
    result = await Console(stream)

    # Access the structured response
    message = result.messages[-1]
    if hasattr(message, 'content') and isinstance(message.content, SentimentAnalysis):
        analysis = message.content
        print(f"\nSentiment: {analysis.sentiment}")
        print(f"Confidence: {analysis.confidence}")
        print(f"Key phrases: {analysis.key_phrases}")

    # Movie review analyzer
    reviewer = AssistantAgent(
        name="movie_reviewer",
        model_client=model_client,
        output_content_type=MovieReview,
    )
    await Console(reviewer.run_stream(
        task="Review the movie 'Inception' by Christopher Nolan"
    ))


asyncio.run(main())
```

## Memory - Persistent Agent Context

Use memory systems to give agents persistent knowledge across conversations. The `ListMemory` class provides simple in-memory storage.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create memory and add content
    memory = ListMemory()

    # Add facts to memory
    await memory.add(MemoryContent(
        content="User's name is Alice and she is a software engineer.",
        mime_type="text/plain"
    ))
    await memory.add(MemoryContent(
        content="User prefers Python for backend development.",
        mime_type="text/plain"
    ))
    await memory.add(MemoryContent(
        content="User is working on a machine learning project.",
        mime_type="text/plain"
    ))

    # Create agent with memory
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        memory=[memory],  # Pass as list
        system_message="You are a helpful assistant. Use the information you know about the user.",
    )

    # Agent will use memory context in responses
    result = await agent.run(task="What programming language should I use for my current project?")
    print(result.messages[-1].content)
    # Output will reference Python and ML project from memory

    # Memory persists across runs
    result = await agent.run(task="Remind me what I'm working on.")
    print(result.messages[-1].content)


asyncio.run(main())
```

## BufferedChatCompletionContext - Limiting Context Size

Use model contexts to limit the conversation history sent to the model, useful for managing token limits with long conversations.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import (
    BufferedChatCompletionContext,
    TokenLimitedChatCompletionContext,
)
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Buffered context - keeps only last N messages
    buffered_context = BufferedChatCompletionContext(buffer_size=4)  # Keep last 4 messages

    agent_buffered = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_context=buffered_context,
        system_message="You are a helpful assistant.",
    )

    # First few messages
    await agent_buffered.run(task="My name is Bob.")
    await agent_buffered.run(task="I live in Seattle.")
    await agent_buffered.run(task="I work as a data scientist.")
    await agent_buffered.run(task="My favorite color is blue.")
    await agent_buffered.run(task="I have a dog named Max.")

    # With buffer_size=4, earliest messages are forgotten
    result = await agent_buffered.run(task="What's my name?")
    print(result.messages[-1].content)
    # May not remember name if it's outside the buffer

    # Token-limited context - limits history by token count
    token_context = TokenLimitedChatCompletionContext(
        model_client=model_client,  # Used to count tokens for the target model
        token_limit=1000,
    )
    agent_token_limited = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_context=token_context,
    )


asyncio.run(main())
```

## State Management - Saving and Restoring Agent State

Save and restore agent and team states for persistence across sessions or checkpointing.
```python
import asyncio
import json

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
    )
    team = RoundRobinGroupChat(
        participants=[agent],
        termination_condition=MaxMessageTermination(3),
    )

    # Run initial conversation
    await team.run(task="Let's discuss Python programming.")

    # Save team state
    state = await team.save_state()
    state_json = json.dumps(state, indent=2)
    print(f"Saved state: {state_json[:200]}...")

    # Save to file for persistence
    with open("team_state.json", "w") as f:
        json.dump(state, f)

    # Later: Create new team and restore state
    new_agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
    )
    new_team = RoundRobinGroupChat(
        participants=[new_agent],
        termination_condition=MaxMessageTermination(3),
    )

    # Load state from file
    with open("team_state.json", "r") as f:
        loaded_state = json.load(f)
    await new_team.load_state(loaded_state)

    # Continue conversation with restored context
    result = await new_team.run(task="What were we discussing?")
    print(result.messages[-1].content)

    # Reset team for fresh start
    await new_team.reset()


asyncio.run(main())
```

## UserProxyAgent - Human-in-the-Loop

`UserProxyAgent` enables human interaction within agent workflows by prompting for user input during execution.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    assistant = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant. Ask for clarification when needed. Say DONE when complete.",
    )

    # UserProxyAgent prompts for input when it's their turn
    user = UserProxyAgent(name="user")

    team = RoundRobinGroupChat(
        participants=[assistant, user],
        termination_condition=TextMentionTermination("DONE"),
    )

    # Run with console - will prompt for user input
    await Console(team.run_stream(
        task="Help me plan a birthday party. Ask me questions to understand what I need."
    ))
    # Example interaction:
    # ---------- assistant ----------
    # I'd be happy to help plan a birthday party! First, let me ask some questions:
    # 1. Who is the party for and how old will they be turning?
    # 2. How many guests are you expecting?
    #
    # ---------- user ----------
    # (User types: "It's for my daughter turning 7, about 15 kids")
    #
    # ---------- assistant ----------
    # Great! A 7th birthday party for 15 kids. What theme does your daughter like?
    # ...


asyncio.run(main())
```

## Component Serialization - Declarative Configuration

AutoGen components can be serialized to and loaded from configuration files, enabling declarative agent definitions.
```python
import asyncio
import json

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )
    team = RoundRobinGroupChat(
        participants=[agent],
        termination_condition=MaxMessageTermination(5),
    )

    # Export to configuration
    config = team.dump_component()
    config_json = json.dumps(config.model_dump(), indent=2)
    print(f"Team configuration:\n{config_json}")

    # Save configuration to file
    with open("team_config.json", "w") as f:
        json.dump(config.model_dump(), f, indent=2)

    # Load from configuration
    from autogen_core import ComponentModel

    with open("team_config.json", "r") as f:
        loaded_config = json.load(f)
    component_model = ComponentModel(**loaded_config)
    restored_team = RoundRobinGroupChat.load_component(component_model)

    # Use restored team
    result = await restored_team.run(task="Hello!")
    print(result.messages[-1].content)


asyncio.run(main())
```

## Summary

AutoGen provides a comprehensive framework for building multi-agent AI applications. The core patterns include: single agents with `AssistantAgent` for basic LLM interactions and tool use; multi-agent teams with `RoundRobinGroupChat` for sequential workflows, `SelectorGroupChat` for dynamic speaker selection, and `Swarm` for explicit handoff-based orchestration; and human-in-the-loop with `UserProxyAgent` for interactive applications. The framework supports tool integration through Python functions, `FunctionTool`, or MCP servers via `McpWorkbench`.
For production applications, key integration patterns include: streaming responses with `model_client_stream=True` and the `Console` helper; structured outputs with Pydantic models for type-safe data extraction; memory systems for persistent context; model context limits for token management; state serialization for checkpointing; and component configuration for declarative agent definitions. The modular architecture allows mixing and matching these capabilities, with agents usable as tools via `AgentTool` for hierarchical architectures. Teams can be nested for complex workflows, and termination conditions can be combined with `|` and `&` operators for flexible conversation control.