======================== CODE SNIPPETS ========================

TITLE: Install OpenAI Agents SDK (Bash)
DESCRIPTION: Installs the OpenAI Agents SDK using pip. This command fetches and installs the necessary libraries to start building agent-based applications. It also shows an alternative using `uv`.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_2
LANGUAGE: bash
CODE:
```
pip install openai-agents  # or `uv add openai-agents`, etc
```

----------------------------------------

TITLE: Complete Realtime Agent Example
DESCRIPTION: A comprehensive example demonstrating the creation of a realtime agent, runner configuration with advanced audio and turn detection settings, session management, and event handling.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_5
LANGUAGE: python
CODE:
```
import asyncio
from agents.realtime import RealtimeAgent, RealtimeRunner

async def main():
    # Create the agent
    agent = RealtimeAgent(
        name="Assistant",
        instructions="You are a helpful voice assistant. Keep responses brief and conversational.",
    )

    # Set up the runner with configuration
    runner = RealtimeRunner(
        starting_agent=agent,
        config={
            "model_settings": {
                "model_name": "gpt-4o-realtime-preview",
                "voice": "alloy",
                "modalities": ["text", "audio"],
                "input_audio_transcription": {"model": "whisper-1"},
                "turn_detection": {
                    "type": "server_vad",
                    "threshold": 0.5,
                    "prefix_padding_ms": 300,
                    "silence_duration_ms": 200,
                },
            }
        },
    )

    # Start the session
    session = await runner.run()

    async with session:
        print("Session started! The agent will stream audio responses in real-time.")

        # Process events
        async for event in session:
            if event.type == "response.audio_transcript.done":
                print(f"Assistant: {event.transcript}")
            elif event.type == "conversation.item.input_audio_transcription.completed":
                print(f"User: {event.transcript}")
            elif event.type == "error":
                print(f"Error: {event.error}")
                break

if __name__ == "__main__":
    asyncio.run(main())
```

----------------------------------------

TITLE: Start MCP Filesystem Server
DESCRIPTION: Starts the MCP filesystem server locally using npx. The command names the server package; the directory the server should have access to (typically a sample files directory) is passed as an additional argument.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/filesystem_example/README.md#_snippet_1
LANGUAGE: bash
CODE:
```
npx -y "@modelcontextprotocol/server-filesystem"
```

----------------------------------------

TITLE: Start Git MCP Server
DESCRIPTION: Command to start the Git MCP server using uvx. This server exposes Git-related tools that the agent can leverage.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/git_example/README.md#_snippet_1
LANGUAGE: bash
CODE:
```
uvx mcp-server-git
```

----------------------------------------

TITLE: Install OpenAI Agents SDK
DESCRIPTION: Installs the OpenAI Agents Python SDK using pip. Alternatively, `uv` can be used for package management.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_2
LANGUAGE: bash
CODE:
```
pip install openai-agents  # or `uv add openai-agents`, etc
```

----------------------------------------

TITLE: Install Voice Dependencies
DESCRIPTION: Installs the OpenAI Agents SDK with the optional voice dependencies, which are required to enable voice functionality in the SDK.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/voice/quickstart.md#_snippet_0
LANGUAGE: bash
CODE:
```
pip install 'openai-agents[voice]'
```

----------------------------------------

TITLE: Setup Project and Virtual Environment (Bash)
DESCRIPTION: This snippet demonstrates the initial steps to create a new project directory, navigate into it, and set up a Python virtual environment. This is a foundational step for managing project dependencies and ensuring isolation.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_0
LANGUAGE: bash
CODE:
```
mkdir my_project
cd my_project
python -m venv .venv
```

----------------------------------------

TITLE: Create Project and Virtual Environment
DESCRIPTION: Steps to create a new project directory and set up a Python virtual environment using the `venv` module. This is a one-time setup for a new project.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_0
LANGUAGE: bash
CODE:
```
mkdir my_project
cd my_project
python -m venv .venv
```

----------------------------------------

TITLE: Run Hello World Example with OpenAI Agent (Python)
DESCRIPTION: A basic example demonstrating how to create an Agent with specific instructions and run it using the Runner. It shows how to get the final output from the agent's response. Requires the OPENAI_API_KEY environment variable.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/README.md#_snippet_1
LANGUAGE: python
CODE:
```
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```

----------------------------------------

TITLE: Run MCP Prompt Server Example
DESCRIPTION: Command to execute the MCP prompt server example. This starts the local server and runs the main script.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/prompt_server/README.md#_snippet_0
LANGUAGE: shell
CODE:
```
uv run python examples/mcp/prompt_server/main.py
```

----------------------------------------

TITLE: Run MCP SSE Example
DESCRIPTION: Command to execute the MCP SSE example by running the Python script with `uv`. It starts the local SSE server for demonstration purposes.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/sse_example/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
uv run python examples/mcp/sse_example/main.py
```

----------------------------------------

TITLE: Run Research Bot Example
DESCRIPTION: Executes the research bot example using `uv` and Python to initiate a product recommendation query for beginner surfboards. This command starts the agent-based research process.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/research_bot/sample_outputs/product_recs.txt#_snippet_0
LANGUAGE: bash
CODE:
```
$ uv run python -m examples.research_bot.main
```

----------------------------------------

TITLE: Install OpenAI Agents SDK
DESCRIPTION: Installs the OpenAI Agents SDK using pip. This is a prerequisite for using realtime agents and requires Python 3.9 or higher.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_0
LANGUAGE: bash
CODE:
```
pip install openai-agents
```

----------------------------------------

TITLE: Start and Interact with a Realtime Session
DESCRIPTION: Starts a realtime session using the configured runner, sends an initial message, and processes incoming events from the agent, printing transcripts.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_4
LANGUAGE: python
CODE:
```
async def main():
    # Start the realtime session
    session = await runner.run()

    async with session:
        # Send a text message to start the conversation
        await session.send_message("Hello! How are you today?")

        # The agent will stream back audio in real-time (not shown in this example)
        # Listen for events from the session
        async for event in session:
            if event.type == "response.audio_transcript.done":
                print(f"Assistant: {event.transcript}")
            elif event.type == "conversation.item.input_audio_transcription.completed":
                print(f"User: {event.transcript}")

# Run the session
asyncio.run(main())
```

----------------------------------------

TITLE: Run MCP Filesystem Example
DESCRIPTION: Executes the main Python script for the MCP filesystem example using the `uv` command. This command initiates the agent and its interaction with the MCP server.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/filesystem_example/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
uv run python examples/mcp/filesystem_example/main.py
```

----------------------------------------

TITLE: Run Research Bot Example
DESCRIPTION: Execute the multi-agent research bot example using Python. This command initiates the bot's workflow, starting with user input for a research topic.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/research_bot/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
python -m examples.research_bot.main
```

----------------------------------------

TITLE: Install OpenAI Agents SDK (Bash)
DESCRIPTION: Instructions for setting up a Python environment using venv or uv, and installing the OpenAI Agents SDK via pip. It also shows how to install with voice support.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
```

LANGUAGE: bash
CODE:
```
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

LANGUAGE: bash
CODE:
```
pip install openai-agents
```

LANGUAGE: bash
CODE:
```
pip install 'openai-agents[voice]'
```

----------------------------------------

TITLE: Bash: Development Setup and Commands
DESCRIPTION: Provides essential bash commands for developers working with the OpenAI Agents Python SDK. This includes installing dependencies using `uv`, and running common development tasks like tests, type checking, linting, and formatting checks via `make`.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/README.md#_snippet_8
LANGUAGE: bash
CODE:
```
# Ensure uv is installed
uv --version

# Install dependencies
make sync

# Run tests, linter, and typechecker
make check

# Or run them individually:
make tests
make mypy
make lint
make format-check
```

----------------------------------------

TITLE: Run Financial Research Agent Example
DESCRIPTION: Command to execute the financial research agent example from the OpenAI Agents SDK. This initiates the agent's workflow for financial analysis.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/financial_research_agent/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
python -m examples.financial_research_agent.main
```

----------------------------------------

TITLE: Run MCP Git Example
DESCRIPTION: Command to execute the main Python script for the MCP Git example. This initiates the agent and its interaction with the local MCP server.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/git_example/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
uv run python examples/mcp/git_example/main.py
```

----------------------------------------

TITLE: Run Streamable HTTP Example
DESCRIPTION: Command to execute the Streamable HTTP server example using `uv`.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/streamablehttp_example/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
uv run python examples/mcp/streamablehttp_example/main.py
```

----------------------------------------

TITLE: Start Application
DESCRIPTION: Navigates to the application directory and starts the realtime demo server using `uv`. This command executes the `server.py` script, making the application accessible via a web browser.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/realtime/app/README.md#_snippet_1
LANGUAGE: bash
CODE:
```
cd examples/realtime/app && uv run python server.py
```

----------------------------------------

TITLE: Complete Agent Orchestration Workflow with Handoffs and Input Guardrails
DESCRIPTION: This comprehensive example integrates agent definitions, handoffs, and an input guardrail into a full asynchronous workflow. It demonstrates how a `Triage Agent` can route queries to specialist agents while enforcing custom validation rules before processing, showcasing a robust multi-agent system.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_9
LANGUAGE: python
CODE:
```
from agents import Agent, InputGuardrail, GuardrailFunctionOutput, Runner
from pydantic import BaseModel
import asyncio

class HomeworkOutput(BaseModel):
    is_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking about homework.",
    output_type=HomeworkOutput,
)

math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
)

history_tutor_agent = Agent(
    name="History Tutor",
    handoff_description="Specialist agent for historical questions",
    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
)

async def homework_guardrail(ctx, agent, input_data):
    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)
    final_output = result.final_output_as(HomeworkOutput)
    return GuardrailFunctionOutput(
        output_info=final_output,
        tripwire_triggered=not final_output.is_homework,
    )

triage_agent = Agent(
    name="Triage Agent",
    instructions="You determine which agent to use based on the user's homework question",
    handoffs=[history_tutor_agent, math_tutor_agent],
    input_guardrails=[
        InputGuardrail(guardrail_function=homework_guardrail),
    ],
)

async def main():
    result = await Runner.run(triage_agent, "who was the first president of the united states?")
    print(result.final_output)

    result = await Runner.run(triage_agent, "what is life")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

----------------------------------------

TITLE: Run Agent Orchestration
DESCRIPTION: Executes an agent workflow using the `Runner` class. The `run` method takes a starting agent and a user query, returning the final output of the orchestrated agents.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_6
LANGUAGE: python
CODE:
```
from agents import Runner

async def main():
    result = await Runner.run(triage_agent, "What is the capital of France?")
    print(result.final_output)
```

----------------------------------------

TITLE: Create a Realtime Agent Instance
DESCRIPTION: Creates an instance of RealtimeAgent, defining its name and conversational instructions. This agent will power the realtime voice interaction.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_2
LANGUAGE: python
CODE:
```
agent = RealtimeAgent(
    name="Assistant",
    instructions="You are a helpful voice assistant. Keep your responses conversational and friendly.",
)
```

----------------------------------------

TITLE: Hello World: Run a Simple Agent
DESCRIPTION: Demonstrates a basic 'Hello World' example using the OpenAI Agents SDK. It initializes an Agent and runs it with a prompt, printing the final output. Requires the OPENAI_API_KEY environment variable to be set.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/index.md#_snippet_1
LANGUAGE: python
CODE:
```
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Expected Output:
# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```

----------------------------------------

TITLE: Activate Virtual Environment
DESCRIPTION: Command to activate the Python virtual environment in the current terminal session. This must be done each time you start working on the project.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_1
LANGUAGE: bash
CODE:
```
source .venv/bin/activate
```

----------------------------------------

TITLE: Create First Agent
DESCRIPTION: Defines a basic agent using the `Agent` class from the `agents` library. Agents are initialized with a name and instructions.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_4
LANGUAGE: python
CODE:
```
from agents import Agent

agent = Agent(
    name="Math Tutor",
    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
)
```

----------------------------------------

TITLE: Activate Virtual Environment (Bash)
DESCRIPTION: This command activates the previously created Python virtual environment. Activation is necessary to ensure that subsequent commands like package installations use the isolated environment.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_1
LANGUAGE: bash
CODE:
```
source .venv/bin/activate
```

----------------------------------------

TITLE: Combine Agents, Handoffs, and Guardrails (Python)
DESCRIPTION: This comprehensive example integrates multiple agents, defines handoff routing, and applies an input guardrail to a workflow. It demonstrates how to create a robust system where a triage agent routes requests, potentially blocked by a guardrail, to specialized tutors.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_10
LANGUAGE: python
CODE:
```
from agents import Agent, InputGuardrail, GuardrailFunctionOutput, Runner
from agents.exceptions import InputGuardrailTripwireTriggered
from pydantic import BaseModel
import asyncio

class HomeworkOutput(BaseModel):
    is_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking about homework.",
    output_type=HomeworkOutput,
)

math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
)

history_tutor_agent = Agent(
    name="History Tutor",
    handoff_description="Specialist agent for historical questions",
    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
)

async def homework_guardrail(ctx, agent, input_data):
    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)
    final_output = result.final_output_as(HomeworkOutput)
    return GuardrailFunctionOutput(
        output_info=final_output,
        tripwire_triggered=not final_output.is_homework,
    )

triage_agent = Agent(
    name="Triage Agent",
    instructions="You determine which agent to use based on the user's homework question",
    handoffs=[history_tutor_agent, math_tutor_agent],
    input_guardrails=[
        InputGuardrail(guardrail_function=homework_guardrail),
    ],
)

async def main():
    # Example 1: History question
    try:
        result = await Runner.run(triage_agent, "who was the first president of the united states?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered as e:
        print("Guardrail blocked this input:", e)

    # Example 2: General/philosophical question
    try:
        result = await Runner.run(triage_agent, "What is the meaning of life?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered as e:
        print("Guardrail blocked this input:", e)

if __name__ == "__main__":
    asyncio.run(main())
```

----------------------------------------

TITLE: Run Agent Workflow with Guardrails and Handoffs (Python)
DESCRIPTION: This Python script demonstrates a complete agent workflow. It defines a guardrail agent to check for homework, specialist agents for math and history, and a triage agent that uses the guardrail and handoffs to route user queries. The example includes running the workflow with different inputs and handling potential guardrail triggers.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_8
LANGUAGE: python
CODE:
```
from agents import Agent, InputGuardrail, GuardrailFunctionOutput, Runner
from agents.exceptions import InputGuardrailTripwireTriggered
from pydantic import BaseModel
import asyncio

class HomeworkOutput(BaseModel):
    is_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking about homework.",
    output_type=HomeworkOutput,
)

math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
)

history_tutor_agent = Agent(
    name="History Tutor",
    handoff_description="Specialist agent for historical questions",
    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
)

async def homework_guardrail(ctx, agent, input_data):
    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)
    final_output = result.final_output_as(HomeworkOutput)
    return GuardrailFunctionOutput(
        output_info=final_output,
        tripwire_triggered=not final_output.is_homework,
    )

triage_agent = Agent(
    name="Triage Agent",
    instructions="You determine which agent to use based on the user's homework question",
    handoffs=[history_tutor_agent, math_tutor_agent],
    input_guardrails=[
        InputGuardrail(guardrail_function=homework_guardrail),
    ],
)

async def main():
    # Example 1: History question
    try:
        result = await Runner.run(triage_agent, "who was the first president of the united states?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered as e:
        print("Guardrail blocked this input:", e)

    # Example 2: General/philosophical question
    try:
        result = await Runner.run(triage_agent, "What is the meaning of life?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered as e:
        print("Guardrail blocked this input:", e)

if __name__ == "__main__":
    asyncio.run(main())
```

----------------------------------------

TITLE: Install OpenAI Agents SDK
DESCRIPTION: Installs the OpenAI Agents SDK using pip. Ensure you have Python and pip installed.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/index.md#_snippet_0
LANGUAGE: bash
CODE:
```
pip install openai-agents
```

----------------------------------------

TITLE: Starter Prompt for Writer Agent
DESCRIPTION: Example prompt provided to the senior writer agent. It defines the agent's role, responsibilities, and access to tools for synthesizing financial data into reports.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/financial_research_agent/README.md#_snippet_1
LANGUAGE: text
CODE:
```
You are a senior financial analyst. You will be provided with the original query
and a set of raw search summaries. Your job is to synthesize these into a
long‑form markdown report (at least several paragraphs) with a short executive
summary. You also have access to tools like `fundamentals_analysis` and
`risk_analysis` to get short specialist write‑ups if you want to incorporate them.
Add a few follow‑up questions for further research.
```

----------------------------------------

TITLE: Install Dependencies
DESCRIPTION: Installs the required Python dependencies for the realtime demo application using the `uv` package manager. These include FastAPI for the web framework, uvicorn for the ASGI server, and websockets for real-time communication.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/realtime/app/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
uv add fastapi uvicorn websockets
```

----------------------------------------

TITLE: Initialize Voice Pipeline
DESCRIPTION: Configures and initializes a `VoicePipeline` using `SingleAgentVoiceWorkflow`. This sets up the pipeline to process audio through a single agent.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/voice/quickstart.md#_snippet_2
LANGUAGE: python
CODE:
```
from agents.voice import SingleAgentVoiceWorkflow, VoicePipeline

pipeline = VoicePipeline(workflow=SingleAgentVoiceWorkflow(agent))
```

----------------------------------------

TITLE: Set Up Realtime Runner Configuration
DESCRIPTION: Configures the RealtimeRunner with the agent and model settings, including the specific model name, desired voice, and enabled modalities (text and audio).
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_3
LANGUAGE: python
CODE:
```
runner = RealtimeRunner(
    starting_agent=agent,
    config={
        "model_settings": {
            "model_name": "gpt-4o-realtime-preview",
            "voice": "alloy",
            "modalities": ["text", "audio"],
        }
    }
)
```

----------------------------------------

TITLE: Start Server
DESCRIPTION: Starts the Python server application using `uv`. This command runs the backend that handles Twilio requests and OpenAI API communication.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/realtime/twilio/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
uv run server.py
```

----------------------------------------

TITLE: Install openai-agents with LiteLLM support
DESCRIPTION: Installs the `openai-agents` Python package with the optional `litellm` dependency group, enabling LiteLLM integration. This command ensures all necessary libraries for LiteLLM functionality are included.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/models/litellm.md#_snippet_0
LANGUAGE: bash
CODE:
```
pip install "openai-agents[litellm]"
```

----------------------------------------

TITLE: Set OpenAI API Key
DESCRIPTION: Demonstrates how to set the OpenAI API key, either by exporting it as an environment variable or passing it directly during session creation.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_6
LANGUAGE: bash
CODE:
```
export OPENAI_API_KEY="your-api-key-here"
```

LANGUAGE: python
CODE:
```
session = await runner.run(model_config={"api_key": "your-api-key"})
```

----------------------------------------

TITLE: Run Streamed Voice Demo
DESCRIPTION: Executes the streamed voice demonstration script from the command line to start the interactive voice demo.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/voice/streamed/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
python -m examples.voice.streamed.main
```

----------------------------------------

TITLE: Install Agent Visualization Dependency
DESCRIPTION: Installs the optional `viz` dependency group for the openai-agents library, which is required to enable the agent visualization features.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/visualization.md#_snippet_0
LANGUAGE: bash
CODE:
```
pip install "openai-agents[viz]"
```

----------------------------------------

TITLE: Define a Guardrail Agent (Python)
DESCRIPTION: Creates an agent specifically designed to act as a guardrail, checking user input against predefined criteria. This example defines a 'Guardrail check' agent that determines if a query is homework-related, specifying a Pydantic model for its output.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_8 LANGUAGE: python CODE: ``` from agents import GuardrailFunctionOutput, Agent, Runner from pydantic import BaseModel class HomeworkOutput(BaseModel): is_homework: bool reasoning: str guardrail_agent = Agent( name="Guardrail check", instructions="Check if the user is asking about homework.", output_type=HomeworkOutput, ) ``` ---------------------------------------- TITLE: Run Voice Pipeline with Audio DESCRIPTION: Executes the voice pipeline with provided audio input and streams the results. It uses `numpy` for audio buffer creation and `sounddevice` for playing the synthesized audio output. SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/voice/quickstart.md#_snippet_3 LANGUAGE: python CODE: ``` import numpy as np import sounddevice as sd from agents.voice import AudioInput # For simplicity, we'll just create 3 seconds of silence # In reality, you'd get microphone data buffer = np.zeros(24000 * 3, dtype=np.int16) audio_input = AudioInput(buffer=buffer) result = await pipeline.run(audio_input) # Create an audio player using `sounddevice` player = sd.OutputStream(samplerate=24000, channels=1, dtype=np.int16) player.start() # Play the audio stream as it comes in async for event in result.stream(): if event.type == "voice_stream_event_audio": player.write(event.data) ``` ---------------------------------------- TITLE: Create a Single Agent (Python) DESCRIPTION: Defines a basic agent using the `Agent` class from the `agents` library. This agent is configured with a name and specific instructions for its behavior, such as providing help with math problems. SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_4 LANGUAGE: python CODE: ``` from agents import Agent agent = Agent( name="Math Tutor", instructions="You provide help with math problems. 
Explain your reasoning at each step and include examples", ) ``` ---------------------------------------- TITLE: Define Agents and Tools DESCRIPTION: Sets up custom agents, including a Spanish-speaking agent and a main assistant agent. It demonstrates defining a function tool (`get_weather`) and configuring agents with instructions, models, handoffs, and tools. SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/voice/quickstart.md#_snippet_1 LANGUAGE: python CODE: ``` import asyncio import random from agents import ( Agent, function_tool, ) from agents.extensions.handoff_prompt import prompt_with_handoff_instructions @function_tool def get_weather(city: str) -> str: """Get the weather for a given city.""" print(f"[debug] get_weather called with city: {city}") choices = ["sunny", "cloudy", "rainy", "snowy"] return f"The weather in {city} is {random.choice(choices)}." spanish_agent = Agent( name="Spanish", handoff_description="A spanish speaking agent.", instructions=prompt_with_handoff_instructions( "You're speaking to a human, so be polite and concise. Speak in Spanish.", ), model="gpt-4o-mini", ) agent = Agent( name="Assistant", instructions=prompt_with_handoff_instructions( "You're speaking to a human, so be polite and concise. If the user speaks in Spanish, handoff to the spanish agent.", ), model="gpt-4o-mini", handoffs=[spanish_agent], tools=[get_weather], ) ``` ---------------------------------------- TITLE: MCP Prompt Server API: Prompt Discovery DESCRIPTION: API documentation for discovering available prompts on the MCP server. This method lists all callable prompts. SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/prompt_server/README.md#_snippet_2 LANGUAGE: APIDOC CODE: ``` MCPServerStreamableHttp: show_available_prompts() -> list[str] Description: Lists all available prompts that can be invoked on the MCP server. 
Parameters: None Returns: A list of strings, where each string is the name of an available prompt. Example: client.show_available_prompts() ``` ---------------------------------------- TITLE: MCP Agent Interaction with Git Server DESCRIPTION: Illustrates the core interaction pattern between an agent and an MCP server. The agent adds the server instance, lists available tools, and runs them when chosen by the LLM. SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/git_example/README.md#_snippet_2 LANGUAGE: python CODE: ``` from agents.mcp import MCPServerStdio # ... server setup and agent initialization ... # Add the server instance to the Agent mcp_agents.add(server_instance) # Fetch and cache tools from the MCP server tools = server.list_tools() # When LLM chooses an MCP tool, run it via the server result = server.run_tool(tool_name, tool_args) ``` ---------------------------------------- TITLE: Example: Use LitellmModel with OpenAI Agents SDK DESCRIPTION: Demonstrates how to initialize an `Agent` using `LitellmModel` with a specified model name and API key, and run a conversation. It includes a sample `get_weather` tool and prompts the user for model and API key if not provided as arguments. SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/models/litellm.md#_snippet_1 LANGUAGE: python CODE: ``` from __future__ import annotations import asyncio from agents import Agent, Runner, function_tool, set_tracing_disabled from agents.extensions.models.litellm_model import LitellmModel @function_tool def get_weather(city: str): print(f"[debug] getting weather for {city}") return f"The weather in {city} is sunny." 
async def main(model: str, api_key: str):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model=model, api_key=api_key),
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    # First try to get model/api key from args
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=False)
    parser.add_argument("--api-key", type=str, required=False)
    args = parser.parse_args()

    model = args.model
    if not model:
        model = input("Enter a model name for Litellm: ")

    api_key = args.api_key
    if not api_key:
        api_key = input("Enter an API key for Litellm: ")

    asyncio.run(main(model, api_key))
```

----------------------------------------

TITLE: Complete Example: Persistent Agent Conversations with SQLiteSession
DESCRIPTION: A comprehensive example demonstrating the use of `Agent`, `Runner`, and `SQLiteSession` to create persistent conversations. The agent remembers context across multiple turns by utilizing a file-based SQLite session.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/sessions.md#_snippet_8

LANGUAGE: python
CODE:
```
import asyncio

from agents import Agent, Runner, SQLiteSession


async def main():
    # Create an agent
    agent = Agent(
        name="Assistant",
        instructions="Reply very concisely.",
    )

    # Create a session instance that will persist across runs
    session = SQLiteSession("conversation_123", "conversation_history.db")

    print("=== Sessions Example ===")
    print("The agent will remember previous messages automatically.\n")

    # First turn
    print("First turn:")
    print("User: What city is the Golden Gate Bridge in?")
    result = await Runner.run(
        agent,
        "What city is the Golden Gate Bridge in?",
        session=session
    )
    print(f"Assistant: {result.final_output}")
    print()

    # Second turn - the agent will remember the previous conversation
    print("Second turn:")
    print("User: What state is it in?")
    result = await Runner.run(
        agent,
        "What state is it in?",
        session=session
    )
    print(f"Assistant: {result.final_output}")
    print()

    # Third turn - continuing the conversation
    print("Third turn:")
    print("User: What's the population of that state?")
    result = await Runner.run(
        agent,
        "What's the population of that state?",
        session=session
    )
    print(f"Assistant: {result.final_output}")
    print()

    print("=== Conversation Complete ===")
    print("Notice how the agent remembered the context from previous turns!")
    print("Sessions automatically handles conversation history.")


if __name__ == "__main__":
    asyncio.run(main())
```

----------------------------------------

TITLE: Import Realtime Agent Components
DESCRIPTION: Imports necessary classes (RealtimeAgent, RealtimeRunner) and asyncio for asynchronous operations when building realtime voice agents with the OpenAI Agents SDK.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/quickstart.md#_snippet_1

LANGUAGE: python
CODE:
```
import asyncio
from agents.realtime import RealtimeAgent, RealtimeRunner
```

----------------------------------------

TITLE: Agent Interaction with MCP Server
DESCRIPTION: Illustrates the core interaction logic within the Python agent. It shows how the MCP server is added to the agent, how tools are listed from the server, and how tools provided by the MCP server are executed.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/filesystem_example/README.md#_snippet_2

LANGUAGE: python
CODE:
```
from agents import Agent
from agents.mcp import MCPServerStdio

# ... server setup ...

# Add the server instance to the Agent
agent = Agent(
    name="Assistant",
    instructions="Use the filesystem tools to answer questions about the sample files.",
    mcp_servers=[server],
)

# The SDK fetches the list of tools from the MCP server before each run:
#   tools = await server.list_tools()
# If the LLM chooses to use an MCP tool, the SDK runs it on the server:
#   result = await server.call_tool(tool_name, tool_params)
```

----------------------------------------

TITLE: Python Voice Agent with Tool and Language Handoff
DESCRIPTION: Demonstrates creating a voice-enabled agent with the OpenAI Agents Python library. It integrates a `get_weather` tool, handles language switching to a Spanish agent, and streams audio output using `sounddevice`.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/voice/quickstart.md#_snippet_4

LANGUAGE: python
CODE:
```
import asyncio
import random

import numpy as np
import sounddevice as sd

from agents import (
    Agent,
    function_tool,
    set_tracing_disabled,
)
from agents.voice import (
    AudioInput,
    SingleAgentVoiceWorkflow,
    VoicePipeline,
)
from agents.extensions.handoff_prompt import prompt_with_handoff_instructions


@function_tool
def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    print(f"[debug] get_weather called with city: {city}")
    choices = ["sunny", "cloudy", "rainy", "snowy"]
    return f"The weather in {city} is {random.choice(choices)}."
spanish_agent = Agent(
    name="Spanish",
    handoff_description="A spanish speaking agent.",
    instructions=prompt_with_handoff_instructions(
        "You're speaking to a human, so be polite and concise. Speak in Spanish.",
    ),
    model="gpt-4o-mini",
)

agent = Agent(
    name="Assistant",
    instructions=prompt_with_handoff_instructions(
        "You're speaking to a human, so be polite and concise. If the user speaks in Spanish, handoff to the spanish agent.",
    ),
    model="gpt-4o-mini",
    handoffs=[spanish_agent],
    tools=[get_weather],
)


async def main():
    pipeline = VoicePipeline(workflow=SingleAgentVoiceWorkflow(agent))
    buffer = np.zeros(24000 * 3, dtype=np.int16)
    audio_input = AudioInput(buffer=buffer)

    result = await pipeline.run(audio_input)

    # Create an audio player using `sounddevice`
    player = sd.OutputStream(samplerate=24000, channels=1, dtype=np.int16)
    player.start()

    # Play the audio stream as it comes in
    async for event in result.stream():
        if event.type == "voice_stream_event_audio":
            player.write(event.data)


if __name__ == "__main__":
    asyncio.run(main())
```

----------------------------------------

TITLE: Run Research Bot Example
DESCRIPTION: Executes the research bot example using the 'uv' command to research Caribbean vacation spots. It outlines the user's query and the bot's progress and findings.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/research_bot/sample_outputs/vacation.txt#_snippet_0

LANGUAGE: bash
CODE:
```
$ uv run python -m examples.research_bot.main
What would you like to research? Caribbean vacation spots in April, optimizing for surfing, hiking and water sports
View trace: https://platform.openai.com/traces/trace?trace_id=trace_....
Starting research...
✅ Will perform 15 searches
✅ Searching... 15/15 completed
✅ Finishing report...
✅ Report summary
```

----------------------------------------

TITLE: Local Development Workflow Commands
DESCRIPTION: Commands to format, lint, type-check, and run tests locally. These commands leverage the project's Makefile for consistency.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/AGENTS.md#_snippet_0

LANGUAGE: bash
CODE:
```
make format
make lint
make mypy
```

LANGUAGE: bash
CODE:
```
make tests
```

LANGUAGE: bash
CODE:
```
uv run pytest -s -k
```

LANGUAGE: bash
CODE:
```
make build-docs
```

LANGUAGE: bash
CODE:
```
make coverage
```

----------------------------------------

TITLE: Handoff Prompt Members
DESCRIPTION: This section documents members of the agents.extensions.handoff_prompt module. It includes RECOMMENDED_PROMPT_PREFIX for setting up initial prompts and prompt_with_handoff_instructions for guiding agent behavior.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ref/extensions/handoff_prompt.md#_snippet_0

LANGUAGE: python
CODE:
```
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
from agents.extensions.handoff_prompt import prompt_with_handoff_instructions
```

----------------------------------------

TITLE: Run OpenAI Agents Python Tests
DESCRIPTION: Executes all tests for the OpenAI Agents Python project. Ensure `uv` is installed and `make sync` has been run prior to execution.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/tests/README.md#_snippet_0

LANGUAGE: shell
CODE:
```
make tests
```

----------------------------------------

TITLE: MCP Prompt Server API: Generate Code Review Instructions
DESCRIPTION: API documentation for generating code review instructions. This prompt allows specifying the focus of the review and the programming language.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/prompt_server/README.md#_snippet_3

LANGUAGE: APIDOC
CODE:
```
MCPServerStreamableHttp:
  generate_code_review_instructions(focus: str, language: str) -> str
    Description: Generates system instructions tailored for code review based on specified focus and language.
    Parameters:
      focus (str): The specific area of code review (e.g., "security vulnerabilities", "performance issues").
      language (str): The programming language of the code to be reviewed (e.g., "python", "javascript").
    Returns: A string containing the generated system instructions for the agent.
    Example: instructions = client.generate_code_review_instructions(focus="security vulnerabilities", language="python")
```

----------------------------------------

TITLE: MCP Prompt Server Workflow: Code Review
DESCRIPTION: Demonstrates a full workflow using the MCP prompt server. It calls a specific prompt to generate code review instructions, creates an agent with these instructions, and runs the agent against sample vulnerable code.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/mcp/prompt_server/README.md#_snippet_1

LANGUAGE: python
CODE:
```
from agents.mcp import MCPServerStreamableHttp

# Assuming server is running locally at http://localhost:8000/mcp
mcp_client = MCPServerStreamableHttp("http://localhost:8000/mcp")

# 1. Discover available prompts
print("Available prompts:")
for prompt in mcp_client.show_available_prompts():
    print(f"- {prompt}")

# 2. Demo code review prompt workflow
print("\nRunning code review demo...")
review_instructions = mcp_client.generate_code_review_instructions(
    focus="security vulnerabilities",
    language="python"
)
print(f"Generated instructions: {review_instructions}")

# In a real scenario, you would now create an agent using these instructions:
# from agents import Agent
# agent = Agent(system_instructions=review_instructions)
# agent.run(code_to_review="import os\nos.system('echo vulnerable')")

print("Code review demo finished. Agent creation and execution would follow.")
```

----------------------------------------

TITLE: Install LiteLLM Dependency
DESCRIPTION: Installs the necessary dependency group for integrating LiteLLM with OpenAI Agents, enabling the use of non-OpenAI models.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/models/index.md#_snippet_0

LANGUAGE: bash
CODE:
```
pip install "openai-agents[litellm]"
```

----------------------------------------

TITLE: Quick Start: Agent Session Memory
DESCRIPTION: Demonstrates how to use the SQLiteSession for managing conversation history across multiple agent turns. Shows both asynchronous and synchronous runner usage.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/README.md#_snippet_4

LANGUAGE: python
CODE:
```
import asyncio

from agents import Agent, Runner, SQLiteSession

# Create agent
agent = Agent(
    name="Assistant",
    instructions="Reply very concisely.",
)

# Create a session instance
session = SQLiteSession("conversation_123")


async def main():
    # First turn
    result = await Runner.run(
        agent,
        "What city is the Golden Gate Bridge in?",
        session=session,
    )
    print(result.final_output)  # "San Francisco"

    # Second turn - agent automatically remembers previous context
    result = await Runner.run(
        agent,
        "What state is it in?",
        session=session,
    )
    print(result.final_output)  # "California"


asyncio.run(main())

# Also works with the synchronous runner (outside a running event loop):
# result = Runner.run_sync(
#     agent,
#     "What's the population?",
#     session=session,
# )
# print(result.final_output)  # "Approximately 39 million"
```

----------------------------------------

TITLE: Run Custom Example Provider Script
DESCRIPTION: Executes a Python script that demonstrates the usage of a custom LLM provider. This command assumes the environment variables for the custom provider have been set correctly.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/examples/model_providers/README.md#_snippet_1

LANGUAGE: bash
CODE:
```
python examples/model_providers/custom_example_provider.py
```

----------------------------------------

TITLE: Define Multiple Agents with Handoff Descriptions (Python)
DESCRIPTION: Illustrates how to define multiple specialized agents, each with a unique name, instructions, and a `handoff_description`. The `handoff_description` provides context for routing decisions when agents are part of a larger workflow.
SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_5

LANGUAGE: python
CODE:
```
from agents import Agent

history_tutor_agent = Agent(
    name="History Tutor",
    handoff_description="Specialist agent for historical questions",
    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
)

math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
)
```

----------------------------------------

TITLE: Code Style and Type Checking
DESCRIPTION: Commands to ensure code adheres to project style guidelines and passes type checks.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/AGENTS.md#_snippet_3

LANGUAGE: bash
CODE:
```
uv run ruff format
```

LANGUAGE: bash
CODE:
```
uv run mypy .
```

----------------------------------------

TITLE: Run Agent Orchestration (Python)
DESCRIPTION: Demonstrates how to execute a workflow orchestrated by an agent, such as the 'Triage Agent'. The `Runner.run` method takes the orchestrating agent and the user's input, returning the final output of the executed workflow.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/ja/quickstart.md#_snippet_7

LANGUAGE: python
CODE:
```
from agents import Runner


async def main():
    result = await Runner.run(triage_agent, "What is the capital of France?")
    print(result.final_output)
```

----------------------------------------

TITLE: Set OpenAI API Key
DESCRIPTION: Sets the OpenAI API key as an environment variable. This key is required for the SDK to authenticate with OpenAI services. Refer to OpenAI documentation for key creation.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/quickstart.md#_snippet_3

LANGUAGE: bash
CODE:
```
export OPENAI_API_KEY=sk-...
```

----------------------------------------

TITLE: Create Realtime Agent with Handoffs
DESCRIPTION: Illustrates how to configure handoffs between different `RealtimeAgent` instances. This allows a main agent to delegate specific tasks or conversation flows to specialized agents, enhancing modularity and specialization.

SOURCE: https://github.com/openai/openai-agents-python/blob/main/docs/realtime/guide.md#_snippet_1

LANGUAGE: python
CODE:
```
from agents.realtime import RealtimeAgent, realtime_handoff

# Specialized agents
billing_agent = RealtimeAgent(
    name="Billing Support",
    instructions="You specialize in billing and payment issues.",
)

technical_agent = RealtimeAgent(
    name="Technical Support",
    instructions="You handle technical troubleshooting.",
)

# Main agent with handoffs
main_agent = RealtimeAgent(
    name="Customer Service",
    instructions="You are the main customer service agent. Hand off to specialists when needed.",
    handoffs=[
        realtime_handoff(billing_agent, tool_description="Transfer to billing support"),
        realtime_handoff(technical_agent, tool_description="Transfer to technical support"),
    ]
)
```