# OpenAI Platform API Documentation

The OpenAI Platform provides a comprehensive REST API for integrating state-of-the-art AI models into applications. The platform offers text generation, vision capabilities, audio processing, embeddings, image generation, and agentic workflows through a simple HTTP interface. The API supports multiple programming languages via official SDKs (Python, JavaScript/TypeScript, .NET, Java, Go) and includes advanced features like streaming responses, function calling, batch processing, and fine-tuning.

The platform is built around a unified Responses API that handles text generation, multimodal inputs (images, files, audio), tool usage (web search, file search, custom functions, MCP servers), and streaming output. Developers can build conversational applications, autonomous agents, search-enhanced systems, and complex workflows using models ranging from the flagship GPT-5 series to specialized reasoning models (o-series) and cost-efficient variants (mini, nano).

Authentication uses API keys, and pricing follows a token-based model with multiple processing tiers (standard, flex, batch, priority) offering different latency and cost tradeoffs.

## API Reference

### Create a Response (Text Generation)

Generate text output from a prompt using the Responses API.

```bash
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "input": "Write a short bedtime story about a unicorn."
  }'
```

```javascript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  input: "Write a short bedtime story about a unicorn."
});

console.log(response.output_text);
```

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Write a short bedtime story about a unicorn."
)

print(response.output_text)
```

### Create a Response with Image Input

Analyze images by passing image URLs or uploaded files to vision-capable models.

```bash
curl "https://api.openai.com/v1/responses" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "input_text", "text": "What is in this image?" },
          {
            "type": "input_image",
            "image_url": "https://openai-documentation.vercel.app/images/cat_and_otter.png"
          }
        ]
      }
    ]
  }'
```

```javascript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  input: [
    {
      role: "user",
      content: [
        {
          type: "input_text",
          text: "What is in this image?",
        },
        {
          type: "input_image",
          image_url: "https://openai-documentation.vercel.app/images/cat_and_otter.png",
        },
      ],
    },
  ],
});

console.log(response.output_text);
```
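A Python version of the same image request, mirroring the JavaScript example above with the same `input_text`/`input_image` content parts:

```python
from openai import OpenAI

client = OpenAI()

# Mixed content: a text question plus an image URL in a single user turn.
response = client.responses.create(
    model="gpt-5",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is in this image?"},
                {
                    "type": "input_image",
                    "image_url": "https://openai-documentation.vercel.app/images/cat_and_otter.png",
                },
            ],
        }
    ],
)

print(response.output_text)
```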
### Upload and Use File as Input

Upload files (PDFs, documents) and pass them as input for analysis.

```bash
# First upload the file
curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F purpose="user_data" \
  -F file="@document.pdf"

# Then use the file_id in a response
curl "https://api.openai.com/v1/responses" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "input_file", "file_id": "file-abc123" },
          {
            "type": "input_text",
            "text": "Summarize the key points in this document."
          }
        ]
      }
    ]
  }'
```

```javascript
import fs from "fs";
import OpenAI from "openai";

const client = new OpenAI();

const file = await client.files.create({
  file: fs.createReadStream("document.pdf"),
  purpose: "user_data",
});

const response = await client.responses.create({
  model: "gpt-5",
  input: [
    {
      role: "user",
      content: [
        {
          type: "input_file",
          file_id: file.id,
        },
        {
          type: "input_text",
          text: "Summarize the key points in this document.",
        },
      ],
    },
  ],
});

console.log(response.output_text);
```

```python
from openai import OpenAI

client = OpenAI()

file = client.files.create(
    file=open("document.pdf", "rb"),
    purpose="user_data"
)

response = client.responses.create(
    model="gpt-5",
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_file",
                    "file_id": file.id,
                },
                {
                    "type": "input_text",
                    "text": "Summarize the key points in this document.",
                },
            ]
        }
    ]
)

print(response.output_text)
```

### Use Web Search Tool

Enable web search to allow models to retrieve current information from the internet.

```bash
curl "https://api.openai.com/v1/responses" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "tools": [{"type": "web_search"}],
    "input": "What was the most significant tech announcement today?"
  }'
```

```javascript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  tools: [
    { type: "web_search" },
  ],
  input: "What was the most significant tech announcement today?",
});

console.log(response.output_text);
```

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    tools=[{"type": "web_search"}],
    input="What was the most significant tech announcement today?"
)

print(response.output_text)
```

### Function Calling

Define custom functions that the model can call to extend its capabilities.

```javascript
import OpenAI from "openai";

const client = new OpenAI();

const tools = [
  {
    type: "function",
    name: "get_weather",
    description: "Get current temperature for a given location.",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "City and country e.g. Paris, France",
        },
      },
      required: ["location"],
      additionalProperties: false,
    },
    strict: true,
  },
];

const response = await client.responses.create({
  model: "gpt-5",
  input: [
    { role: "user", content: "What is the weather like in Paris today?" },
  ],
  tools,
});

console.log(response.output[0]);
```

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. Paris, France",
                }
            },
            "required": ["location"],
            "additionalProperties": False,
        },
        "strict": True,
    },
]

response = client.responses.create(
    model="gpt-5",
    input=[
        {"role": "user", "content": "What is the weather like in Paris today?"},
    ],
    tools=tools,
)

print(response.output[0].to_json())
```

```bash
curl -X POST https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "input": [
      {"role": "user", "content": "What is the weather like in Paris today?"}
    ],
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get current temperature for a given location.",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and country e.g. Paris, France"
            }
          },
          "required": ["location"],
          "additionalProperties": false
        },
        "strict": true
      }
    ]
  }'
```
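The snippets above end once the model emits a `function_call` item. A minimal sketch of the follow-up turn, reusing the `tools` list from the Python example above (the `get_weather` body here is a hypothetical stand-in): the tool's result is sent back as a `function_call_output` item and the model is called again.

```python
import json

from openai import OpenAI

client = OpenAI()

def get_weather(location: str) -> str:
    # Hypothetical stand-in; a real implementation would call a weather service.
    return f"It is 18°C and sunny in {location}."

input_items = [
    {"role": "user", "content": "What is the weather like in Paris today?"}
]

# First turn: `tools` is the list defined in the Python example above.
response = client.responses.create(model="gpt-5", input=input_items, tools=tools)

# Keep the model's tool-call items in the conversation, then answer each call it made.
input_items += response.output
for item in response.output:
    if item.type == "function_call" and item.name == "get_weather":
        args = json.loads(item.arguments)
        input_items.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": get_weather(args["location"]),
        })

# Second turn: the model now sees the tool result and can answer in plain text.
final = client.responses.create(model="gpt-5", input=input_items, tools=tools)
print(final.output_text)
```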
Paris, France" } }, "required": ["location"], "additionalProperties": false }, "strict": true } ] }' ``` ### File Search Tool Search through uploaded documents using vector stores. ```javascript import OpenAI from "openai"; const openai = new OpenAI(); const response = await openai.responses.create({ model: "gpt-4.1", input: "What is deep research by OpenAI?", tools: [ { type: "file_search", vector_store_ids: ["vs_abc123"], }, ], }); console.log(response.output_text); ``` ```python from openai import OpenAI client = OpenAI() response = client.responses.create( model="gpt-4.1", input="What is deep research by OpenAI?", tools=[{ "type": "file_search", "vector_store_ids": ["vs_abc123"] }] ) print(response.output_text) ``` ### Remote MCP Server Integration Connect to Model Context Protocol (MCP) servers for custom data sources and capabilities. ```bash curl https://api.openai.com/v1/responses \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $OPENAI_API_KEY" \ -d '{ "model": "gpt-5", "tools": [ { "type": "mcp", "server_label": "custom_server", "server_description": "Custom data server for specialized queries", "server_url": "https://mcp.example.com/sse", "require_approval": "never" } ], "input": "Query my custom data source for recent updates" }' ``` ```javascript import OpenAI from "openai"; const client = new OpenAI(); const response = await client.responses.create({ model: "gpt-5", tools: [ { type: "mcp", server_label: "custom_server", server_description: "Custom data server for specialized queries", server_url: "https://mcp.example.com/sse", require_approval: "never", }, ], input: "Query my custom data source for recent updates", }); console.log(response.output_text); ``` ```python from openai import OpenAI client = OpenAI() response = client.responses.create( model="gpt-5", tools=[ { "type": "mcp", "server_label": "custom_server", "server_description": "Custom data server for specialized queries", "server_url": "https://mcp.example.com/sse", "require_approval": "never", }, ], input="Query my custom data source for recent updates", ) print(response.output_text) ``` ### Stream Responses Stream model output as it's generated using server-sent events. ```javascript import { OpenAI } from "openai"; const client = new OpenAI(); const stream = await client.responses.create({ model: "gpt-5", input: [ { role: "user", content: "Write a poem about artificial intelligence.", }, ], stream: true, }); for await (const event of stream) { console.log(event); } ``` ```python from openai import OpenAI client = OpenAI() stream = client.responses.create( model="gpt-5", input=[ { "role": "user", "content": "Write a poem about artificial intelligence.", }, ], stream=True, ) for event in stream: print(event) ``` ### Create Embeddings Generate vector embeddings for text to enable semantic search, clustering, and recommendations. 
### Create Embeddings

Generate vector embeddings for text to enable semantic search, clustering, and recommendations.

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog"
  }'
```

```javascript
import OpenAI from "openai";

const client = new OpenAI();

const embedding = await client.embeddings.create({
  model: "text-embedding-3-small",
  input: "The quick brown fox jumps over the lazy dog",
});

console.log(embedding.data[0].embedding);
```

```python
from openai import OpenAI

client = OpenAI()

embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog"
)

print(embedding.data[0].embedding)
```

### Batch Processing

Submit large volumes of requests for asynchronous processing at reduced costs.

```bash
# Create batch input file (JSONL)
# batch_input.jsonl:
# {"custom_id": "request-1", "method": "POST", "url": "/v1/responses", "body": {"model": "gpt-5", "input": "Hello world"}}

# Upload the batch file
curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F purpose="batch" \
  -F file="@batch_input.jsonl"

# Create batch job
curl https://api.openai.com/v1/batches \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input_file_id": "file-abc123",
    "endpoint": "/v1/responses",
    "completion_window": "24h"
  }'

# Check batch status
curl https://api.openai.com/v1/batches/batch_abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

```python
from openai import OpenAI

client = OpenAI()

# Upload batch input file
batch_file = client.files.create(
    file=open("batch_input.jsonl", "rb"),
    purpose="batch"
)

# Create batch
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/responses",
    completion_window="24h"
)

print(f"Batch ID: {batch.id}")

# Check status
batch_status = client.batches.retrieve(batch.id)
print(f"Status: {batch_status.status}")
```
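Once a batch reaches the `completed` status, its results are written to an output file. A minimal retrieval sketch (the batch ID is a placeholder; each line of the output file is one JSON result keyed by `custom_id`):

```python
import json

from openai import OpenAI

client = OpenAI()

batch = client.batches.retrieve("batch_abc123")  # placeholder batch ID

if batch.status == "completed" and batch.output_file_id:
    # The output file is JSONL: one result object per request in the batch.
    output = client.files.content(batch.output_file_id)
    for line in output.text.splitlines():
        result = json.loads(line)
        print(result["custom_id"], result["response"]["status_code"])
```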
### Build Multi-Agent Systems

Create agent workflows with handoffs and specialized roles using the Agents SDK.

```javascript
import { Agent, run } from '@openai/agents';

const spanishAgent = new Agent({
  name: 'Spanish agent',
  instructions: 'You only speak Spanish.',
});

const englishAgent = new Agent({
  name: 'English agent',
  instructions: 'You only speak English.',
});

const triageAgent = new Agent({
  name: 'Triage agent',
  instructions: 'Handoff to the appropriate agent based on the language of the request.',
  handoffs: [spanishAgent, englishAgent],
});

const result = await run(triageAgent, 'Hola, ¿cómo estás?');
console.log(result.finalOutput);
```

```python
from agents import Agent, Runner
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)

async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

### Environment Setup

Configure API authentication by exporting your API key as an environment variable.

```bash
# macOS / Linux
export OPENAI_API_KEY="sk-proj-your-api-key-here"

# Windows PowerShell
setx OPENAI_API_KEY "sk-proj-your-api-key-here"
```

```javascript
// JavaScript/Node.js
// The SDK automatically reads from process.env.OPENAI_API_KEY
import OpenAI from "openai";

const client = new OpenAI();
```

```python
# Python
# The SDK automatically reads from os.environ.get("OPENAI_API_KEY")
from openai import OpenAI

client = OpenAI()
```

### Install Official SDKs

```bash
# JavaScript/TypeScript (Node.js, Deno, Bun)
npm install openai

# Python
pip install openai

# .NET
dotnet add package OpenAI

# Java
# Add to pom.xml:
# <dependency>
#   <groupId>com.openai</groupId>
#   <artifactId>openai-java</artifactId>
#   <version>4.0.0</version>
# </dependency>

# Go
# import "github.com/openai/openai-go"
```

## Integration Patterns and Use Cases

The OpenAI Platform excels at building conversational AI applications, autonomous agents, content generation systems, and data analysis tools. Common integration patterns include chat interfaces using the Responses API with streaming for real-time output, retrieval-augmented generation (RAG) systems combining file search or MCP servers with text generation, and multi-step agent workflows using function calling or the Agents SDK.

The platform supports cost optimization through batch processing for non-time-sensitive workloads, prompt caching to reduce redundant processing costs, and flexible tier selection (flex for lower costs with higher latency, priority for faster processing). Token-based pricing means inputs and outputs are charged separately, with cached inputs offering significant discounts.

Advanced use cases include computer use capabilities for automating UI interactions, deep research workflows using specialized reasoning models (o-series) with tool access, multimodal applications processing images and PDFs alongside text, and custom tool integration via MCP servers for proprietary data sources. The API supports fine-tuning for domain-specific adaptation, structured outputs with JSON schema enforcement, and real-time voice applications through the Realtime API.

Security features include organization-level access controls, audit logging, and certificate management. The platform also provides comprehensive error handling, rate limiting, and monitoring through the dashboard, making it suitable for production deployments ranging from simple chatbots to complex enterprise AI systems.
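As an illustration of the structured outputs mentioned above, the sketch below requests output that must conform to a JSON Schema. It assumes the Responses API's `text.format` JSON-schema option; the `calendar_event` schema is only an example, not part of the reference above.

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Alice and Bob are meeting for coffee on Friday at 10am.",
    text={
        "format": {
            "type": "json_schema",
            "name": "calendar_event",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "day": {"type": "string"},
                    "participants": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["title", "day", "participants"],
                "additionalProperties": False,
            },
        }
    },
)

# With a strict schema, the output text parses as JSON matching the schema.
event = json.loads(response.output_text)
print(event["title"], event["participants"])
```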