Requests
https://github.com/psf/requests
Requests is a simple, elegant, and widely-used Python HTTP library that makes sending HTTP requests
...
Tokens: 41,542 · Snippets: 322 · Trust Score: 7.3 · Updated: 1 week ago
Context Summary (auto-generated)
# Requests - HTTP for Humans

Requests is an elegant and simple HTTP library for Python, designed for human beings. It allows you to send HTTP/1.1 requests extremely easily without needing to manually add query strings to URLs or form-encode POST data. The library provides a clean API for making GET, POST, PUT, DELETE, HEAD, OPTIONS, and PATCH requests, with automatic JSON encoding/decoding, cookie persistence, connection pooling, and SSL verification built in.

The library is one of the most downloaded Python packages, with approximately 30 million downloads per week and over 1 million dependent repositories on GitHub. Requests officially supports Python 3.10+ and provides features including Keep-Alive & Connection Pooling, International Domain Names and URLs, Sessions with Cookie Persistence, Browser-style TLS/SSL Verification, Basic & Digest Authentication, Multi-part File Uploads, SOCKS Proxy Support, Connection Timeouts, and Streaming Downloads.

## GET Request

Sends a GET request to retrieve data from the specified URL. The `params` argument allows passing query-string parameters as a dictionary, and the response object provides access to the status code, headers, content, and JSON decoding.

```python
import requests

# Basic GET request
r = requests.get('https://api.github.com/events')
print(r.status_code)              # 200
print(r.headers['content-type'])  # 'application/json; charset=utf-8'

# GET with query parameters
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('https://httpbin.org/get', params=payload)
print(r.url)  # https://httpbin.org/get?key1=value1&key2=value2

# Access response content
print(r.text)     # Response body as unicode text
print(r.content)  # Response body as bytes
print(r.json())   # Parse JSON response into Python dict/list

# Check response encoding
print(r.encoding)          # 'utf-8'
r.encoding = 'ISO-8859-1'  # Override encoding if needed
```

## POST Request

Sends a POST request with form data, JSON payload, or files.
The `data` parameter accepts dictionaries for form-encoded data, while the `json` parameter automatically serializes Python objects to JSON and sets the Content-Type header.

```python
import requests

# POST with form-encoded data
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.post('https://httpbin.org/post', data=payload)
print(r.json()['form'])  # {'key1': 'value1', 'key2': 'value2'}

# POST with JSON data (auto-serializes and sets Content-Type header)
payload = {'username': 'john', 'email': 'john@example.com'}
r = requests.post('https://httpbin.org/post', json=payload)
print(r.json()['json'])  # {'username': 'john', 'email': 'john@example.com'}

# POST with multiple values for the same key
payload_tuples = [('key1', 'value1'), ('key1', 'value2')]
r = requests.post('https://httpbin.org/post', data=payload_tuples)

# POST with a raw string body
r = requests.post('https://httpbin.org/post', data='raw string body')
```

## PUT, PATCH, DELETE Requests

Sends PUT, PATCH, or DELETE requests for updating or removing resources. These methods follow the same pattern as GET and POST: PUT typically replaces an entire resource, PATCH applies a partial update, and DELETE removes a resource.
```python
import requests

# PUT request - typically replaces the entire resource
r = requests.put('https://httpbin.org/put', data={'key': 'value'})
print(r.status_code)  # 200

# PUT with JSON
r = requests.put('https://httpbin.org/put', json={'name': 'updated_name'})

# PATCH request - partial update
r = requests.patch('https://httpbin.org/patch', data={'field': 'new_value'})
print(r.json()['form'])  # {'field': 'new_value'}

# DELETE request
r = requests.delete('https://httpbin.org/delete')
print(r.status_code)  # 200

# HEAD request - get headers only, no body
r = requests.head('https://httpbin.org/get')
print(r.headers)

# OPTIONS request - check allowed methods
r = requests.options('https://httpbin.org/get')
```

## Custom Headers

Adds custom HTTP headers to requests by passing a dictionary to the `headers` parameter. Headers are case-insensitive and are automatically merged with the default headers set by Requests.

```python
import requests

# Add custom headers
headers = {
    'User-Agent': 'my-app/1.0',
    'Accept': 'application/json',
    'Authorization': 'Bearer my-token',
    'X-Custom-Header': 'custom-value',
}
r = requests.get('https://httpbin.org/headers', headers=headers)
print(r.json()['headers'])

# Access response headers (case-insensitive)
print(r.headers['Content-Type'])
print(r.headers.get('content-type'))  # Same value, case-insensitive
```

## Session Objects

Session objects persist settings across requests, including cookies, authentication, and headers. They provide connection pooling for improved performance when making multiple requests to the same host.
```python
import requests

# Create a session with persistent settings
s = requests.Session()
s.headers.update({'User-Agent': 'my-app/1.0', 'Accept': 'application/json'})
s.auth = ('user', 'pass')

# All requests through the session inherit its settings
r1 = s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
r2 = s.get('https://httpbin.org/cookies')
print(r2.json()['cookies'])  # {'sessioncookie': '123456789'}

# Session as a context manager (auto-closes)
with requests.Session() as s:
    s.get('https://httpbin.org/cookies/set/key/value')
    r = s.get('https://httpbin.org/cookies')
    print(r.json())

# Override session settings per-request
s = requests.Session()
s.headers.update({'x-session': 'true'})
r = s.get('https://httpbin.org/headers', headers={'x-request': 'true'})
# Both the x-session and x-request headers are sent
```

## Authentication

Provides built-in support for HTTP Basic and Digest authentication. Custom authentication handlers can be created by subclassing `AuthBase`.

```python
import requests
from requests.auth import AuthBase, HTTPBasicAuth, HTTPDigestAuth

# HTTP Basic Auth - two equivalent forms
r = requests.get('https://httpbin.org/basic-auth/user/pass', auth=('user', 'pass'))
r = requests.get('https://httpbin.org/basic-auth/user/pass',
                 auth=HTTPBasicAuth('user', 'pass'))
print(r.status_code)  # 200

# HTTP Digest Auth
r = requests.get('https://httpbin.org/digest-auth/auth/user/pass',
                 auth=HTTPDigestAuth('user', 'pass'))
print(r.status_code)  # 200

# Custom authentication handler
class TokenAuth(AuthBase):
    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        r.headers['Authorization'] = f'Bearer {self.token}'
        return r

r = requests.get('https://httpbin.org/headers', auth=TokenAuth('my-secret-token'))
```

## File Uploads

Uploads files using multipart/form-data encoding. Files can be specified as file objects, as tuples with a filename and content, or with explicit content-type headers.
```python
import requests

# Simple file upload
files = {'file': open('report.txt', 'rb')}
r = requests.post('https://httpbin.org/post', files=files)

# File with explicit filename and content type
files = {
    'file': ('report.pdf', open('report.pdf', 'rb'), 'application/pdf')
}
r = requests.post('https://httpbin.org/post', files=files)

# File with custom headers
files = {
    'file': ('report.csv', open('report.csv', 'rb'), 'text/csv', {'Expires': '0'})
}
r = requests.post('https://httpbin.org/post', files=files)

# Upload a string as a file
files = {'file': ('data.csv', 'col1,col2\nval1,val2\n')}
r = requests.post('https://httpbin.org/post', files=files)

# Multiple files
files = [
    ('images', ('foo.png', open('foo.png', 'rb'), 'image/png')),
    ('images', ('bar.png', open('bar.png', 'rb'), 'image/png')),
]
r = requests.post('https://httpbin.org/post', files=files)

# Files with additional form data
r = requests.post('https://httpbin.org/post',
                  data={'name': 'report'},
                  files={'file': open('report.txt', 'rb')})
```

## Cookies

Manages cookies using dictionaries or `RequestsCookieJar` objects. Session objects automatically persist cookies across requests.

```python
import requests

# Send cookies with a request
cookies = {'session_id': 'abc123', 'user': 'john'}
r = requests.get('https://httpbin.org/cookies', cookies=cookies)
print(r.json()['cookies'])  # {'session_id': 'abc123', 'user': 'john'}

# Access response cookies
r = requests.get('https://httpbin.org/cookies/set/mycookie/myvalue')
print(r.cookies['mycookie'])  # 'myvalue'

# Use RequestsCookieJar for domain/path-specific cookies
jar = requests.cookies.RequestsCookieJar()
jar.set('cookie1', 'value1', domain='httpbin.org', path='/cookies')
jar.set('cookie2', 'value2', domain='httpbin.org', path='/other')
r = requests.get('https://httpbin.org/cookies', cookies=jar)
print(r.json()['cookies'])  # Only cookie1 matches the path
```

## Timeouts

Specifies connection and read timeouts to prevent requests from hanging indefinitely.
The timeout can be a single value (applied to both) or a tuple of `(connect_timeout, read_timeout)`.

```python
import requests
from requests.exceptions import Timeout, ConnectTimeout, ReadTimeout

# Single timeout value for both connect and read
try:
    r = requests.get('https://httpbin.org/delay/10', timeout=5)
except Timeout:
    print('Request timed out')

# Separate connect and read timeouts
try:
    r = requests.get('https://httpbin.org/delay/10', timeout=(3.05, 10))
except ConnectTimeout:
    print('Connection timed out')
except ReadTimeout:
    print('Server did not send data in time')

# No timeout (wait forever) - not recommended for production
r = requests.get('https://httpbin.org/get', timeout=None)

# Session with a default timeout
s = requests.Session()
# Note: timeout must be passed per-request; it cannot be set on the session
r = s.get('https://httpbin.org/get', timeout=5)
```

## SSL Certificate Verification

Controls SSL/TLS certificate verification. By default, Requests verifies certificates. A custom CA bundle can be specified, or verification can be disabled (not recommended for production).

```python
import requests

# Default: SSL verification enabled
r = requests.get('https://github.com')  # Verifies the certificate

# Specify a custom CA bundle
r = requests.get('https://example.com', verify='/path/to/ca-bundle.crt')

# Specify a directory of CA certificates
r = requests.get('https://example.com', verify='/path/to/certdir/')

# Disable verification (NOT recommended for production)
r = requests.get('https://self-signed.badssl.com/', verify=False)
# Warning: this makes your application vulnerable to MitM attacks

# Client-side certificate
r = requests.get('https://example.com', cert='/path/client.pem')
# Or as a (cert, key) tuple
r = requests.get('https://example.com', cert=('/path/client.cert', '/path/client.key'))

# Session with persistent SSL settings
s = requests.Session()
s.verify = '/path/to/ca-bundle.crt'
s.cert = '/path/client.pem'
```

## Proxies

Routes requests through HTTP or SOCKS proxies.
Proxies can be configured per-request, per-session, or via environment variables.

```python
import requests

# HTTP/HTTPS proxies
proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}
r = requests.get('http://example.org', proxies=proxies)

# Proxy with authentication
proxies = {
    'http': 'http://user:password@10.10.1.10:3128',
    'https': 'http://user:password@10.10.1.10:1080',
}
r = requests.get('http://example.org', proxies=proxies)

# Host-specific proxy
proxies = {
    'http://specific-host.com': 'http://10.10.1.10:3128',
}

# SOCKS proxy (requires requests[socks])
# pip install 'requests[socks]'
proxies = {
    'http': 'socks5://user:pass@host:port',
    'https': 'socks5://user:pass@host:port',
}
r = requests.get('http://example.org', proxies=proxies)

# Session with proxy settings
s = requests.Session()
s.proxies.update(proxies)
```

## Streaming Downloads

Downloads large files efficiently by streaming content in chunks instead of loading everything into memory at once. Use `stream=True` and iterate with `iter_content()` or `iter_lines()`.
```python
import requests

# Stream a large file download
with requests.get('https://example.com/large-file.zip', stream=True) as r:
    r.raise_for_status()
    with open('large-file.zip', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)

# Stream with progress tracking
url = 'https://example.com/large-file.zip'
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    total_size = int(r.headers.get('content-length', 0))
    downloaded = 0
    with open('large-file.zip', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
            downloaded += len(chunk)
            print(f'Downloaded: {downloaded}/{total_size} bytes')

# Stream text line by line (useful for streaming APIs)
r = requests.get('https://httpbin.org/stream/20', stream=True)
for line in r.iter_lines(decode_unicode=True):
    if line:
        print(line)

# Access the raw socket response
r = requests.get('https://httpbin.org/stream/20', stream=True)
print(r.raw.read(100))  # Read raw bytes
```

## Error Handling

Handles the exceptions that can occur during requests, including connection errors, timeouts, HTTP errors, and invalid responses. Use `raise_for_status()` to raise an exception for 4xx/5xx responses.
```python
import requests
from requests.exceptions import (
    RequestException, ConnectionError, HTTPError,
    Timeout, TooManyRedirects, JSONDecodeError
)

# Comprehensive error handling
try:
    r = requests.get('https://httpbin.org/status/404', timeout=5)
    r.raise_for_status()  # Raises HTTPError for 4xx/5xx
except Timeout:
    # Catch Timeout before ConnectionError: ConnectTimeout subclasses both
    print('Request timed out')
except ConnectionError:
    print('Failed to connect to server')
except TooManyRedirects:
    print('Too many redirects')
except HTTPError as e:
    print(f'HTTP error: {e.response.status_code} - {e.response.reason}')
except RequestException as e:
    print(f'Request failed: {e}')

# Check the response status without raising an exception
r = requests.get('https://httpbin.org/status/404')
if r.ok:  # True if status_code < 400
    print('Success')
else:
    print(f'Error: {r.status_code}')

# Handle JSON decode errors
r = requests.get('https://httpbin.org/html')  # Returns HTML, not JSON
try:
    data = r.json()
except JSONDecodeError:
    print('Response is not valid JSON')
    print(r.text[:100])
```

## Prepared Requests

Creates `PreparedRequest` objects for low-level control over a request before it is sent. Useful for modifying requests after preparation or implementing custom request flows.
```python
import requests
from requests import Request, Session

s = Session()

# Create and prepare a request manually
req = Request('POST', 'https://httpbin.org/post',
              data={'key': 'value'},
              headers={'Custom-Header': 'value'})
prepped = s.prepare_request(req)

# Modify the prepared request
prepped.body = 'Modified body content'
prepped.headers['Another-Header'] = 'another-value'

# Send the prepared request
resp = s.send(prepped, timeout=5)
print(resp.status_code)

# Access the original request from the response
r = requests.get('https://httpbin.org/get')
print(r.request.headers)  # Headers sent with the request
print(r.request.url)      # URL that was requested

# Merge environment settings for prepared requests
settings = s.merge_environment_settings(prepped.url, {}, None, None, None)
resp = s.send(prepped, **settings)
```

## Event Hooks

Attaches callback functions to be called at specific points during request processing. The `response` hook is called after a response is received.

```python
import requests
import time

# Define hook functions
def log_url(response, *args, **kwargs):
    print(f'Request URL: {response.url}')
    print(f'Status: {response.status_code}')

def add_timestamp(response, *args, **kwargs):
    response.timestamp = time.time()
    return response

# Single hook
r = requests.get('https://httpbin.org/get', hooks={'response': log_url})

# Multiple hooks
r = requests.get('https://httpbin.org/get',
                 hooks={'response': [log_url, add_timestamp]})
print(r.timestamp)

# Session-level hooks (applied to all requests)
s = requests.Session()
s.hooks['response'].append(log_url)
s.hooks['response'].append(add_timestamp)
r = s.get('https://httpbin.org/get')      # Both hooks called
r = s.get('https://httpbin.org/headers')  # Both hooks called again
```

## Response Object Properties

The Response object contains all information returned by the server, including the status, headers, cookies, content, and the original request.
Multiple properties and methods are available for accessing different representations of the response.

```python
import requests

r = requests.get('https://httpbin.org/get')

# Status information
print(r.status_code)            # 200
print(r.reason)                 # 'OK'
print(r.ok)                     # True (status_code < 400)
print(r.is_redirect)            # False
print(r.is_permanent_redirect)  # False

# Headers (case-insensitive dict)
print(r.headers)  # All response headers
print(r.headers['Content-Type'])
print(r.headers.get('X-Missing', 'default'))

# Content in various formats
print(r.content)   # bytes
print(r.text)      # unicode string
print(r.json())    # parsed JSON (dict/list)
print(r.encoding)  # detected encoding

# URL and history
print(r.url)      # final URL after redirects
print(r.history)  # list of redirect responses

# Cookies
print(r.cookies)        # CookieJar
print(dict(r.cookies))  # as a dict

# Request info
print(r.request.method)  # 'GET'
print(r.request.url)
print(r.request.headers)

# Timing
print(r.elapsed)  # timedelta of request duration

# Links (parsed Link headers)
print(r.links)  # {'next': {'url': '...', 'rel': 'next'}, ...}
```

## Redirects and History

Controls automatic redirect following and provides access to the redirect history. By default, Requests follows redirects for all methods except HEAD.
```python
import requests

# Default: follows redirects
r = requests.get('http://github.com/')
print(r.url)          # 'https://github.com/' (redirected)
print(r.status_code)  # 200
print(r.history)      # [<Response [301]>]

# Disable redirects
r = requests.get('http://github.com/', allow_redirects=False)
print(r.status_code)          # 301
print(r.headers['Location'])  # redirect target

# Enable redirects for HEAD
r = requests.head('http://github.com/', allow_redirects=True)
print(r.url)      # Final URL
print(r.history)  # Redirect chain

# Access the next redirect without following it
r = requests.get('http://github.com/', allow_redirects=False)
if r.is_redirect:
    print(r.next)  # PreparedRequest for the next redirect

# Session with a redirect limit
s = requests.Session()
s.max_redirects = 5  # Limit the redirect chain length
```

## Summary

Requests is the de facto standard for making HTTP requests in Python, providing an intuitive API that handles the complexity of HTTP communication while remaining flexible enough for advanced use cases. The library excels at common tasks like consuming REST APIs, authenticating with web services, uploading files, and downloading content, all with minimal boilerplate code and sensible defaults for SSL verification, connection pooling, and encoding detection.

For integration patterns, Requests works well as a standalone library for simple scripts, within Session contexts for applications making multiple requests to the same service, or alongside asynchronous alternatives such as grequests or httpx for concurrent operations. The extensible authentication system, transport adapters, and hook mechanisms allow customization for enterprise requirements, including custom SSL configurations, retry logic, and specialized authentication schemes like OAuth or NTLM through community-maintained extensions.
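The transport adapters and retry logic mentioned above can be combined: mounting an `HTTPAdapter` configured with urllib3's `Retry` onto a session makes every request through that session retry transient failures automatically. This is a minimal sketch, assuming urllib3 >= 1.26 (where `allowed_methods` replaced the older `method_whitelist` parameter); the specific retry counts and status codes are illustrative choices, not library defaults.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient failures: up to 3 attempts with exponential backoff,
# restricted to idempotent methods and typical transient status codes.
retry = Retry(
    total=3,
    backoff_factor=0.5,  # exponential backoff between attempts
    status_forcelist=[429, 500, 502, 503, 504],
    allowed_methods=['GET', 'HEAD'],
)

s = requests.Session()
adapter = HTTPAdapter(max_retries=retry)
s.mount('https://', adapter)  # applies to every https:// URL
s.mount('http://', adapter)

# All requests through this session now retry automatically, e.g.:
# r = s.get('https://httpbin.org/status/503', timeout=5)
print(s.get_adapter('https://example.com').max_retries.total)  # 3
```

Mounting is prefix-based, so adapters can also be scoped to a single host (e.g. `s.mount('https://api.example.com/', adapter)`) while other hosts keep the default behavior.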