AWS SDK for JavaScript
https://github.com/aws/aws-sdk-js-v3
Context Summary (auto-generated)
# AWS SDK for JavaScript v3

The AWS SDK for JavaScript v3 is a modular rewrite of v2 designed to work with Amazon Web Services. It features a separate package for each AWS service, first-class TypeScript support, and a powerful middleware stack architecture. The SDK enables developers to interact with over 400 AWS services through a consistent, promise-based API that works in both Node.js and browser environments.

The SDK's modular architecture allows importing only the services and commands needed, significantly reducing bundle sizes compared to v2. Key features include async generator-based paginators, the AbortController interface for canceling requests, streaming response handling, and comprehensive credential provider options. The SDK supports multiple authentication methods, including IAM credentials, Cognito Identity, Web Identity Federation, and EC2/ECS metadata services.

## Creating and Configuring Service Clients

Service clients are the primary interface for making AWS API calls. Each AWS service has its own client package, available in a modular (bare-bones) and an aggregated version.
```javascript
// ES6 imports - modular approach (recommended for tree-shaking)
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
// Aggregated client (simpler, but a larger bundle)
import { S3 } from "@aws-sdk/client-s3";

// Create a client with the region from the environment or explicit configuration
const s3Client = new S3Client({
  region: "us-west-2",
  // Optional: explicit credentials (prefer environment/IAM role in production)
  credentials: {
    accessKeyId: "AKIAIOSFODNN7EXAMPLE",
    secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  },
});

// Command pattern (tree-shaking compatible)
const getResult = await s3Client.send(
  new GetObjectCommand({
    Bucket: "my-bucket",
    Key: "my-file.txt",
  })
);

// Aggregated client approach
const s3 = new S3({ region: "us-west-2" });
const result = await s3.getObject({ Bucket: "my-bucket", Key: "my-file.txt" });
console.log(await result.Body.transformToString());
```

## HTTP Request Handler Configuration

Configure connection timeouts, socket limits, and request timeouts for optimal performance and reliability.
```javascript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { NodeHttpHandler } from "@smithy/node-http-handler";
import { FetchHttpHandler } from "@smithy/fetch-http-handler";
import https from "node:https";

// Full configuration with NodeHttpHandler
const client = new DynamoDBClient({
  region: "us-west-2",
  requestHandler: new NodeHttpHandler({
    httpsAgent: new https.Agent({
      keepAlive: true,
      maxSockets: 200, // default is 50 per client
    }),
    connectionTimeout: 5000, // ms to establish connection
    requestTimeout: 10000, // ms for the full request/response
    // (socketTimeout is a deprecated alias of requestTimeout)
  }),
});

// Simplified syntax (v3.521.0+) - plain objects are interpreted as constructor params
const simplifiedClient = new DynamoDBClient({
  region: "us-west-2",
  requestHandler: {
    httpsAgent: { maxSockets: 100 },
    connectionTimeout: 5000,
    requestTimeout: 10000,
  },
});

// Browser configuration with FetchHttpHandler
const browserClient = new DynamoDBClient({
  region: "us-west-2",
  requestHandler: new FetchHttpHandler({
    requestTimeout: 30000,
    keepAlive: true,
  }),
});
```

## Retry Strategy Configuration

Configure automatic retries with exponential backoff for transient errors.
```javascript
import { S3Client } from "@aws-sdk/client-s3";
// StandardRetryStrategy and AdaptiveRetryStrategy are also exported here
import { ConfiguredRetryStrategy } from "@smithy/util-retry";

// Simple max-attempts configuration
const clientSimple = new S3Client({
  region: "us-west-2",
  maxAttempts: 5,
});

// Custom retry strategy with backoff
const clientCustom = new S3Client({
  region: "us-west-2",
  retryStrategy: new ConfiguredRetryStrategy(
    5, // max attempts
    (attempt) => 500 + attempt * 1000 // 500ms + 1s per attempt backoff
  ),
});

// Adaptive retry with client-side rate limiting
const clientAdaptive = new S3Client({
  region: "us-west-2",
  retryMode: "adaptive", // or pass an AdaptiveRetryStrategy instance
});

// Custom retry wrapper for fine-grained control
async function withRetry(client, command, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await client.send(command);
    } catch (e) {
      lastError = e;
      const statusCode = e.$metadata?.httpStatusCode;
      if (statusCode >= 400 && statusCode < 500 && statusCode !== 429) {
        throw e; // Don't retry client errors (except throttling)
      }
      await new Promise((r) => setTimeout(r, Math.pow(2, attempt) * 100));
    }
  }
  throw lastError;
}
```

## Credential Providers

The SDK supports multiple credential provider strategies for different deployment scenarios.
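In many environments no explicit provider is needed at all: when the `credentials` option is omitted, a Node.js client resolves credentials through the default provider chain (environment variables, SSO and shared config files, web identity token files, and finally container/instance metadata — order paraphrased from the SDK docs). A minimal sketch; `fromNodeProviderChain` from `@aws-sdk/credential-providers` is the explicit form of that same chain:

```javascript
import { S3Client } from "@aws-sdk/client-s3";
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

// No `credentials` option: the default provider chain resolves them
const client = new S3Client({ region: "us-west-2" });

// The chain can also be constructed explicitly, e.g. to pin a fallback profile
const pinnedProfile = new S3Client({
  region: "us-west-2",
  credentials: fromNodeProviderChain({ profile: "my-profile" }),
});
```

The explicit providers below cover cases where the default chain is not sufficient.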
```javascript
import { S3Client } from "@aws-sdk/client-s3";
import {
  fromIni,
  fromEnv,
  fromCognitoIdentityPool,
  fromTemporaryCredentials,
  fromInstanceMetadata,
  fromContainerMetadata,
  fromTokenFile,
} from "@aws-sdk/credential-providers";

// From environment variables (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
const clientFromEnv = new S3Client({
  region: "us-west-2",
  credentials: fromEnv(),
});

// From shared credentials file (~/.aws/credentials)
const clientFromIni = new S3Client({
  region: "us-west-2",
  credentials: fromIni({
    profile: "my-profile",
    mfaCodeProvider: async (mfaSerial) => {
      // Return the MFA code from user input
      return "123456";
    },
  }),
});

// Cognito Identity Pool (for browser/mobile apps)
const clientCognito = new S3Client({
  region: "us-east-1",
  credentials: fromCognitoIdentityPool({
    identityPoolId: "us-east-1:12345678-1234-1234-1234-123456789012",
    logins: {
      "accounts.google.com": googleIdToken, // token obtained from Google sign-in
    },
  }),
});

// Assume a role with temporary credentials
const clientAssumeRole = new S3Client({
  region: "us-west-2",
  credentials: fromTemporaryCredentials({
    params: {
      RoleArn: "arn:aws:iam::123456789012:role/MyRole",
      RoleSessionName: "my-session",
      DurationSeconds: 3600,
    },
  }),
});

// EC2 Instance Metadata (for EC2 instances)
const clientEC2 = new S3Client({
  region: "us-west-2",
  credentials: fromInstanceMetadata({ maxRetries: 3 }),
});

// ECS Container Metadata (for ECS tasks)
const clientECS = new S3Client({
  region: "us-west-2",
  credentials: fromContainerMetadata(),
});

// Web Identity Federation via token file (e.g. EKS IRSA); reads
// AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN from the environment
const clientWebIdentity = new S3Client({
  region: "us-west-2",
  credentials: fromTokenFile(),
});

// Custom credential function with refresh
const clientCustom = new S3Client({
  region: "us-west-2",
  credentials: async () => ({
    accessKeyId: "...",
    secretAccessKey: "...",
    sessionToken: "...",
    expiration: new Date(Date.now() + 3600000), // 1 hour
  }),
});
```

## Middleware Stack

Add custom logic to the request/response lifecycle for logging, metrics, headers, and transformations.
```javascript
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-west-2" });

// Add middleware to log requests and responses
client.middlewareStack.add(
  (next, context) => async (args) => {
    console.log("Request:", {
      clientName: context.clientName,
      commandName: context.commandName,
      input: args.input,
    });
    const startTime = Date.now();
    const result = await next(args);
    console.log("Response:", {
      duration: Date.now() - startTime,
      statusCode: result.response.statusCode,
      requestId: result.output.$metadata.requestId,
    });
    return result;
  },
  {
    step: "build",
    name: "LoggingMiddleware",
    override: true,
  }
);

// Add custom headers
client.middlewareStack.add(
  (next) => async (args) => {
    args.request.headers["x-custom-header"] = "my-value";
    args.request.headers["x-correlation-id"] = crypto.randomUUID(); // global crypto: Node 19+/browsers
    return next(args);
  },
  {
    step: "build",
    name: "CustomHeadersMiddleware",
    priority: "high",
    override: true,
  }
);

// Middleware steps: initialize -> serialize -> build -> finalizeRequest -> deserialize
await client.send(new ListBucketsCommand({}));
```

## Error Handling

Handle AWS service errors with proper type checking and metadata access.
```javascript
import { S3, S3ServiceException, NoSuchKey } from "@aws-sdk/client-s3";

const s3 = new S3({ region: "us-west-2" });

// SDK errors carry $metadata; `e instanceof S3ServiceException` also works for S3
function isServiceError(e) {
  return !!e?.$metadata;
}

try {
  await s3.getObject({ Bucket: "my-bucket", Key: "my-key" });
} catch (e) {
  if (!isServiceError(e)) {
    throw e; // Re-throw non-SDK errors
  }

  // Access error metadata
  console.log("Request ID:", e.$metadata.requestId);
  console.log("HTTP Status:", e.$metadata.httpStatusCode);
  console.log("Error Name:", e.name);
  console.log("Error Message:", e.message);

  // Handle specific error types by name
  switch (e.name) {
    case "NoSuchBucket":
      console.log("Bucket does not exist");
      break;
    case "NoSuchKey":
      console.log("Object key not found");
      break;
    case "AccessDenied":
      console.log("Permission denied");
      break;
    default:
      console.log("Unexpected error:", e.name);
  }

  // Or use instanceof (works with Symbol.hasInstance)
  if (e instanceof NoSuchKey) {
    console.log("Key not found");
  }

  // Access the raw response for debugging
  if (e.$response) {
    console.log("Response headers:", e.$response.headers);
  }
}
```

## Paginators

Use async iterators to efficiently process paginated API responses.
```javascript
import {
  S3Client,
  paginateListObjectsV2,
  ListObjectsV2Command,
} from "@aws-sdk/client-s3";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, paginateScan } from "@aws-sdk/lib-dynamodb";

const client = new S3Client({ region: "us-west-2" });

// Recommended: use the paginator helper
async function listAllObjects(bucket, prefix) {
  const objects = [];
  for await (const page of paginateListObjectsV2(
    { client, pageSize: 1000 },
    { Bucket: bucket, Prefix: prefix }
  )) {
    objects.push(...(page.Contents || []));
    console.log(`Fetched ${page.Contents?.length || 0} objects`);
  }
  return objects;
}

const allObjects = await listAllObjects("my-bucket", "data/");
console.log(`Total objects: ${allObjects.length}`);

// Manual pagination with a continuation token
async function listObjectsManual(bucket) {
  let continuationToken;
  let totalCount = 0;
  do {
    const response = await client.send(
      new ListObjectsV2Command({
        Bucket: bucket,
        MaxKeys: 1000,
        ContinuationToken: continuationToken,
      })
    );
    totalCount += response.Contents?.length || 0;
    continuationToken = response.NextContinuationToken;
  } while (continuationToken);
  return totalCount;
}

// DynamoDB Scan pagination
const ddbClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

for await (const page of paginateScan(
  { client: ddbClient },
  { TableName: "my-table", Limit: 100 }
)) {
  console.log("Items:", page.Items);
}
```

## Streams and Response Handling

Handle streaming responses properly to avoid memory issues and socket exhaustion.
```javascript
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream, createReadStream } from "fs";
import { pipeline } from "stream/promises";

const client = new S3Client({ region: "us-west-2" });

// Get an object and consume the stream (IMPORTANT: always consume the stream).
// NOTE: a response body can be read only once - pick ONE of the options below.
const response = await client.send(
  new GetObjectCommand({ Bucket: "my-bucket", Key: "my-file.txt" })
);

// Option 1: Transform to string (for small files)
const bodyString = await response.Body.transformToString();

// Option 2: Transform to a byte array
// const bodyBytes = await response.Body.transformToByteArray();

// Option 3: Stream to a file (for large files)
// const writeStream = createWriteStream("/tmp/downloaded-file.txt");
// await pipeline(response.Body, writeStream);

// Option 4: Process stream chunks
// const chunks = [];
// for await (const chunk of response.Body) {
//   chunks.push(chunk);
// }
// const fullBody = Buffer.concat(chunks);

// Upload from a stream
const uploadResponse = await client.send(
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "uploaded-file.txt",
    Body: createReadStream("/path/to/local-file.txt"),
    ContentType: "text/plain",
  })
);

// If you don't need the body, destroy the stream to free the socket
const headResponse = await client.send(
  new GetObjectCommand({ Bucket: "my-bucket", Key: "my-file.txt" })
);
headResponse.Body.destroy(); // Free the socket (Node.js streams)
```

## Abort Controller

Cancel in-flight requests using the AbortController interface.
```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-west-2" });

// Basic abort example
const controller = new AbortController();

// Abort after 5 seconds
const timeoutId = setTimeout(() => controller.abort(), 5000);

try {
  const result = await client.send(
    new PutObjectCommand({
      Bucket: "my-bucket",
      Key: "large-file.bin",
      Body: largeBuffer, // a Buffer prepared elsewhere
    }),
    { abortSignal: controller.signal }
  );
  clearTimeout(timeoutId);
  console.log("Upload complete:", result);
} catch (e) {
  if (e.name === "AbortError") {
    console.log("Request was aborted");
  } else {
    throw e;
  }
}

// User-triggered abort in the browser
// (Bucket, Key, and fileInput are defined elsewhere in the page)
const uploadBtn = document.getElementById("upload");
const cancelBtn = document.getElementById("cancel");
let abortController;

uploadBtn.addEventListener("click", async () => {
  abortController = new AbortController();
  try {
    await client.send(
      new PutObjectCommand({ Bucket, Key, Body: fileInput.files[0] }),
      { abortSignal: abortController.signal }
    );
    console.log("Upload successful");
  } catch (e) {
    if (e.name === "AbortError") {
      console.log("Upload cancelled by user");
    }
  }
});

cancelBtn.addEventListener("click", () => {
  abortController?.abort();
});
```

## DynamoDB Document Client

Simplify DynamoDB operations with native JavaScript types instead of AttributeValue syntax.
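To make concrete what the document client abstracts away, compare the same item in low-level AttributeValue form and in native form. The toy marshaller below handles only the three types shown and is purely illustrative; the real conversion is performed by `marshall`/`unmarshall` from `@aws-sdk/util-dynamodb` (and automatically by the document client):

```javascript
// Low-level DynamoDBClient items use explicit AttributeValue type tags:
const lowLevelItem = {
  userId: { S: "user-123" },                       // S = string
  age: { N: "30" },                                // N = number, transported as a string
  tags: { L: [{ S: "admin" }, { S: "active" }] },  // L = list
};

// The document client accepts plain JavaScript values instead:
const documentItem = {
  userId: "user-123",
  age: 30,
  tags: ["admin", "active"],
};

// Toy marshaller for just these three types, to illustrate the mapping
function toyMarshall(value) {
  if (typeof value === "string") return { S: value };
  if (typeof value === "number") return { N: String(value) };
  if (Array.isArray(value)) return { L: value.map(toyMarshall) };
  throw new Error("unsupported type in toy marshaller");
}

const marshalled = Object.fromEntries(
  Object.entries(documentItem).map(([k, v]) => [k, toyMarshall(v)])
);
// `marshalled` now has the same shape as `lowLevelItem`
```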
```javascript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  PutCommand,
  GetCommand,
  QueryCommand,
  UpdateCommand,
  BatchWriteCommand,
} from "@aws-sdk/lib-dynamodb";

// Create a document client with configuration
const client = DynamoDBDocumentClient.from(new DynamoDBClient({}), {
  marshallOptions: {
    removeUndefinedValues: true, // Remove undefined values
    convertEmptyValues: false, // Don't convert empty strings/sets to NULL
    convertClassInstanceToMap: true, // Convert class instances to maps
  },
  unmarshallOptions: {
    wrapNumbers: false, // Keep as JavaScript numbers (or true for NumberValue)
  },
});

// Put an item with native types
await client.send(
  new PutCommand({
    TableName: "Users",
    Item: {
      userId: "user-123",
      name: "John Doe",
      email: "john@example.com",
      age: 30,
      tags: ["admin", "active"],
      metadata: { lastLogin: new Date().toISOString() },
    },
    ConditionExpression: "attribute_not_exists(userId)",
  })
);

// Get an item
const { Item } = await client.send(
  new GetCommand({
    TableName: "Users",
    Key: { userId: "user-123" },
  })
);
console.log(Item.name); // "John Doe"

// Query with expressions
const { Items } = await client.send(
  new QueryCommand({
    TableName: "Orders",
    KeyConditionExpression: "userId = :uid AND orderDate > :date",
    FilterExpression: "orderTotal > :min",
    ExpressionAttributeValues: {
      ":uid": "user-123",
      ":date": "2024-01-01",
      ":min": 100,
    },
    Limit: 20,
  })
);

// Update an item
await client.send(
  new UpdateCommand({
    TableName: "Users",
    Key: { userId: "user-123" },
    UpdateExpression: "SET #name = :name, updatedAt = :now ADD loginCount :inc",
    ExpressionAttributeNames: { "#name": "name" },
    ExpressionAttributeValues: {
      ":name": "Jane Doe",
      ":now": new Date().toISOString(),
      ":inc": 1,
    },
    ReturnValues: "ALL_NEW",
  })
);

// Batch write
await client.send(
  new BatchWriteCommand({
    RequestItems: {
      Users: [
        { PutRequest: { Item: { userId: "user-1", name: "User 1" } } },
        { PutRequest: { Item: { userId: "user-2", name: "User 2" } } },
        { DeleteRequest: { Key: { userId: "user-old" } } },
      ],
    },
  })
);
```

## S3 Multipart Upload

Use the Upload class for efficient large file uploads with automatic multipart handling.

```javascript
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";
import { createReadStream } from "fs";

const client = new S3Client({ region: "us-west-2" });

// Upload a large file with progress tracking
async function uploadLargeFile(bucket, key, filePath) {
  const upload = new Upload({
    client,
    params: {
      Bucket: bucket,
      Key: key,
      Body: createReadStream(filePath),
      ContentType: "application/octet-stream",
    },
    queueSize: 4, // Concurrent part uploads
    partSize: 10 * 1024 * 1024, // 10MB parts (minimum 5MB)
    leavePartsOnError: false, // Clean up on failure
  });

  upload.on("httpUploadProgress", (progress) => {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    console.log(`Upload progress: ${percent}% (${progress.loaded}/${progress.total} bytes)`);
  });

  try {
    const result = await upload.done();
    console.log("Upload complete:", result.Location);
    return result;
  } catch (e) {
    console.error("Upload failed:", e);
    throw e;
  }
}

// Upload from a stream of unknown size
async function uploadStream(bucket, key, dataStream) {
  const upload = new Upload({
    client,
    params: {
      Bucket: bucket,
      Key: key,
      Body: dataStream,
    },
    queueSize: 4,
    partSize: 5 * 1024 * 1024,
  });
  return upload.done();
}

// Browser upload with abort support
async function browserUpload(file, onProgress, abortSignal) {
  const upload = new Upload({
    client: new S3Client({ region: "us-west-2" }),
    params: {
      Bucket: "my-bucket",
      Key: file.name,
      Body: file,
      ContentType: file.type,
    },
  });

  upload.on("httpUploadProgress", onProgress);
  abortSignal?.addEventListener("abort", () => {
    upload.abort();
  });

  return upload.done();
}
```

## S3 Presigned URLs

Generate time-limited signed URLs for secure object access without exposing credentials.
```javascript
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "us-west-2" });

// Presigned URL for download (GET)
const downloadUrl = await getSignedUrl(
  client,
  new GetObjectCommand({
    Bucket: "my-bucket",
    Key: "private-file.pdf",
  }),
  { expiresIn: 3600 } // 1 hour (default: 900 seconds)
);
console.log("Download URL:", downloadUrl);

// Presigned URL for upload (PUT)
const uploadUrl = await getSignedUrl(
  client,
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "user-uploads/file.txt",
    ContentType: "text/plain",
  }),
  {
    expiresIn: 600, // 10 minutes
    signableHeaders: new Set(["content-type"]), // Enforce content-type
  }
);

// Client-side upload using the presigned URL (fileContent prepared elsewhere)
const uploadResponse = await fetch(uploadUrl, {
  method: "PUT",
  body: fileContent,
  headers: { "Content-Type": "text/plain" },
});

// Presigned URL with checksum validation (computedSha256 calculated elsewhere)
const uploadWithChecksum = await getSignedUrl(
  client,
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "important-file.bin",
    ChecksumSHA256: computedSha256,
  }),
  {
    expiresIn: 600,
    unhoistableHeaders: new Set(["x-amz-checksum-sha256"]),
  }
);

// Presigned URL with server-side encryption
const uploadWithSSE = await getSignedUrl(
  client,
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "encrypted-file.txt",
    ServerSideEncryption: "aws:kms",
    SSEKMSKeyId: "arn:aws:kms:us-west-2:123456789012:key/abcd-1234",
  }),
  {
    hoistableHeaders: new Set([
      "x-amz-server-side-encryption",
      "x-amz-server-side-encryption-aws-kms-key-id",
    ]),
  }
);
```

## Waiters

Wait for resources to reach a desired state before proceeding.
```javascript
import {
  S3Client,
  CreateBucketCommand,
  waitUntilBucketExists,
  waitUntilObjectExists,
} from "@aws-sdk/client-s3";
import {
  EC2Client,
  RunInstancesCommand,
  waitUntilInstanceRunning,
  waitUntilInstanceStatusOk,
} from "@aws-sdk/client-ec2";

// Wait for an S3 bucket to exist
const s3Client = new S3Client({ region: "us-west-2" });
const bucketName = `my-bucket-${Date.now()}`;

await s3Client.send(new CreateBucketCommand({ Bucket: bucketName }));
await waitUntilBucketExists(
  { client: s3Client, maxWaitTime: 60 }, // max 60 seconds
  { Bucket: bucketName }
);
console.log("Bucket is ready");

// Wait for an S3 object
await waitUntilObjectExists(
  { client: s3Client, maxWaitTime: 120, minDelay: 5, maxDelay: 30 },
  { Bucket: bucketName, Key: "my-file.txt" }
);

// Wait for an EC2 instance
const ec2Client = new EC2Client({ region: "us-west-2" });
const { Instances } = await ec2Client.send(
  new RunInstancesCommand({
    ImageId: "ami-0123456789abcdef0",
    InstanceType: "t3.micro",
    MinCount: 1,
    MaxCount: 1,
  })
);
const instanceId = Instances[0].InstanceId;

// Wait for the instance to be running
await waitUntilInstanceRunning(
  { client: ec2Client, maxWaitTime: 300 },
  { InstanceIds: [instanceId] }
);
console.log("Instance is running");

// Wait for instance status checks to pass
await waitUntilInstanceStatusOk(
  { client: ec2Client, maxWaitTime: 600 },
  { InstanceIds: [instanceId] }
);
console.log("Instance is healthy");
```

## Lambda Best Practices

Optimize SDK usage in AWS Lambda functions for cold-start performance and connection reuse.
```javascript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

// Initialize clients OUTSIDE the handler for container reuse
const ddbClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const s3Client = new S3Client({});

// Handler function
export const handler = async (event) => {
  try {
    // Make API calls INSIDE the handler (signs at execution time)
    const { Item } = await ddbClient.send(
      new GetCommand({
        TableName: process.env.TABLE_NAME,
        Key: { id: event.id },
      })
    );

    if (!Item) {
      return {
        statusCode: 404,
        body: JSON.stringify({ error: "Item not found" }),
      };
    }

    // Process the S3 object if needed
    if (Item.s3Key) {
      const s3Response = await s3Client.send(
        new GetObjectCommand({
          Bucket: process.env.BUCKET_NAME,
          Key: Item.s3Key,
        })
      );
      Item.content = await s3Response.Body.transformToString();
    }

    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(Item),
    };
  } catch (error) {
    console.error("Error:", error);
    return {
      statusCode: error.$metadata?.httpStatusCode || 500,
      body: JSON.stringify({
        error: error.message,
        requestId: error.$metadata?.requestId,
      }),
    };
  }
};
```

## SQS Operations

Send and receive messages with the SQS client, including batch operations.
```javascript
import {
  SQSClient,
  SendMessageCommand,
  SendMessageBatchCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const client = new SQSClient({ region: "us-west-2" });
const queueUrl = "https://sqs.us-west-2.amazonaws.com/123456789012/my-queue";

// Send a single message
await client.send(
  new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify({ orderId: "12345", action: "process" }),
    MessageAttributes: {
      Priority: { DataType: "String", StringValue: "high" },
    },
    DelaySeconds: 0,
  })
);

// Send a batch of messages
await client.send(
  new SendMessageBatchCommand({
    QueueUrl: queueUrl,
    Entries: [
      { Id: "1", MessageBody: JSON.stringify({ id: 1 }) },
      { Id: "2", MessageBody: JSON.stringify({ id: 2 }) },
      { Id: "3", MessageBody: JSON.stringify({ id: 3 }), DelaySeconds: 10 },
    ],
  })
);

// Receive and process messages
async function processMessages() {
  const { Messages = [] } = await client.send(
    new ReceiveMessageCommand({
      QueueUrl: queueUrl,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20, // Long polling
      MessageAttributeNames: ["All"],
      AttributeNames: ["All"],
    })
  );

  for (const message of Messages) {
    try {
      const body = JSON.parse(message.Body);
      console.log("Processing:", body);
      // Process the message...

      // Delete after successful processing
      await client.send(
        new DeleteMessageCommand({
          QueueUrl: queueUrl,
          ReceiptHandle: message.ReceiptHandle,
        })
      );
    } catch (error) {
      console.error("Failed to process message:", message.MessageId, error);
    }
  }

  return Messages.length;
}

// Continuous polling loop
while (true) {
  const processed = await processMessages();
  if (processed === 0) {
    await new Promise((r) => setTimeout(r, 1000));
  }
}
```

## Summary

The AWS SDK for JavaScript v3 is designed for building cloud-native applications that interact with AWS services.
Its modular architecture enables developers to import only needed functionality, resulting in smaller bundle sizes ideal for serverless deployments and browser applications. The command pattern with bare-bones clients provides optimal tree-shaking, while aggregated clients offer convenience for rapid development.

Key integration patterns include using the middleware stack for cross-cutting concerns like logging and metrics, credential providers for secure authentication across different environments, and waiters for handling eventual consistency. Common use cases include serverless applications using Lambda with DynamoDB and S3, web applications with Cognito authentication and presigned URLs for secure file access, data processing pipelines with SQS message queues, and infrastructure automation with EC2 and other compute services. The SDK's async iterator support for pagination, streaming response handling, and AbortController integration make it well suited for modern JavaScript applications.

For production deployments, best practices include initializing clients outside Lambda handlers for connection reuse, using appropriate retry strategies for resilience, properly consuming streams to prevent socket exhaustion, and implementing comprehensive error handling with access to request metadata for debugging.
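One housekeeping practice related to the socket-exhaustion point above: every v3 client exposes a `destroy()` method that shuts down its request handler and releases pooled keep-alive sockets. This matters in long-lived processes that create clients dynamically (it is unnecessary for a handful of module-level clients). A brief sketch:

```javascript
import { S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-west-2" });

// ... use the client for the lifetime of the task ...

// When a dynamically created client is no longer needed, release the
// keep-alive sockets held by its request handler:
client.destroy();
```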