# AsyncAws S3 Client

## Introduction

AsyncAws S3 Client is a high-performance, asynchronous PHP library for interacting with Amazon S3 (Simple Storage Service). Built on the AsyncAws Core framework, the client provides a modern PHP interface for managing buckets, objects, and S3 operations with non-blocking I/O. The library is designed to be lightweight, efficient, and compatible with the AWS S3 API.

The client offers comprehensive support for S3 operations, including object uploads and downloads, bucket management, multipart uploads, CORS configuration, tagging, access control lists (ACLs), encryption, and advanced features such as object versioning and lifecycle management. It supports both synchronous and asynchronous execution patterns, making it suitable for high-throughput applications that need to run many S3 operations concurrently without blocking.

## API Documentation and Examples

### Creating an S3 Client

Initialize the S3 client with AWS credentials and region configuration to connect to Amazon S3.

```php
<?php

use AsyncAws\S3\S3Client;

// Explicit credentials
$s3 = new S3Client([
    'region' => 'us-east-1',
    'accessKeyId' => 'YOUR_ACCESS_KEY_ID',
    'accessKeySecret' => 'YOUR_SECRET_ACCESS_KEY',
]);

// Or use the default credential chain (environment variables, IAM roles, etc.)
$s3 = new S3Client([
    'region' => 'eu-west-1',
]);

// With a custom endpoint (e.g., MinIO or LocalStack)
$s3 = new S3Client([
    'endpoint' => 'http://localhost:9000',
    'region' => 'us-east-1',
    'accessKeyId' => 'minioadmin',
    'accessKeySecret' => 'minioadmin',
]);
```

### Uploading Objects to S3

Upload files or string content to S3 buckets, with support for metadata, encryption, and storage classes.

```php
<?php

use AsyncAws\S3\Enum\ServerSideEncryption;
use AsyncAws\S3\Enum\StorageClass;
use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// Upload string content
$result = $s3->putObject([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.txt',
    'Body' => 'Hello, S3! This is my file content.',
    'ContentType' => 'text/plain',
]);

// Wait for the upload to complete
$result->resolve();

echo "Upload status: " . $result->info()['status'] . "\n";
echo "ETag: " . $result->getETag() . "\n";

// Upload a file with metadata and encryption
$fileContent = file_get_contents('/path/to/local/file.pdf');

$result = $s3->putObject([
    'Bucket' => 'my-bucket',
    'Key' => 'files/document.pdf',
    'Body' => $fileContent,
    'ContentType' => 'application/pdf',
    'Metadata' => [
        'author' => 'John Doe',
        'department' => 'Engineering',
        'version' => '1.0',
    ],
    'ServerSideEncryption' => ServerSideEncryption::AES256,
    'StorageClass' => StorageClass::STANDARD_IA,
    'CacheControl' => 'max-age=3600',
]);
$result->resolve();

// Upload from a stream (memory-efficient for large files)
$stream = fopen('/path/to/large-file.mp4', 'r');

$result = $s3->putObject([
    'Bucket' => 'my-bucket',
    'Key' => 'videos/movie.mp4',
    'Body' => $stream,
    'ContentType' => 'video/mp4',
]);
$result->resolve();

fclose($stream);
```

### Downloading Objects from S3

Retrieve objects from S3, with conditional requests and range support for partial downloads.

```php
<?php

use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// Download object content as a string
$result = $s3->getObject([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.txt',
]);

$content = $result->getBody()->getContentAsString();
echo "File content: " . $content . "\n";
echo "Content-Type: " . $result->getContentType() . "\n";
echo "Content-Length: " . $result->getContentLength() . "\n";

// Download with metadata inspection
$result = $s3->getObject([
    'Bucket' => 'my-bucket',
    'Key' => 'files/document.pdf',
]);

$metadata = $result->getMetadata();
echo "Author: " . ($metadata['author'] ?? 'Unknown') . "\n";
echo "Last Modified: " . $result->getLastModified()->format('Y-m-d H:i:s') . "\n";
echo "ETag: " . $result->getETag() . "\n";

// Save to a local file
$result = $s3->getObject([
    'Bucket' => 'my-bucket',
    'Key' => 'videos/movie.mp4',
]);
file_put_contents('/path/to/download/movie.mp4', $result->getBody()->getContentAsString());

// Conditional download (only if modified since a given date)
$result = $s3->getObject([
    'Bucket' => 'my-bucket',
    'Key' => 'data.json',
    'IfModifiedSince' => new \DateTimeImmutable('2024-01-01'),
]);

try {
    $content = $result->getBody()->getContentAsString();
    echo "File was modified, downloaded new version\n";
} catch (\AsyncAws\Core\Exception\Http\ClientException $e) {
    if ($e->getCode() === 304) {
        echo "File not modified since last check\n";
    }
}
```

### Creating and Managing Buckets

Create S3 buckets with location constraints, and check bucket existence.

```php
<?php

use AsyncAws\S3\Enum\BucketLocationConstraint;
use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\CreateBucketConfiguration;

$s3 = new S3Client();

// Create a bucket in the client's default region
$result = $s3->createBucket([
    'Bucket' => 'my-new-bucket',
]);
$result->resolve();
echo "Bucket created: " . $result->getLocation() . "\n";

// Create a bucket in a specific region
$result = $s3->createBucket([
    'Bucket' => 'my-eu-bucket',
    'CreateBucketConfiguration' => new CreateBucketConfiguration([
        'LocationConstraint' => BucketLocationConstraint::EU_WEST_1,
    ]),
]);
$result->resolve();

// Check whether a bucket exists, using a waiter
$waiter = $s3->bucketExists([
    'Bucket' => 'my-new-bucket',
]);

if ($waiter->isSuccess()) {
    echo "Bucket exists and is accessible\n";
} else {
    echo "Bucket does not exist or is not accessible\n";
}

// Wait for a bucket to exist (with timeout)
$waiter = $s3->bucketExists([
    'Bucket' => 'my-new-bucket',
]);
$waiter->wait(
    null, // Default timeout
    3     // Max attempts
);

// List all buckets
$result = $s3->listBuckets();

foreach ($result->getBuckets() as $bucket) {
    echo "Bucket: " . $bucket->getName()
        . " (Created: " . $bucket->getCreationDate()->format('Y-m-d') . ")\n";
}
```

### Listing Objects in a Bucket

List and iterate through objects in a bucket, with prefix filtering and pagination support.

```php
<?php

use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// List all objects in a bucket
$result = $s3->listObjectsV2([
    'Bucket' => 'my-bucket',
]);

foreach ($result->getContents() as $object) {
    echo "Object: " . $object->getKey() . "\n";
    echo "  Size: " . $object->getSize() . " bytes\n";
    echo "  Last Modified: " . $object->getLastModified()->format('Y-m-d H:i:s') . "\n";
    echo "  ETag: " . $object->getETag() . "\n";
}

// List with a prefix filter (like a folder listing)
$result = $s3->listObjectsV2([
    'Bucket' => 'my-bucket',
    'Prefix' => 'documents/',
    'MaxKeys' => 100,
]);

foreach ($result->getContents() as $object) {
    echo $object->getKey() . "\n";
}

// List with a delimiter for a folder-like structure
$result = $s3->listObjectsV2([
    'Bucket' => 'my-bucket',
    'Prefix' => 'images/',
    'Delimiter' => '/',
]);

echo "Files:\n";
foreach ($result->getContents() as $object) {
    echo "  " . $object->getKey() . "\n";
}

echo "Folders:\n";
foreach ($result->getCommonPrefixes() as $prefix) {
    echo "  " . $prefix->getPrefix() . "\n";
}

// Paginated listing (continuation tokens are handled automatically)
$result = $s3->listObjectsV2([
    'Bucket' => 'my-bucket',
    'MaxKeys' => 10,
]);

$count = 0;
foreach ($result as $object) {
    $count++;
    echo $count . ". " . $object->getKey() . "\n";
}
echo "Total objects: " . $count . "\n";
```

### Deleting Objects

Remove single or multiple objects from S3 buckets efficiently.

```php
<?php

use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\Delete;
use AsyncAws\S3\ValueObject\ObjectIdentifier;

$s3 = new S3Client();

// Delete a single object
$result = $s3->deleteObject([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/old-file.txt',
]);
$result->resolve();
echo "Delete status: " . $result->info()['status'] . "\n";

// Delete multiple objects (batch delete)
$result = $s3->deleteObjects([
    'Bucket' => 'my-bucket',
    'Delete' => new Delete([
        'Objects' => [
            new ObjectIdentifier(['Key' => 'file1.txt']),
            new ObjectIdentifier(['Key' => 'file2.txt']),
            new ObjectIdentifier(['Key' => 'folder/file3.txt']),
        ],
        'Quiet' => false, // Return info about each deletion
    ]),
]);

foreach ($result->getDeleted() as $deleted) {
    echo "Deleted: " . $deleted->getKey() . "\n";
}

foreach ($result->getErrors() as $error) {
    echo "Error deleting " . $error->getKey() . ": " . $error->getMessage() . "\n";
}

// Delete a specific version (for versioned buckets)
$result = $s3->deleteObject([
    'Bucket' => 'my-versioned-bucket',
    'Key' => 'document.txt',
    'VersionId' => 'ABC123VERSION',
]);
$result->resolve();
```

### Copying Objects

Copy objects within a bucket or between buckets without downloading and re-uploading.

```php
<?php

use AsyncAws\S3\Enum\MetadataDirective;
use AsyncAws\S3\Enum\StorageClass;
use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// Copy within the same bucket
$result = $s3->copyObject([
    'Bucket' => 'my-bucket',
    'Key' => 'new-location/document.pdf',
    'CopySource' => 'my-bucket/old-location/document.pdf',
]);
$result->resolve();
echo "Copy ETag: " . $result->getCopyObjectResult()->getETag() . "\n";

// Copy between buckets with new metadata
$result = $s3->copyObject([
    'Bucket' => 'destination-bucket',
    'Key' => 'archived/report.pdf',
    'CopySource' => 'source-bucket/reports/2024/report.pdf',
    'MetadataDirective' => MetadataDirective::REPLACE,
    'Metadata' => [
        'archived-date' => date('Y-m-d'),
        'original-bucket' => 'source-bucket',
    ],
    'StorageClass' => StorageClass::GLACIER,
]);
$result->resolve();

// Copy with conditional requirements
$result = $s3->copyObject([
    'Bucket' => 'my-bucket',
    'Key' => 'backup/file.txt',
    'CopySource' => 'my-bucket/active/file.txt',
    'CopySourceIfModifiedSince' => new \DateTimeImmutable('-7 days'),
    'CopySourceIfMatch' => '"etag-value"',
]);

try {
    $result->resolve();
    echo "File copied successfully\n";
} catch (\AsyncAws\Core\Exception\Http\ClientException $e) {
    echo "Copy condition not met: " . $e->getMessage() . "\n";
}
```

### Multipart Upload for Large Files

Upload large files efficiently by splitting them into parts; parts can also be uploaded in parallel.

```php
<?php

use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\CompletedMultipartUpload;
use AsyncAws\S3\ValueObject\CompletedPart;

$s3 = new S3Client();

// Step 1: Initiate the multipart upload
$createResult = $s3->createMultipartUpload([
    'Bucket' => 'my-bucket',
    'Key' => 'large-files/video.mp4',
    'ContentType' => 'video/mp4',
]);

$uploadId = $createResult->getUploadId();
echo "Upload ID: " . $uploadId . "\n";

// Step 2: Upload parts (5 MB minimum per part, except the last part)
$filePath = '/path/to/large-video.mp4';
$fileSize = filesize($filePath);
$partSize = 10 * 1024 * 1024; // 10 MB parts

$parts = [];
$handle = fopen($filePath, 'r');
$partNumber = 1;

while (!feof($handle)) {
    $partData = fread($handle, $partSize);

    $uploadResult = $s3->uploadPart([
        'Bucket' => 'my-bucket',
        'Key' => 'large-files/video.mp4',
        'PartNumber' => $partNumber,
        'UploadId' => $uploadId,
        'Body' => $partData,
    ]);
    $uploadResult->resolve();

    $parts[] = new CompletedPart([
        'ETag' => $uploadResult->getETag(),
        'PartNumber' => $partNumber,
    ]);

    echo "Uploaded part " . $partNumber . " (ETag: " . $uploadResult->getETag() . ")\n";
    $partNumber++;
}
fclose($handle);

// Step 3: Complete the multipart upload
$completeResult = $s3->completeMultipartUpload([
    'Bucket' => 'my-bucket',
    'Key' => 'large-files/video.mp4',
    'UploadId' => $uploadId,
    'MultipartUpload' => new CompletedMultipartUpload([
        'Parts' => $parts,
    ]),
]);
$completeResult->resolve();

echo "Upload completed! ETag: " . $completeResult->getETag() . "\n";
echo "Location: " . $completeResult->getLocation() . "\n";

// Error handling: abort the multipart upload if something goes wrong
try {
    // ... upload parts ...
} catch (\Exception $e) {
    echo "Error during upload: " . $e->getMessage() . "\n";
    echo "Aborting multipart upload...\n";

    $abortResult = $s3->abortMultipartUpload([
        'Bucket' => 'my-bucket',
        'Key' => 'large-files/video.mp4',
        'UploadId' => $uploadId,
    ]);
    $abortResult->resolve();
    echo "Upload aborted\n";
}
```

### Object Tagging

Manage object tags for categorization, lifecycle rules, and access control.
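When tags are passed directly to `putObject`, S3 expects them as a URL-encoded query string (the `Tagging` option shown below). A small helper can build that string from a plain key/value array; `buildTaggingString` is our own illustrative name, not part of the library:

```php
<?php

// Illustrative helper (not part of AsyncAws): build the URL-encoded
// `Tagging` string accepted by putObject from a key/value array.
// RFC 3986 encoding (spaces as %20) matches what S3 expects.
function buildTaggingString(array $tags): string
{
    return http_build_query($tags, '', '&', PHP_QUERY_RFC3986);
}

echo buildTaggingString(['Environment' => 'Production', 'Owner' => 'Team A']);
```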
```php
<?php

use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\Tag;
use AsyncAws\S3\ValueObject\Tagging;

$s3 = new S3Client();

// Set tags on an existing object (replaces any existing tag set)
$result = $s3->putObjectTagging([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.pdf',
    'Tagging' => new Tagging([
        'TagSet' => [
            new Tag(['Key' => 'Department', 'Value' => 'Finance']),
            new Tag(['Key' => 'Status', 'Value' => 'Active']),
            new Tag(['Key' => 'Year', 'Value' => '2024']),
        ],
    ]),
]);
$result->resolve();

// Get object tags
$result = $s3->getObjectTagging([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.pdf',
]);

foreach ($result->getTagSet() as $tag) {
    echo "Tag: " . $tag->getKey() . " = " . $tag->getValue() . "\n";
}

// Upload an object with tags
$result = $s3->putObject([
    'Bucket' => 'my-bucket',
    'Key' => 'new-file.txt',
    'Body' => 'File content',
    'Tagging' => 'Environment=Production&Owner=TeamA&CostCenter=CC123',
]);
$result->resolve();

// Delete all tags from an object
$result = $s3->deleteObjectTagging([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.pdf',
]);
$result->resolve();
echo "Tags removed\n";
```

### Checking Object Existence

Check whether an object exists without downloading its content, using HEAD requests.

```php
<?php

use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// HEAD request for object metadata
$result = $s3->headObject([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/file.txt',
]);

try {
    $result->resolve();

    echo "Object exists!\n";
    echo "Content-Type: " . $result->getContentType() . "\n";
    echo "Content-Length: " . $result->getContentLength() . " bytes\n";
    echo "Last Modified: " . $result->getLastModified()->format('Y-m-d H:i:s') . "\n";
    echo "ETag: " . $result->getETag() . "\n";

    $metadata = $result->getMetadata();
    echo "Custom Metadata:\n";
    foreach ($metadata as $key => $value) {
        echo "  $key: $value\n";
    }
} catch (\AsyncAws\Core\Exception\Http\ClientException $e) {
    if ($e->getCode() === 404) {
        echo "Object does not exist\n";
    } else {
        throw $e;
    }
}

// Use the object-existence waiter
$waiter = $s3->objectExists([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/file.txt',
]);

if ($waiter->isSuccess()) {
    echo "Object exists\n";
} else {
    echo "Object does not exist\n";
}

// Wait for an object to exist (useful after an async upload)
$s3->putObject([
    'Bucket' => 'my-bucket',
    'Key' => 'new-file.txt',
    'Body' => 'content',
]);

$waiter = $s3->objectExists([
    'Bucket' => 'my-bucket',
    'Key' => 'new-file.txt',
]);
$waiter->wait(null, 5); // Wait with 5 max attempts
echo "Object is now available\n";
```

### CORS Configuration

Configure Cross-Origin Resource Sharing (CORS) for browser-based access to S3 resources.

```php
<?php

use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\CORSConfiguration;
use AsyncAws\S3\ValueObject\CORSRule;

$s3 = new S3Client();

// Set the CORS configuration for a bucket
$result = $s3->putBucketCors([
    'Bucket' => 'my-bucket',
    'CORSConfiguration' => new CORSConfiguration([
        'CORSRules' => [
            new CORSRule([
                'AllowedHeaders' => ['*'],
                'AllowedMethods' => ['GET', 'HEAD', 'PUT', 'POST'],
                'AllowedOrigins' => ['https://example.com', 'https://app.example.com'],
                'ExposeHeaders' => ['ETag', 'x-amz-request-id'],
                'MaxAgeSeconds' => 3600,
            ]),
            new CORSRule([
                'AllowedMethods' => ['GET'],
                'AllowedOrigins' => ['*'],
                'MaxAgeSeconds' => 3000,
            ]),
        ],
    ]),
]);
$result->resolve();
echo "CORS configuration updated\n";

// Get the CORS configuration
$result = $s3->getBucketCors([
    'Bucket' => 'my-bucket',
]);

foreach ($result->getCORSRules() as $rule) {
    echo "Allowed Origins: " . implode(', ', $rule->getAllowedOrigins()) . "\n";
    echo "Allowed Methods: " . implode(', ', $rule->getAllowedMethods()) . "\n";
    echo "Max Age: " . $rule->getMaxAgeSeconds() . " seconds\n";
    echo "---\n";
}

// Delete the CORS configuration
$result = $s3->deleteBucketCors([
    'Bucket' => 'my-bucket',
]);
$result->resolve();
echo "CORS configuration removed\n";
```

### Object Access Control Lists (ACLs)

Manage object-level permissions using access control lists for fine-grained access control.

```php
<?php

use AsyncAws\S3\Enum\ObjectCannedACL;
use AsyncAws\S3\Enum\Permission;
use AsyncAws\S3\Enum\Type;
use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\AccessControlPolicy;
use AsyncAws\S3\ValueObject\Grant;
use AsyncAws\S3\ValueObject\Grantee;
use AsyncAws\S3\ValueObject\Owner;

$s3 = new S3Client();

// Upload a publicly readable object
$result = $s3->putObject([
    'Bucket' => 'my-bucket',
    'Key' => 'public/image.jpg',
    'Body' => file_get_contents('/path/to/image.jpg'),
    'ACL' => ObjectCannedACL::PUBLIC_READ,
]);
$result->resolve();

// Apply a canned ACL to an existing object
$result = $s3->putObjectAcl([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.pdf',
    'ACL' => ObjectCannedACL::PRIVATE,
]);
$result->resolve();

// Get an object's ACL
$result = $s3->getObjectAcl([
    'Bucket' => 'my-bucket',
    'Key' => 'documents/report.pdf',
]);

$owner = $result->getOwner();
echo "Owner: " . $owner->getDisplayName() . " (ID: " . $owner->getID() . ")\n";

echo "Grants:\n";
foreach ($result->getGrants() as $grant) {
    $grantee = $grant->getGrantee();
    echo "  Permission: " . $grant->getPermission() . "\n";
    echo "  Grantee Type: " . $grantee->getType() . "\n";
    if ($grantee->getDisplayName()) {
        echo "  Grantee: " . $grantee->getDisplayName() . "\n";
    }
}

// Set a custom ACL with specific grants
$result = $s3->putObjectAcl([
    'Bucket' => 'my-bucket',
    'Key' => 'shared/document.txt',
    'AccessControlPolicy' => new AccessControlPolicy([
        'Owner' => new Owner([
            'ID' => 'owner-canonical-id',
        ]),
        'Grants' => [
            new Grant([
                'Grantee' => new Grantee([
                    'Type' => Type::CANONICAL_USER,
                    'ID' => 'user-canonical-id',
                ]),
                'Permission' => Permission::READ,
            ]),
        ],
    ]),
]);
$result->resolve();
```

### Bucket Tagging

Apply tags to buckets for cost allocation, organization, and access control policies.
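AWS documents limits on bucket tags: at most 50 tags per bucket, keys up to 128 characters, and values up to 256 characters. A pre-flight check along these lines can surface violations before the API call; `validateBucketTags` is an illustrative helper of ours, not part of AsyncAws:

```php
<?php

// Illustrative pre-flight check (not part of AsyncAws). S3 allows at
// most 50 tags per bucket, keys up to 128 characters, and values up to
// 256 characters; byte length is used here as an approximation of the
// documented character limits.
function validateBucketTags(array $tags): array
{
    $errors = [];
    if (count($tags) > 50) {
        $errors[] = 'A bucket may carry at most 50 tags.';
    }
    foreach ($tags as $key => $value) {
        if (strlen((string) $key) > 128) {
            $errors[] = "Tag key too long: {$key}";
        }
        if (strlen((string) $value) > 256) {
            $errors[] = "Tag value too long for key: {$key}";
        }
    }
    return $errors;
}
```

An empty return value means the tag set is safe to send to `putBucketTagging`.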
```php
<?php

use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\Tag;
use AsyncAws\S3\ValueObject\Tagging;

$s3 = new S3Client();

// Set bucket tags (replaces any existing tag set)
$result = $s3->putBucketTagging([
    'Bucket' => 'my-bucket',
    'Tagging' => new Tagging([
        'TagSet' => [
            new Tag(['Key' => 'Environment', 'Value' => 'Production']),
            new Tag(['Key' => 'Department', 'Value' => 'Engineering']),
            new Tag(['Key' => 'CostCenter', 'Value' => 'CC-12345']),
            new Tag(['Key' => 'Project', 'Value' => 'WebApp']),
        ],
    ]),
]);
$result->resolve();
echo "Bucket tags updated\n";
```

### List Multipart Uploads

Monitor and manage in-progress multipart uploads to identify incomplete uploads.

```php
<?php

use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// List all in-progress multipart uploads
$result = $s3->listMultipartUploads([
    'Bucket' => 'my-bucket',
]);

foreach ($result->getUploads() as $upload) {
    echo "Key: " . $upload->getKey() . "\n";
    echo "Upload ID: " . $upload->getUploadId() . "\n";
    echo "Initiated: " . $upload->getInitiated()->format('Y-m-d H:i:s') . "\n";
    echo "Initiator: " . $upload->getInitiator()->getDisplayName() . "\n";
    echo "---\n";
}

// List with a prefix filter
$result = $s3->listMultipartUploads([
    'Bucket' => 'my-bucket',
    'Prefix' => 'large-files/',
    'MaxUploads' => 50,
]);

foreach ($result->getUploads() as $upload) {
    echo "Incomplete upload: " . $upload->getKey() . " (ID: " . $upload->getUploadId() . ")\n";
}
```

## Summary and Integration Patterns

The AsyncAws S3 Client is designed for modern PHP applications that need efficient, non-blocking S3 operations. The library excels in scenarios involving high-throughput data processing, concurrent file uploads and downloads, serverless architectures, and applications that must stay responsive while performing S3 operations. Its asynchronous design lets developers initiate multiple S3 operations at once and handle results as they complete, maximizing resource utilization and minimizing latency.
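The deferred-resolution pattern behind this can be sketched as follows (bucket and key names are illustrative): each `putObject` call starts its HTTP request immediately, and `resolve()` only blocks until that particular request finishes, so the uploads proceed concurrently between the two loops.

```php
<?php

use AsyncAws\S3\S3Client;

$s3 = new S3Client();

// Start several uploads; none of these calls blocks on the network.
$pending = [];
foreach (['a.txt', 'b.txt', 'c.txt'] as $key) {
    $pending[$key] = $s3->putObject([
        'Bucket' => 'my-bucket',
        'Key' => 'batch/' . $key,
        'Body' => 'content of ' . $key,
    ]);
}

// Resolve afterwards: the requests have been running concurrently.
foreach ($pending as $key => $result) {
    $result->resolve();
    echo "Uploaded {$key}: " . $result->info()['status'] . "\n";
}
```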
Common integration patterns include using the client in queue-based systems for background file processing, implementing media upload and download services with progress tracking, building data pipelines that move large datasets between S3 and other storage systems, and creating backup solutions that efficiently handle thousands of files. The library's support for multipart uploads makes it well suited to video streaming platforms, scientific data processing applications, and any system handling files larger than 100 MB.

The comprehensive error handling, built-in waiters for eventual consistency, and support for S3 features such as encryption, versioning, and lifecycle management make it a complete solution for production-grade S3 integration in PHP applications.
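For the multipart scenarios mentioned above, the part size must respect S3's documented limits: every part except the last must be at least 5 MiB, and an upload may contain at most 10,000 parts. A sketch of a part-size picker (`choosePartSize` is our own name, not a library function):

```php
<?php

// Illustrative helper (not part of AsyncAws): pick a multipart part
// size for a file, respecting S3's limits of 5 MiB minimum per part
// and 10,000 parts maximum per upload.
function choosePartSize(int $fileSize, int $preferred = 10 * 1024 * 1024): int
{
    $minPart = 5 * 1024 * 1024;
    $maxParts = 10000;

    $size = max($preferred, $minPart);

    // Double the part size until the file fits in 10,000 parts.
    while ((int) ceil($fileSize / $size) > $maxParts) {
        $size *= 2;
    }

    return $size;
}
```

For a 50 MB file this returns the preferred 10 MiB; for very large files it grows the part size until the 10,000-part cap is satisfied.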