Amazon S3 (Simple Storage Service) is a fully managed object storage service from AWS. It stores any amount of data and makes it available via simple API calls, forming the backbone of file storage for serverless applications.
S3 replaces self-managed file servers with a fully managed service that handles storage, availability, durability, and access control out of the box.
Store any number of objects, each up to 5 TB, across unlimited buckets. S3 handles replication and durability (99.999999999%) automatically with no capacity planning required.
Trigger Lambda functions automatically when objects are created, modified, or deleted. Build event-driven pipelines for image processing, data transformation, and more.
Fine-grained permissions via bucket policies, ACLs, and IAM roles. Server-side encryption with S3-managed keys, KMS keys, or customer-provided keys. Presigned URLs for temporary access.
Serve HTML, CSS, JavaScript, and media files directly from S3 with a public URL. Combine with CloudFront for global CDN distribution and custom domain support.
Choose from Standard, Infrequent Access, One Zone IA, Glacier, and Deep Archive. Lifecycle rules automatically transition or expire objects to minimize costs.
Enable versioning to keep every revision of every object. Cross-region replication copies objects to buckets in other regions for disaster recovery and compliance.
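If the service defines its own bucket, versioning is a single CloudFormation property in the resources section of serverless.yml. A minimal sketch (the bucket name is illustrative and must be globally unique):

```yaml
resources:
  Resources:
    UploadsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-uploads        # illustrative; bucket names are globally unique
        VersioningConfiguration:
          Status: Enabled             # keep every revision of every object
```

Note that cross-region replication requires versioning to be enabled on both the source and destination buckets.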
S3 is a managed object store. You upload files (objects) into buckets, and AWS handles storage, replication, and delivery so you never manage disk infrastructure.
You create a bucket and upload objects via the AWS Console, CLI, SDK, or a presigned URL. Each object gets a unique key (path) within the bucket.
S3 replicates the object across multiple facilities within your chosen region, providing 99.999999999% durability. Lifecycle rules can transition objects between storage classes automatically.
Retrieve objects via the S3 API, a public URL, or a presigned URL. S3 event notifications can trigger Lambda functions, SQS queues, or SNS topics whenever objects change.
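To make the notification payload concrete, here is a minimal Node.js handler sketch (the handler name is illustrative; field access follows the standard S3 event record shape). The decode step matters because S3 URL-encodes object keys in event payloads and uses `+` for spaces:

```javascript
// Minimal Lambda handler for S3 event notifications.
// Each event carries one record per changed object.
const processUpload = async (event) => {
  return event.Records.map((record) => ({
    bucket: record.s3.bucket.name,
    // S3 URL-encodes object keys in events; '+' stands for a space.
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
    size: record.s3.object.size,
  }));
};

module.exports = { processUpload };
```

From here the handler would typically fetch the object with the AWS SDK and process it; the parsing shown above is the part that trips people up most often.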
S3 connects directly with many AWS services, making it a central hub for data in your architecture:
- **AWS Lambda** — Trigger functions when objects are created, modified, or deleted. The most common integration for serverless file processing.
- **Amazon CloudFront** — Distribute S3 content globally via CDN edge locations for low-latency downloads and static site hosting.
- **Amazon Athena** — Run SQL queries directly against data stored in S3 without loading it into a database. Great for analytics and log analysis.
- **IAM and KMS** — Control access with IAM policies and bucket policies. Encrypt objects at rest with KMS-managed keys or S3-managed keys.
- **SQS and SNS** — Receive S3 event notifications in queues or topics to decouple processing from uploads.
- **S3 Glacier** — Archive infrequently accessed data at a fraction of the cost. Lifecycle rules automate transitions from Standard to Glacier storage.
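Lifecycle rules like these are declared on the bucket itself. A hedged CloudFormation sketch for serverless.yml's resources section (bucket name and prefix are illustrative) that moves logs to Glacier after 30 days and deletes them after a year:

```yaml
resources:
  Resources:
    LogsBucket:
      Type: AWS::S3::Bucket
      Properties:
        LifecycleConfiguration:
          Rules:
            - Id: ArchiveThenExpireLogs
              Status: Enabled
              Prefix: logs/                 # only applies to objects under logs/
              Transitions:
                - TransitionInDays: 30      # Standard -> Glacier after 30 days
                  StorageClass: GLACIER
              ExpirationInDays: 365         # delete after one year
```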
The Serverless Framework makes it simple to create S3 buckets and wire up Lambda functions to respond to S3 events. Define your bucket and event triggers directly in serverless.yml:
```yaml
service: my-s3-app

provider:
  name: aws
  runtime: nodejs22.x

functions:
  # Trigger on new uploads
  processUpload:
    handler: handler.processUpload
    events:
      - s3:
          bucket: my-uploads
          event: s3:ObjectCreated:*

  # Filter by prefix and suffix
  processImage:
    handler: handler.processImage
    events:
      - s3:
          bucket: my-uploads
          event: s3:ObjectCreated:*
          rules:
            - prefix: images/
            - suffix: .jpg

  # Generate presigned upload URL
  getUploadUrl:
    handler: handler.getUploadUrl
    events:
      - httpApi:
          path: /upload-url
          method: get
```

The framework handles all CloudFormation resource creation: S3 bucket configuration, Lambda permissions, IAM roles, and event notification setup. It also supports existing buckets, custom bucket policies, CORS configuration, and lifecycle rules.
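For buckets created outside the service, the Serverless Framework can attach event notifications to an existing bucket rather than creating a new one. A sketch (bucket and handler names are illustrative):

```yaml
functions:
  processLegacyUpload:
    handler: handler.processLegacyUpload
    events:
      - s3:
          bucket: legacy-uploads      # a bucket that already exists
          event: s3:ObjectCreated:*
          existing: true              # attach notifications instead of creating the bucket
```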
S3 imposes no practical limits on the number of objects or total data stored in a bucket. A single AWS account can hold hundreds of buckets, each containing petabytes of data, while still providing low-latency access to every object. You never provision capacity or worry about running out of disk space.
Getting started takes minutes. Create a bucket, choose access settings, and start uploading. AWS handles replication across multiple facilities, hardware failures, and capacity planning. There are no servers to patch, no disks to monitor, and no backup jobs to schedule.
S3 integrates natively with Lambda for event-driven processing, CloudFront for global CDN delivery, Athena for SQL analytics, RDS for database backups, and dozens of other services. These integrations let you build complex workflows with very little custom code.
Choose from Standard, Infrequent Access, One Zone IA, Intelligent-Tiering, Glacier Instant Retrieval, Glacier Flexible Retrieval, and Glacier Deep Archive. Lifecycle policies automate transitions between classes, so frequently accessed data stays fast while archived data costs a fraction of a cent per GB.
S3 is the right choice for most serverless storage needs, but these constraints are worth understanding upfront.
S3 pricing is pay-per-use, which works well at small scale. As data accumulates in production, storage and data transfer costs can rise sharply. Implement lifecycle rules early to expire or transition objects you no longer need.
S3 offers many storage classes, each with different pricing, retrieval latency, and minimum storage duration requirements. Choosing the wrong class can lead to unexpected charges. Spend time understanding which classes fit your access patterns before committing.
Because S3 never says no to more data, teams can make poor decisions about what to store and why. Without periodic reviews of bucket contents and spending, costs can spiral. Establish tagging conventions and audit storage regularly.
S3 does not automatically categorize or label your objects. Understanding what is stored in a bucket requires tagging objects at creation time. Set up a tagging convention early and enforce it across all applications that write to S3.
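S3's documented tag limits (10 tags per object, keys up to 128 characters, values up to 256) are easy to enforce with a small pre-upload check. A sketch, not an official SDK utility:

```javascript
// Validate an object tag set against S3's documented limits before upload.
const MAX_TAGS = 10;
const MAX_KEY_LEN = 128;
const MAX_VALUE_LEN = 256;

function validateTags(tags) {
  const entries = Object.entries(tags);
  const errors = [];
  if (entries.length > MAX_TAGS) {
    errors.push(`too many tags: ${entries.length} > ${MAX_TAGS}`);
  }
  for (const [key, value] of entries) {
    if (key.length > MAX_KEY_LEN) errors.push(`tag key too long: ${key}`);
    if (String(value).length > MAX_VALUE_LEN) errors.push(`tag value too long for key: ${key}`);
  }
  return errors; // empty array means the tag set is valid
}

module.exports = { validateTags };
```

Running this check in every service that writes to S3 keeps the tagging convention enforceable rather than aspirational.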
Once created, a bucket is permanently bound to its region. To relocate data, you must create a new bucket in the target region and copy objects over, incurring transfer charges. Plan your region strategy before creating production buckets.
S3 pricing is based on storage volume, number of requests, and data transfer. Costs vary by storage class and region.
The AWS Free Tier (first 12 months) includes:

- 5 GB of S3 Standard storage
- 20,000 GET requests per month
- 2,000 PUT requests per month
| Item | Price |
|---|---|
| S3 Standard storage | $0.023 / GB / month |
| S3 Infrequent Access | $0.0125 / GB / month |
| S3 One Zone IA | $0.01 / GB / month |
| S3 Glacier Instant Retrieval | $0.004 / GB / month |
| S3 Glacier Flexible Retrieval | $0.0036 / GB / month |
| S3 Glacier Deep Archive | $0.00099 / GB / month |
| PUT, COPY, POST, LIST requests | $0.005 / 1,000 requests |
| GET, SELECT requests | $0.0004 / 1,000 requests |
| Data transfer out (internet) | $0.09 / GB (lower at volume) |
| Data transfer out (CloudFront) | Free (CloudFront pricing applies) |
Example: 3 TB stored, 3M PUT requests, 3M GET requests, and 1.5 TB transferred out per month:
Storage: 3 TB (3,072 GB) x $0.023/GB = $70.66/month
PUT requests: 3M x $0.005/1K = $15.00/month
GET requests: 3M x $0.0004/1K = $1.20/month
Data transfer out: 1.5 TB (1,536 GB) x $0.09/GB = $138.24/month
Total: approximately $225.10/month. Use lifecycle rules to expire old objects and CloudFront to reduce transfer costs.
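Arithmetic like this is easy to wrap in a helper for sanity-checking scenarios. A sketch using the us-east-1 S3 Standard list prices from the table above (prices change by region and over time, so treat them as inputs):

```javascript
// Rough monthly S3 bill from list prices (us-east-1, S3 Standard).
// Ignores free tier, tiered transfer discounts, and other storage classes.
const PRICES = {
  storagePerGb: 0.023,   // $/GB-month
  putPer1k: 0.005,       // $/1,000 PUT/COPY/POST/LIST requests
  getPer1k: 0.0004,      // $/1,000 GET/SELECT requests
  transferPerGb: 0.09,   // $/GB out to the internet
};

function monthlyCost({ storageGb, putRequests, getRequests, transferGb }) {
  return (
    storageGb * PRICES.storagePerGb +
    (putRequests / 1000) * PRICES.putPer1k +
    (getRequests / 1000) * PRICES.getPer1k +
    transferGb * PRICES.transferPerGb
  );
}

module.exports = { monthlyCost };
```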
Retrieval fees apply on top of storage costs. Choose a retrieval tier based on urgency.
| Storage Class | Expedited | Standard | Bulk |
|---|---|---|---|
| Glacier Flexible Retrieval | $0.03 / GB | $0.01 / GB | $0.0025 / GB |
| Glacier Deep Archive | N/A | $0.02 / GB | $0.0025 / GB |
See the official S3 pricing page for current regional rates. Pricing varies by region, with GovCloud regions costing nearly twice as much as us-east-1.
Use S3 when you need to store user-uploaded files (images, videos, documents), host static websites or single-page applications, build event-driven processing pipelines triggered by file uploads, store application data that does not fit in a database, or archive logs and audit trails for long-term retention.
Consider alternatives when you need a file system for a running EC2 instance (use Amazon EBS), need ultra-low-latency global downloads (add Amazon CloudFront in front of S3), or need a relational database (use Amazon RDS). For long-term archival at the lowest cost, use S3 Glacier storage classes with lifecycle rules.
S3 is the default choice for object storage on AWS, but other services may be a better fit depending on your workload, budget, or cloud provider.
Block storage attached to EC2 instances. Not object storage. Use for databases, application file systems, or any workload that requires a mounted disk on a running instance.
Shared file system accessible from multiple EC2 instances and Lambda functions simultaneously. Use when several compute resources need to read and write the same files concurrently.
Archive storage starting at $0.004/GB per month. Use for backups, compliance archives, and data that is rarely accessed. Retrieval can take minutes to hours depending on the tier.
S3-compatible object storage at roughly one quarter the cost of S3 Standard. Storage runs about $0.005/GB per month with $0.01/GB data transfer. A strong option for cost-sensitive workloads.
Google's equivalent to S3 with similar storage classes and pricing. Use if your infrastructure already runs on Google Cloud Platform or if you need tight integration with BigQuery and other GCP services.
Microsoft's object storage service with Hot, Cool, and Archive tiers. Use if your organization is already on Azure or needs integration with Azure-native services like Azure Functions and Cosmos DB.
Key service limits to keep in mind when designing your storage architecture. Most soft limits can be increased through an AWS support request.
| Resource | Limit | Notes |
|---|---|---|
| Max object size | 5 TB | Single PUT limited to 5 GB. Use multipart upload for larger objects. |
| Buckets per account | 100 | Adjustable up to 1,000 via support request. |
| Objects per bucket | Unlimited | No cap on the number of objects stored in a single bucket. |
| Bucket name | 3 to 63 characters | Must be globally unique across all AWS accounts. |
| Request rate per prefix | 5,500 GET/s, 3,500 PUT/s | Distribute keys across prefixes to scale beyond these limits. |
| Lifecycle rules per bucket | 1,000 | Each rule can target objects by prefix, tag, or size. |
| Tags per object | 10 | Key-value pairs for cost allocation and access control. |
| Metadata per object | 2 KB total | Combined size of all user-defined metadata headers. |
| Bucket region | Fixed at creation | Buckets cannot be moved between regions after creation. |
Common questions about Amazon S3.
Deploy an S3 bucket with Lambda event triggers in minutes using the Serverless Framework.