No boto3. No SDK. Just Files.

How Amazon S3 Files Turns Your Bucket Into a Drive

Amazon S3 Files just went GA. Mount your bucket like a drive, write files with echo, read them with cat — no boto3, no SDK. Here's my hands-on setup from scratch, plus a full pricing breakdown.

Every developer knows the pain: you just want to read a file, but it's in S3, so suddenly you're writing boto3 sessions, get_object calls, and decode wrappers. That tax is gone. With S3 Files you mount the bucket like a drive and use standard file commands. Here's my setup from scratch.

What Is S3 Files, Actually?

When you enable S3 Files on a bucket, AWS exposes a mount target inside your VPC over NFSv4.1. Mount it on EC2 and /mnt/s3files/ behaves exactly like a local folder — ls, cat, echo, Python's open(), all of it. Writes go straight to S3. No sync step, no copy, no boto3.

Key insight

No boto3. No SDK. No get_object. Just files. The mount and S3 API always point at the same data — no duplication, no delay.


Step-by-Step Setup

Step 1 — Create the File System

S3 Console → Files → File systems → Create. Point it at your bucket. AWS auto-creates the IAM role and file system ID.

Step 2 — Understand the Sync Config

The sync config controls caching. Mine: 30-day expiry, 128 KB max cached size. Files larger than 128 KB are served directly from S3 at no charge.
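That 128 KB threshold decides which tier serves each read. A tiny model of the decision as I understand it from my config (the threshold is a parameter, since yours may differ):

```python
CACHE_MAX_BYTES = 128 * 1024  # my sync config's max cached size

def serve_path(size_bytes: int, cache_max: int = CACHE_MAX_BYTES) -> str:
    """Model which tier serves a read under this sync config.

    Files at or under the threshold are cached on the high-performance
    layer; larger files stream directly from S3 at no read charge.
    """
    return "high-performance cache" if size_bytes <= cache_max else "direct from S3"
```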

Step 3 — Configure Mount Targets

AWS creates one mount target per AZ — all showing Available. Your EC2 connects to the nearest one automatically.

Step 4 — Set the Bucket Policy

Copy the bucket policy AWS provides and paste it into the bucket's Permissions tab — allows the file system's IAM role to read and write.

Step 5 — Security Group: Allow NFS Port 2049

Allow inbound NFS TCP 2049 from your EC2's security group to the mount target. If your mount hangs, this is almost always why.

Step 6 — Attach to EC2

Click Attach → EC2 instance. Pick your instance, set mount path to /mnt/s3files, and AWS gives you the exact command.

Step 7 — Install EFS Utils and Mount

SSH in and confirm amazon-efs-utils 3.0.0 is installed — it ships the mount.s3files helper.

```shell
# Install the EFS utils package if it isn't already present
# (Amazon Linux shown; other distros vary)
sudo yum install -y amazon-efs-utils

# Create the mount point
sudo mkdir -p /mnt/s3files

# Mount your S3 bucket as a filesystem
sudo mount -t s3files fs-0ee39847848b7c254:/ /mnt/s3files

# Confirm it's mounted
df -h | grep s3files
# 127.0.0.1:/  8.0E  0  8.0E  0%  /mnt/s3files
```

The Demo — Five Commands, Zero boto3

```shell
# Write like it's your laptop
echo "Hello from EC2 — no boto3 needed" > /mnt/s3files/hello.txt

# Append to it — this was impossible in native S3 before
echo "Line 2 appended directly" >> /mnt/s3files/hello.txt
echo "Line 3 appended directly" >> /mnt/s3files/hello.txt

# Read it back
cat /mnt/s3files/hello.txt
```

Prove It's Actually in S3

```shell
# Check S3 directly
aws s3 ls s3://dem-s3-files-new-vishnu/
# hello.txt  ✅

# Read directly from S3 — identical content
aws s3 cp s3://dem-s3-files-new-vishnu/hello.txt -
```

Pricing — What You'll Actually Pay

You pay for data actively touched through the filesystem. Everything else in your bucket that you never access through the mount stays at normal S3 Standard rates — untouched by S3 Files charges.

| Charge | Rate | When it applies |
| --- | --- | --- |
| High-performance storage | $0.30 / GB-month | Actively accessed files cached on EFS layer |
| Read charge | $0.03 / GB | Small files (<128 KB) served from high-perf storage |
| Write charge | $0.06 / GB | All writes through the mount |
| Large file reads | $0.00 | Large sequential reads served directly from S3 — free |
| Underlying S3 storage | $0.023 / GB-month | Normal S3 Standard rate — always applies |
⚠ The Small File Gotcha

S3 Files has a minimum billable size of 6 KiB. First read of a small file (<128 KB) triggers an import at $0.06/GB. An ls on a directory with 10,000 files is 10,000 metadata reads. Model your I/O patterns before committing.
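To put numbers on the gotcha, here's a back-of-envelope calculator built from the figures above (6 KiB minimum billable size, $0.06/GB on first touch). The billing model is my reading of the pricing, and I'm counting GB as GiB, so treat the output as an estimate:

```python
MIN_BILLABLE_BYTES = 6 * 1024   # 6 KiB minimum billable size
IMPORT_RATE_PER_GB = 0.06       # first-read import charge, $/GB
GIB = 1024 ** 3                 # treating billed GB as GiB here

def first_touch_cost(num_files: int, avg_bytes: int) -> float:
    """Estimated cost of first-reading `num_files` small files,
    each billed at no less than the 6 KiB minimum."""
    billable = num_files * max(avg_bytes, MIN_BILLABLE_BYTES)
    return billable / GIB * IMPORT_RATE_PER_GB

# 10,000 files of 1 KB each still bill as 10,000 × 6 KiB
print(round(first_touch_cost(10_000, 1024), 4))  # prints 0.0034
```

A fraction of a cent per sweep, but run it against your real file counts and access frequency before assuming it stays negligible.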

Worked Example: 100 GB Bucket

Your app reads 10 GB — 94% large files, 6% small. Writes 1 GB back.

| Item | Calculation | Cost |
| --- | --- | --- |
| 9.4 GB large reads | Served direct from S3 | $0.00 |
| 0.6 GB small reads | 0.6 × $0.03/GB | $0.018 |
| 1 GB writes | 1 × $0.06/GB | $0.06 |
| 100 GB S3 Standard storage | 100 × $0.023/GB-month | $2.30 |
| Total | | ~$2.38 / month |

Compare that to a dedicated 100 GB EFS filesystem at $0.30/GB-month = $30/month. The savings come because most of your data never touches the high-performance tier.
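The worked example reduces to a few lines. Rates are the ones from the pricing table; the read/write mix is this example's assumption:

```python
SMALL_READ_RATE = 0.03    # $/GB, small files from high-perf storage
WRITE_RATE = 0.06         # $/GB, all writes through the mount
S3_STANDARD_RATE = 0.023  # $/GB-month, underlying storage

def monthly_cost(stored_gb, small_read_gb, large_read_gb, write_gb):
    """Monthly bill under the pricing table above; large reads are free."""
    return (small_read_gb * SMALL_READ_RATE
            + write_gb * WRITE_RATE
            + large_read_gb * 0.0
            + stored_gb * S3_STANDARD_RATE)

print(round(monthly_cost(100, 0.6, 9.4, 1.0), 2))  # prints 2.38
```

Swap in your own read mix; the large-read share is the lever that keeps the bill close to plain S3 storage.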

S3 Files vs EFS — When to Use What

| Scenario | Best Choice |
| --- | --- |
| Data already in S3, mostly large files | S3 Files — no duplication, lower cost |
| Millions of tiny files accessed constantly | EFS — no per-small-file import overhead |
| Need both S3 API and file access simultaneously | S3 Files — both views, one copy of data |
| ML training on large dataset files | S3 Files — large reads are free |
| Migrating on-prem NAS workloads | FSx — purpose-built NAS compatibility |
| Running a database (MySQL, Postgres) | Neither — databases need strong fsync semantics |

Things to Know Before You Go All In

- **Linux and NFSv4.1 only** — no Windows support on EC2.
- **Same VPC required** — EC2 must be in the same VPC as the mount targets, or have peering configured.
- **Rename and move are expensive** — renaming a directory is metered per object with that prefix. Moving 50,000 files = 50,000 individual operations at minimum billable size.
- **Not for databases** — do not run MySQL or PostgreSQL against an S3 Files mount.
- **Cold data feels it** — first access to data not in cache hits S3 latency. Workloads that scan huge datasets once and never revisit them will notice.
- **Security group port 2049** — if your mount command hangs, check inbound NFS 2049 first. It's almost always that.
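When a mount hangs, I check reachability of port 2049 before anything else. A small standard-library helper; the host argument is whatever your mount target's address is, which I've left out here:

```python
import socket

def nfs_port_open(host: str, port: int = 2049, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the NFS port succeeds.

    A timeout or refusal from the mount target usually means the
    security group is not allowing inbound 2049 from this instance.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusal, timeout, and DNS failure
        return False
```

If this returns False from your EC2 instance, fix the security group rule before touching anything else.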

The Bigger Picture

Available now

S3 Files is GA in 34 AWS Regions. Go to S3 → Files → File systems in your console. Takes under 5 minutes to set up.