Automate S3 Cost Savings with AWS Config & Lifecycle Rules

Utkarsh Rastogi

Overview

This article explains how AWS Config can automatically detect newly created or misconfigured S3 buckets and trigger a Lambda function that applies lifecycle rules. The result is an automated compliance mechanism that optimizes S3 storage costs in a serverless, hands-off manner.

Problem Statement

In many AWS environments, S3 buckets are created without uniform lifecycle policies, so unused objects and outdated versions accumulate and drive up storage costs. Without automation, organizations miss the opportunity to transition data to cheaper storage classes such as STANDARD_IA or GLACIER. Enforcing lifecycle rules whenever buckets are created or modified is therefore essential for both cost reduction and compliance.

Solution Architecture

How AWS Config Triggers Work

Evaluation Model

  • Triggered when resources are created or modified.

  • Re-evaluates a resource whenever a change is detected (e.g., a lifecycle policy update).

  • Does not evaluate data-plane events such as object uploads or data mutations (e.g., PutObject).
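The configuration-change event that AWS Config hands to the Lambda carries invokingEvent as a JSON string. A minimal sketch of parsing it, with illustrative sample values (the helper name and return shape are my own, not from the repo):

```python
import json

def parse_config_event(event):
    """Extract the bucket name and change status from an AWS Config
    configuration-change event (invokingEvent arrives as a JSON string)."""
    invoking_event = json.loads(event['invokingEvent'])
    item = invoking_event['configurationItem']
    return {
        'bucket_name': item['resourceName'],
        'status': item['configurationItemStatus'],
        'capture_time': item['configurationItemCaptureTime'],
    }

# Sample shape of a configuration-change event (illustrative values)
sample_event = {
    'invokingEvent': json.dumps({
        'configurationItem': {
            'resourceType': 'AWS::S3::Bucket',
            'resourceName': 'my-demo-bucket',
            'configurationItemStatus': 'ResourceDiscovered',
            'configurationItemCaptureTime': '2024-01-01T00:00:00.000Z',
        },
        'messageType': 'ConfigurationItemChangeNotification',
    }),
    'resultToken': 'dummy-token',
}

print(parse_config_event(sample_event)['bucket_name'])  # my-demo-bucket
```

Because PutObject and other data events never reach this code path, the function only runs when the bucket's configuration itself changes.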

Lambda Logic

The Lambda function inspects an S3 bucket's lifecycle rules to verify that the required policies are in place. It appends any missing transition or cleanup rules while preserving the ones that already exist, then reports the resulting compliance status back to AWS Config so the bucket remains aligned with the designated lifecycle management standards.

Code Snippet

import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
config = boto3.client('config')

def lambda_handler(event, context):
    invoking_event = json.loads(event['invokingEvent'])
    bucket_name = invoking_event['configurationItem']['resourceName']

    # Default to compliant; flip to NON_COMPLIANT if rules have to be added
    existing_rules = []
    compliance_type = "COMPLIANT"
    annotation = f"Lifecycle rules for {bucket_name} are compliant."

    try:
        print("Checking if lifecycle configuration exists...")
        response = s3.get_bucket_lifecycle_configuration(Bucket=bucket_name)
        existing_rules = response['Rules']
        print(f"Found {len(existing_rules)} existing lifecycle rule(s).")
    except ClientError as e:
        if e.response['Error']['Code'] == 'NoSuchLifecycleConfiguration':
            print(f"No lifecycle configuration found for {bucket_name}.")
        else:
            print(f"Unexpected error occurred: {e}")
            raise

    # Check if a transition rule already exists
    has_transition = any(
        'Transitions' in rule or 'NoncurrentVersionTransitions' in rule
        for rule in existing_rules
    )

    if not has_transition:
        print("No transition rule found. Appending standard transition rule...")
        transition_rule = {
            'ID': 'TransitionOnly',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            'Transitions': [
                {'Days': 90, 'StorageClass': 'STANDARD_IA'}
            ],
            'NoncurrentVersionTransitions': [
                {'NoncurrentDays': 90, 'StorageClass': 'STANDARD_IA'}
            ]
        }
        existing_rules.append(transition_rule)
        compliance_type = "NON_COMPLIANT"
        annotation = f"Transition rule was missing. Appended standard transition rule to {bucket_name}."
    else:
        print("Transition rule already exists. No changes required.")

    # Always make sure a cleanup rule is present too
    has_cleanup = any(
        'AbortIncompleteMultipartUpload' in rule or
        ('Expiration' in rule and 'ExpiredObjectDeleteMarker' in rule['Expiration'])
        for rule in existing_rules
    )

    if not has_cleanup:
        print("Cleanup rules missing. Appending cleanup rule...")
        cleanup_rule = {
            'ID': 'CleanupExpiredAndMultipart',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            'AbortIncompleteMultipartUpload': {
                'DaysAfterInitiation': 1
            },
            'Expiration': {
                'ExpiredObjectDeleteMarker': True
            }
        }
        existing_rules.append(cleanup_rule)
        compliance_type = "NON_COMPLIANT"
        annotation += " Cleanup rules were also added."
    else:
        print("Cleanup rules already exist.")

    # If any rules were added, push the updated configuration back to the bucket
    if compliance_type == "NON_COMPLIANT":
        print("Updating bucket lifecycle configuration...")
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket_name,
            LifecycleConfiguration={'Rules': existing_rules}
        )

    # Report the evaluation result back to AWS Config
    print("Reporting to AWS Config...")
    config.put_evaluations(
        Evaluations=[
            {
                'ComplianceResourceType': 'AWS::S3::Bucket',
                'ComplianceResourceId': bucket_name,
                'ComplianceType': compliance_type,
                'Annotation': annotation,
                'OrderingTimestamp': invoking_event['configurationItem']['configurationItemCaptureTime']
            }
        ],
        ResultToken=event['resultToken']
    )
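The two compliance checks in the handler can be factored into a small pure function, which makes the logic easy to test without AWS credentials. This is a refactoring sketch; the function name and return shape are my own, not from the repo:

```python
def missing_rules(existing_rules):
    """Return which rule categories are absent from a list of S3
    lifecycle rules, mirroring the handler's compliance checks."""
    has_transition = any(
        'Transitions' in r or 'NoncurrentVersionTransitions' in r
        for r in existing_rules
    )
    has_cleanup = any(
        'AbortIncompleteMultipartUpload' in r or
        ('Expiration' in r and 'ExpiredObjectDeleteMarker' in r['Expiration'])
        for r in existing_rules
    )
    missing = []
    if not has_transition:
        missing.append('transition')
    if not has_cleanup:
        missing.append('cleanup')
    return missing

print(missing_rules([]))                  # ['transition', 'cleanup']
print(missing_rules([{'Transitions': [{'Days': 90, 'StorageClass': 'STANDARD_IA'}]}]))  # ['cleanup']
```

Keeping the decision logic separate from the boto3 calls also makes it straightforward to extend later, for example to require a GLACIER transition as well.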

Config Rule

The custom Config rule is scoped to AWS::S3::Bucket resources and triggers the Lambda evaluation on configuration changes (see configrules.yaml in the GitHub repo for the full definition).

Cost Optimization Impact

The Lambda-based S3 Lifecycle Enforcer reduces costs by ensuring objects are transitioned to more economical storage classes (such as STANDARD_IA or Glacier) and removed once they are no longer needed. This cuts the storage costs of keeping outdated or rarely accessed data in S3 Standard. By enforcing uniform lifecycle policies, it minimizes human error, keeps the system efficient, and prevents unnecessary storage spend.
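To put rough numbers on the impact, here is a back-of-the-envelope savings calculation. The per-GB monthly prices below are illustrative us-east-1 figures; verify them against the current Amazon S3 pricing page:

```python
# Illustrative us-east-1 per-GB monthly storage prices (check current pricing)
PRICE_PER_GB = {
    'STANDARD': 0.023,
    'STANDARD_IA': 0.0125,
}

def monthly_savings(gb, from_class='STANDARD', to_class='STANDARD_IA'):
    """Monthly storage savings from transitioning `gb` gigabytes."""
    return gb * (PRICE_PER_GB[from_class] - PRICE_PER_GB[to_class])

# Transitioning 1 TB of cold data from STANDARD to STANDARD_IA
print(f"${monthly_savings(1024):.2f}/month")  # $10.75/month
```

Note that STANDARD_IA also charges per-GB retrieval fees and has a minimum storage duration, so the transition pays off only for data that is genuinely accessed infrequently.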

Cost Breakdown for AWS Config Monitoring of S3 Buckets

Here's a breakdown of the AWS Config costs for monitoring S3:

Configuration Items:

  • Price: $0.003 per configuration item recorded.

  • Example: 100 S3 buckets, each with 2 changes per month = 200 configuration items.

  • Cost: 200 * $0.003 = $0.60.

Custom Rule Evaluations:

  • Price: $0.001 per evaluation.

  • Example: 200 rule evaluations.

  • Cost: 200 * $0.001 = $0.20.

Total Monthly Cost:

  • Configuration Items Cost: $0.60

  • Custom Rule Evaluations Cost: $0.20

  • Total: $0.60 + $0.20 = $0.80

So, the estimated total monthly cost for monitoring 100 S3 buckets with 2 changes per month and custom rule evaluations is ~$0.80.
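The same arithmetic as a small calculator, using the per-item and per-evaluation prices quoted above:

```python
CONFIG_ITEM_PRICE = 0.003   # USD per configuration item recorded
RULE_EVAL_PRICE = 0.001     # USD per custom rule evaluation

def monthly_config_cost(buckets, changes_per_bucket):
    """Estimated monthly AWS Config cost, assuming one custom rule
    evaluation per recorded configuration item."""
    items = buckets * changes_per_bucket
    evaluations = items
    return items * CONFIG_ITEM_PRICE + evaluations * RULE_EVAL_PRICE

# 100 buckets, 2 changes each per month
print(f"${monthly_config_cost(100, 2):.2f}")  # $0.80
```

This scales linearly, so even 1,000 buckets with the same change rate would cost only about $8 per month to monitor.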

Deploy with CloudFormation

GitHub Repo for Full Code: https://github.com/Utkarshlearner/aws-s3-lifecycle-enforcer

Deployment Steps

Make sure the AWS CLI is configured (aws configure) before running these commands.

  1. Create IAM Role

    aws cloudformation create-stack --stack-name S3-LifeCycle-IAM-Stack --template-body file://iam.yaml --capabilities CAPABILITY_NAMED_IAM

  2. Create Lambda

    aws cloudformation create-stack --stack-name S3-LifeCycle-Lambda-Stack --template-body file://lambda.yaml --capabilities CAPABILITY_NAMED_IAM

  3. Create Config Rules

    aws cloudformation create-stack --stack-name S3-LifeCycle-ConfigRules-Stack --template-body file://configrules.yaml --capabilities CAPABILITY_NAMED_IAM

Conclusion

Using AWS Config and Lambda, the S3 Lifecycle Enforcer provides a robust, automated way to guarantee adherence to S3 lifecycle policies. By automating the transition and cleanup of objects based on defined rules, it keeps your S3 storage cost-optimized while drastically reducing manual effort and human error. AWS Config continuously monitors configuration changes and triggers the Lambda function, enforcing the required lifecycle rules without any human intervention.

Additional References

https://aws.amazon.com/config/pricing/


"Thank you for reading! If you found this blog helpful, don't forget to subscribe and follow for more insightful content. Your support keeps me motivated to bring you valuable insights. Stay updated and never miss out on our latest posts. Feel free to leave comments or suggestions for future topics. Happy learning!"

https://awslearner.hashnode.dev/amazon-web-services-via-category

https://awslearner.hashnode.dev/aws-beginner-level-project-ideas
