Cost Optimization Journey: Migrating a Monolithic Application to AWS
Migrating a monolithic application to the cloud is a transformative journey, blending innovation with complexity. For our network management system, moving from on-premises to AWS unlocked scalability and flexibility but also introduced unexpected costs. With over 3,000 EC2 instances and 60+ RDS instances, our annual AWS bill approached $3 million. This blog details our phased cost optimization journey, sharing strategies that reduced costs by over 50% and lessons for organizations navigating similar migrations.
Application Overview
Our application, a network management system, manages firewall devices, prioritizing customer data security. Data resides in RDS databases and EFS file systems. To ensure isolation, each customer receives a dedicated EC2 instance, a separate RDS schema, and an EFS access point. The application is deployed in a VPC with public subnets for EC2 instances, private subnets for RDS, and EFS for high-availability file storage. We maintain two environments: production and staging (used for QA and development).
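The per-tenant isolation model above can be sketched as a simple data structure. This is a minimal illustration only; the type and the naming scheme (`TenantStack`, `provision_names`) are hypothetical, and real resource IDs come from AWS at provisioning time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantStack:
    """Per-customer resources: dedicated EC2 instance, RDS schema, EFS access point."""
    tenant_id: str
    ec2_instance_id: str   # dedicated compute in a public subnet
    rds_schema: str        # separate schema on a shared RDS instance
    efs_access_point: str  # isolated entry point into the shared file system

def provision_names(tenant_id: str) -> TenantStack:
    """Derive deterministic resource names for a new tenant (illustrative only)."""
    return TenantStack(
        tenant_id=tenant_id,
        ec2_instance_id=f"i-{tenant_id}",     # placeholder; actual IDs are AWS-assigned
        rds_schema=f"tenant_{tenant_id}",
        efs_access_point=f"fsap-{tenant_id}",
    )
```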
The Cost Challenge
As customer onboarding, trials, demos, development, and QA activities scaled, our infrastructure grew rapidly. Within a year, we were running over 3,000 EC2 instances (c5.2xlarge) and 60+ RDS instances (db.m6g.8xlarge). This included:
Paid customer instances
Free/trial instances for potential customers
Demo servers for marketing and sales
Development and QA instances for daily activities
Automation scripts creating and destroying instances daily
The resulting $3 million annual bill prompted us to prioritize cost optimization.
Cost Optimization Phases
Phase 1: Adopting Savings Plans and Reserved Instances
We began by leveraging AWS Savings Plans and Reserved Instances (RIs). After comparing Compute Savings Plans (flexible across instance types, containers, and Lambda) with EC2 Instance Savings Plans (tied to specific instance types), we chose Compute Savings Plans to accommodate potential instance type changes driven by performance needs. For RDS, we purchased one- and three-year RIs. This reduced costs by approximately 15%.
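To make the Savings Plans math concrete, here is a back-of-the-envelope comparison. The hourly rate and the ~15% effective discount are hypothetical placeholders, not our actual negotiated rates; the point is only how a small per-hour discount compounds across a large fleet.

```python
def annual_cost(hourly_rate: float, hours: float = 8760) -> float:
    """Annualize an hourly rate (8,760 hours per non-leap year)."""
    return hourly_rate * hours

# Hypothetical on-demand rate for one c5.2xlarge, and an assumed
# ~15% effective discount under a Compute Savings Plan.
on_demand = annual_cost(0.34)
savings_plan = annual_cost(0.34 * (1 - 0.15))

fleet = 3000
print(f"Fleet savings per year: ${(on_demand - savings_plan) * fleet:,.0f}")
```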
Phase 2: RDS Downsizing
Using CloudWatch and Datadog for resource monitoring, including RDS tenant-level metrics, we observed:
CPU utilization below 20%
60% free storage
Connection pools of 60–100 per instance
We downsized RDS instances from db.m6g.8xlarge (32 vCPUs) to db.r6g.4xlarge (16 vCPUs), prioritizing memory-optimized instances to maintain database connection limits.[^1] This reduced RDS costs by 30%.
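The footnote's formula explains why the memory-optimized choice works: both db.m6g.8xlarge and db.r6g.4xlarge carry 128 GiB of memory, so halving vCPUs leaves the connection ceiling untouched. A quick sketch of the calculation (the helper function is ours, not an AWS API):

```python
def max_connections(instance_memory_gib: float) -> int:
    """RDS MariaDB default: GREATEST(DBInstanceClassMemory / 31457280, 10),
    where DBInstanceClassMemory is the instance memory in bytes."""
    memory_bytes = instance_memory_gib * 1024 ** 3
    return max(int(memory_bytes / 31457280), 10)

# db.m6g.8xlarge (32 vCPU, 128 GiB) and db.r6g.4xlarge (16 vCPU, 128 GiB)
# have the same memory, so the default connection limit is preserved.
print(max_connections(128))
```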
Phase 3: Increasing RDS Efficiency
Further analysis showed our RDS instances used only about 2,000 of their available database connections and 40% of their storage. By increasing the number of EC2 instances assigned to each RDS instance by 10%, we raised connection and storage utilization by 10–15% while staying within safe limits, cutting RDS costs by another 10%. Additionally, we configured Multi-AZ RDS only for production and switched staging to Single-AZ, saving a further 5%.
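The headroom check behind this packing decision can be sketched as a small function. The sample numbers and the 80% safety threshold are assumptions for illustration, not our production values.

```python
def packing_plan(current_ec2: int, used_connections: int, conn_limit: int,
                 conns_per_ec2: float, increase_pct: float = 0.10):
    """Project connection usage after adding `increase_pct` more EC2 tenants
    to an RDS instance, and flag whether it stays under a safety margin."""
    extra = int(current_ec2 * increase_pct)
    projected = used_connections + extra * conns_per_ec2
    safe = projected <= 0.80 * conn_limit  # assumed 80% safety threshold
    return extra, projected, safe

# e.g. 50 tenants today, 2,000 connections in use, ~40 connections each:
extra, projected, safe = packing_plan(50, 2000, 4369, 40)
print(extra, projected, safe)
```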
Phase 4: EC2 Optimization
For EC2, we replaced x86-based c5.2xlarge instances with AMD-based equivalents, saving 9% with no performance impact. By externalizing a Java process to a container, we reduced CPU usage to 15–20%, enabling downsizing to m5.xlarge (4 vCPUs, 16 GB). In staging, we implemented Lambda functions to shut down non-essential instances on weekends, unless tagged for continuous operation, saving 8%.
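A minimal sketch of the weekend-shutdown Lambda follows. The tag name (`AlwaysOn`) and schedule are illustrative assumptions; the filtering logic is kept as a pure function so it can be tested without AWS credentials.

```python
def instances_to_stop(reservations, keep_tag="AlwaysOn"):
    """Pure filter: collect running instance IDs that lack the keep-alive tag."""
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get(keep_tag, "").lower() != "true":
                ids.append(inst["InstanceId"])
    return ids

def lambda_handler(event, context):
    """Invoked by a scheduled EventBridge rule, e.g. cron(0 0 ? * SAT *)."""
    import boto3  # imported here so the filter above stays testable offline
    ec2 = boto3.client("ec2")
    to_stop = []
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        to_stop += instances_to_stop(page["Reservations"])
    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return {"stopped": len(to_stop)}
```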
Summary of Savings
| Phase | Strategy | Estimated Savings |
| --- | --- | --- |
| 1 | Compute Savings Plans & RDS RIs | 15% |
| 2 | RDS Downsizing | 30% |
| 3 | Increased RDS Load & Single-AZ | 15% |
| 4 | EC2 AMD, Downsizing, Shutdowns | 17% |
| **Total** | | >50% |
Next Steps
Cloud cost optimization is ongoing. We are testing m5.xlarge instances and planning a migration to AWS Graviton processors, which promise an additional 10–15% in savings along with improved performance. This requires recompiling and redeploying our Java, C, and Perl/Python processes, a significant effort that is already underway.
AWS Support
AWS’s Migration Acceleration Program (MAP) provided discounts and expert guidance. AWS teams used tools like AWS Cost Explorer and Trusted Advisor to recommend optimizations, enhancing our efforts.
Key Takeaways
Mindset Shift: Cloud migration requires a cost-conscious approach. Every dollar saved matters.
Monitoring: Robust dashboards (e.g., CloudWatch, Datadog) are critical for informed downsizing decisions.
Collaboration: Work closely with AWS teams for recommendations and discounts.
Continuous Improvement: Optimization is never complete—regularly analyze bills and top services.
We encourage you to share your cost optimization experiences in the comments. What strategies have worked for you? Have you explored Graviton processors or other AWS tools? Let’s learn from each other!
[^1]: Maximum database connections depend on instance memory. For MariaDB: `max_connections = GREATEST({DBInstanceClassMemory / 31457280}, 10)`.
P.S.: This post was polished with the assistance of Grok, created by xAI.