Building on the Cloud: AWS Services and a Hands-On Java App Deployment


Introduction
Hello there! 👋 The past week marked Week 7 of my DevOps journey. This time, I started by revisiting Computer Networks concepts, as it had been a while since I last looked at them. I then continued my AWS learning by exploring services like EFS, Auto Scaling Groups, S3, and RDS. To revise and implement my previous learnings, I built a project called Vprofile, where I used AWS services to deploy a Java-based application with EC2, S3, Load Balancers, Route 53, and Auto Scaling Groups.
Exploring AWS Storage Services
Elastic File System (EFS)
AWS has many options for storage, and as a DevOps learner, I should be proficient with these storage services. The first one is EFS, or Elastic File System. It is a fully managed, scalable NFS file system that can be mounted on multiple EC2 instances simultaneously. EFS is like a virtual hard disk that stores our data and shares it across EC2 instances, Lambda, ECS, EKS, etc. Some of its key features are-
Shared file storage for Linux workloads
Elastic & Scalable: Automatically grows/shrinks as files are added/removed
Supports POSIX-compliant permissions (Linux-style)
Accessible across multiple AZs
Use Cases: Web servers, CMS, containers (EKS, ECS), CI/CD, ML, shared file access
After covering some theoretical knowledge of EFS, I jumped into how to create and mount one in an EC2 instance. I followed these high-level steps to accomplish this-
Create an EFS File System in the AWS Management Console
Create Access Point for EFS (Recommended)
Configure the Security Group to allow an NFS type inbound rule for port 2049
Configure EC2 Instance
For Amazon Linux 2 / Amazon Linux AMI
sudo yum update -y
sudo yum install -y amazon-efs-utils
For Debian/Ubuntu (amazon-efs-utils is not in the default apt repositories, so it is built from source)
git clone https://github.com/aws/efs-utils
cd efs-utils
./build-deb.sh
sudo apt update
sudo apt install -y ./build/amazon-efs-utils*deb
Create a Mount Directory inside the EC2 instance (e.g., /mnt/efs)
Mount EFS Temporarily (Test)
sudo mount -t efs -o tls,accesspoint=access_point_id efs_file_system_id:/ /mnt/efs
Verify using:
df -h
Mount Persistently Using /etc/fstab
Test and check using:
sudo mount -a
df -h
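The persistent-mount step above comes down to one line in /etc/fstab. A minimal sketch, using the efs mount helper and placeholder IDs (fs-12345678 and fsap-12345678 are hypothetical; substitute your own file system and access point IDs):

```sh
# Append the fstab entry (one line): file system, mount point, "efs" type,
# network-dependent mount with TLS via the given access point
echo 'fs-12345678:/ /mnt/efs efs _netdev,tls,accesspoint=fsap-12345678 0 0' | sudo tee -a /etc/fstab

# Mount everything listed in fstab and confirm the EFS mount is live
sudo mount -a
df -h /mnt/efs
```

The _netdev option matters: it tells the OS to wait for networking before mounting, so the instance doesn't hang at boot trying to reach EFS.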
Simple Storage Service (S3)
After EFS, I moved to the Simple Storage Service (S3). It is a service that stores files of different types, like photos, audio, and videos, as objects, providing high scalability and security. The main components of S3 are the Bucket, the Key, and the Object. S3 has many use cases, like-
Static website hosting
Backup and restore
Data archiving
Big data analytics
Disaster recovery
and many more. The thing I liked about S3 is how simple it is to get started- there's not much hassle in using it. I also learned about the different S3 storage classes.
I also learned about versioning, access control using bucket policies, IAM policies, and ACLs, and Lifecycle policies, which configure the transition of objects between storage classes based on their age. I also learned to host a static website using S3 buckets and how disaster recovery is done using replication rules.
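To make the lifecycle idea concrete, here is a sketch of a lifecycle configuration (the bucket name, prefix, and day counts are made-up examples): objects under artifacts/ move to Standard-IA after 30 days, to Glacier after 90, and are deleted after a year.

```json
{
  "Rules": [
    {
      "ID": "archive-old-artifacts",
      "Status": "Enabled",
      "Filter": { "Prefix": "artifacts/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Saved as lifecycle.json, this could be applied with the AWS CLI: aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket --lifecycle-configuration file://lifecycle.json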
Relational Database Service (RDS)
The next storage service was RDS, or Relational Database Service, and unlike S3, it wasn't easy for me to grasp. It is a managed relational database service supporting engines such as MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and Amazon Aurora. Some of the key features I came to know-
Automated Backups: Point‑in‑time recovery with daily snapshots and transaction logs
Multi‑AZ Deployments: Synchronous standby in a separate Availability Zone for failover
Read Replicas: Scale read workloads and offload reporting
Storage Auto Scaling: Automatically increase storage when you approach capacity limits
Encryption at Rest & In Transit: AWS KMS–managed keys or customer‑managed keys
Monitoring & Metrics: Integration with CloudWatch, enhanced monitoring, Performance Insights
Maintenance Windows: Automatic minor version upgrades during a defined time window
It has many use cases in real-world applications, like web and mobile applications that require a relational schema, or analytics and reporting with read-heavy queries. I learned how to create an RDS instance, configure it, and delete it, and picked up a few best practices for RDS along the way.
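The create/delete flow I practiced can also be done from the AWS CLI. A hedged sketch, assuming a small lab setup (the identifier, instance class, and credentials below are all placeholders; this needs an AWS account and will incur charges):

```sh
# Create a small private MySQL instance with 7 days of automated backups
aws rds create-db-instance \
  --db-instance-identifier vprofile-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'REPLACE_ME' \
  --no-publicly-accessible \
  --backup-retention-period 7

# Tear it down when finished; skipping the final snapshot is fine for a lab,
# but never for production data
aws rds delete-db-instance \
  --db-instance-identifier vprofile-db \
  --skip-final-snapshot
```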
Project Highlight: Vprofile Deployment on AWS
And to practice and implement a few services which I learned, I made a project to deploy a multi-tier Java application. Building a project is always the fun part of learning (till you encounter an unsolvable error and lose all your motivation 🙂). The services I used for this project are-
EC2 Instances - VMs for Tomcat, RabbitMQ, Memcached, MySQL
Elastic Load Balancer (ELB) - Nginx load balancer replacement
Auto Scaling Groups - Automation for VM scaling
S3/EFS Storage - Shared storage
Route 53 - Private DNS service
IAM - To create a user and a role for S3 full access
The architecture of the project-
The user accesses the application via a custom domain registered on GoDaddy, which routes traffic through an HTTPS-enabled Elastic Load Balancer (ELB). The ELB forwards requests on port 8080 to an Auto Scaling Group of EC2 instances, ensuring high availability and performance. These instances interact with backend services, including RabbitMQ for messaging, MySQL for database operations, and Memcached for caching, all hosted on separate EC2 instances within a secure group. The setup also integrates Route 53 for DNS management (with Private Zones) and uses S3 buckets to store application artifacts, making the deployment both scalable and modular.
While there's a lot more depth to how each service was integrated and managed, for now, I’ll focus on outlining the key steps of the overall process to give a clear high-level view.
Steps involved-
Set Up Security Groups
Created separate security groups for the Load Balancer, Tomcat app server, and backend services (MySQL, Memcached, RabbitMQ). Each group had tightly scoped rules- like allowing port 8080 access only from the ELB SG, or enabling internal backend communication via private IPs and ports.
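The "allow 8080 only from the ELB SG" pattern can be sketched with the AWS CLI. All IDs and names below are hypothetical; in the console this is the same as choosing a source security group instead of a CIDR:

```sh
# Existing ELB security group (placeholder ID)
ELB_SG=sg-0elb11111111111111

# Create the app-server security group and capture its ID
APP_SG=$(aws ec2 create-security-group \
  --group-name vprofile-app-sg \
  --description "Tomcat app servers" \
  --vpc-id vpc-0abc123 \
  --query GroupId --output text)

# Allow port 8080 only from instances in the ELB security group,
# not from the open internet
aws ec2 authorize-security-group-ingress \
  --group-id "$APP_SG" \
  --protocol tcp --port 8080 \
  --source-group "$ELB_SG"
```

Referencing a source security group rather than an IP range keeps the rule valid even as instances behind the ELB come and go.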
Create a Key Pair for SSH Access
Generated an AWS key pair to securely SSH into EC2 instances during setup and troubleshooting.
Launch EC2 Instances for Backend Services
Used the Amazon Linux AMI to launch instances for MySQL, Memcached, and RabbitMQ. Applied startup scripts from the vprofile repo during instance creation and attached the appropriate backend security group.
Configure Private DNS with Route 53
Created a Private Hosted Zone in Route 53 to map backend service hostnames (e.g., db01.vprofile.in) to their private IPs. This made service-to-service communication IP-independent.
Test DNS Resolution
Verified hostname resolution by SSHing into the Tomcat instance and pinging the backend services by their DNS names (e.g., ping -c 4 db01.vprofile.in).
Build the Java Artifact Using Maven
Cloned the vprofile source code locally, built the .war file using mvn install, and verified the build in the target directory.
Set Up S3 for Artifact Storage
Created an S3 bucket to store the built .war file and used the AWS CLI to upload the artifact. An IAM user with S3 access was configured locally for this step.
Assign IAM Role to Tomcat EC2 Instance
Created an IAM role with full S3 access and attached it to the Tomcat instance so it could pull the .war file directly during deployment without stored AWS credentials.
Deploy the Application on Tomcat
Pulled the .war from S3 onto the Tomcat EC2 instance, replaced the default webapps directory with the new artifact, and restarted the Tomcat service to apply the changes.
Set Up Application Load Balancer (ALB)
Created a target group pointing to the Tomcat instance on port 8080 and attached it to a newly created ALB. Enabled both HTTP and HTTPS listeners (HTTPS when an SSL certificate is present). The ALB DNS name served as the public entry point.
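The artifact pull-and-deploy step on the Tomcat instance might look roughly like this. A sketch only- the bucket name, artifact name, Tomcat install path, and service name are all assumptions and vary by setup (the instance's IAM role supplies the S3 credentials, so no keys are needed on the box):

```sh
# Fetch the built artifact from S3 (bucket/key are placeholders)
aws s3 cp s3://vprofile-artifact-bucket/vprofile-v2.war /tmp/vprofile-v2.war

# Swap out the default ROOT application and redeploy
sudo systemctl stop tomcat
sudo rm -rf /usr/local/tomcat/webapps/ROOT /usr/local/tomcat/webapps/ROOT.war
sudo cp /tmp/vprofile-v2.war /usr/local/tomcat/webapps/ROOT.war
sudo systemctl start tomcat
```

Tomcat auto-expands ROOT.war on startup, so the app is served at the context root rather than under /vprofile-v2/.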
Test Application via Load Balancer
Verified successful deployment by accessing the app through the ELB DNS URL. Ensured all backend services were communicating correctly through configured hostnames.
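A quick way to script this check from any machine (the ELB DNS name below is a placeholder for whatever the console shows):

```sh
# Expect an HTTP 200 status line from the app behind the load balancer
curl -sI http://vprofile-elb-1234567890.us-east-1.elb.amazonaws.com/ | head -n 1
```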
Configure Auto Scaling with Launch Template
Created an AMI from the configured Tomcat instance, set up a launch template with required settings and IAM role, and finally created an Auto Scaling Group to automatically manage Tomcat app server instances.
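The launch template and Auto Scaling Group steps above can be sketched with the AWS CLI too. Every name, AMI ID, subnet, and ARN here is hypothetical; the AMI is assumed to be the one baked from the configured Tomcat instance:

```sh
# Launch template: the baked Tomcat AMI plus the S3-access instance profile
aws ec2 create-launch-template \
  --launch-template-name vprofile-app-lt \
  --launch-template-data '{"ImageId":"ami-0abc123","InstanceType":"t2.micro","IamInstanceProfile":{"Name":"vprofile-s3-role"}}'

# ASG spanning two subnets, registered with the ALB target group so new
# instances start receiving traffic automatically
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name vprofile-app-asg \
  --launch-template LaunchTemplateName=vprofile-app-lt \
  --min-size 1 --max-size 3 --desired-capacity 1 \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/vprofile-tg/abc123 \
  --vpc-zone-identifier "subnet-0aaa,subnet-0bbb"
```

Attaching the target group ARN is what closes the loop: scaled-out instances register with the ALB without any manual step.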
For now, I skipped purchasing a domain name and configuring the ELB DNS name with it. Doing so would have given this project a more realistic approach, but that can be done in future projects.
Challenges I faced-
1️⃣ Difficulty in understanding RDS instances
- I couldn't grasp the concept of RDS instantly. I learned how to create an RDS instance and how to modify/delete it, but I still have to figure out how to use it in a real application.
Solution - I will try to create a mini project to implement and understand the flow of how RDS is used.
2️⃣ SSL-related error mid-upload when using the AWS CLI
- When I was trying to copy the artifact from my local system to the S3 bucket using the AWS CLI, I kept getting an SSL-related error, and my connection to the S3 bucket kept getting interrupted. I thought it was a Git Bash issue since I was using Bash, but I got the same error in PowerShell.
Solution - Since the connection kept dropping mid-upload, I figured it could be a network-related issue. I switched from my hostel Wi-Fi to my personal hotspot, and the artifact uploaded successfully without errors.
What’s Next?
My end-of-semester exams start next week, so it will be difficult to keep learning new services and publishing articles. Instead, I will go through all the AWS services I have learnt so far and revise them.
Let’s Connect!
🔗 My LinkedIn 🔗 My GitHub
If you have any recommended resources, better approaches to my challenges, or insights, I’d love to hear them! Drop your thoughts in the comments.
Have a wonderful day!
Written by
Akshansh Singh
Driven by curiosity and a continuous learning mindset, always exploring and building new ideas.