011 - Mastering the Cloud: A DevOps Guide to AWS Basics & CLI Automation

Hamza Iqbal

Welcome to our DevOps series! In this installment, we dive into the world of Amazon Web Services (AWS), the leading cloud platform. We'll cover fundamental AWS concepts, get hands-on with the AWS Command Line Interface (CLI), and culminate in a practical example of deploying a Java application to an EC2 instance using Jenkins.

Part 1: Understanding the AWS Landscape

1. Introduction to AWS

Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon. It offers a broad set of global compute, storage, database, analytics, application, and deployment services that help organizations move faster, lower IT costs, and scale applications. Whether you're looking for virtual servers, serverless computing, or managed databases, AWS has a service for it.

Why AWS for DevOps?

  • Scalability & Elasticity: Easily scale resources up or down based on demand.

  • Automation: Rich APIs and tools (like the CLI) allow for automation of infrastructure and deployments.

  • Pay-as-you-go: Only pay for what you use, reducing upfront capital expenditure.

  • Global Infrastructure: Deploy applications closer to your users for lower latency.

  • Rich Service Ecosystem: Integrates well with common DevOps tools and practices.

2. Creating Your AWS Account

Getting started with AWS is straightforward.

  1. Go to the AWS Free Tier page.

  2. Click on "Create a Free Account."

  3. Follow the on-screen instructions. You'll need to provide an email address, password, contact information, and a valid credit card (it won't be charged for Free Tier usage within limits, but it is required for verification and for any usage beyond the Free Tier).

  4. You'll also need to verify your phone number.

Important Note on Free Tier: The AWS Free Tier allows you to explore and try out AWS services free of charge up to specified limits for 12 months (for many services) or always free (for some services). Be mindful of these limits to avoid unexpected charges.

3. IAM: Managing Access and Permissions

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

Best Practice: Never use your root account for daily tasks. Instead, create an IAM user with administrative privileges.

Creating an Admin IAM User for CLI Access (via AWS Console):

  1. Sign in to the AWS Management Console using your root account (the email you used to sign up).

  2. Navigate to the IAM service.

  3. In the navigation pane, click Users, then click Add users.

  4. User name: Enter a name, e.g., admin-cli-user.

  5. Select AWS credential type: Check Access key - Programmatic access. This generates an Access Key ID and a Secret Access Key for CLI/API use. (In newer versions of the IAM console, access keys are instead created after the user exists, under the user's Security credentials tab.)

  6. Click Next: Permissions.

  7. Select Attach existing policies directly.

  8. Search for and check the box next to AdministratorAccess. This policy grants full access to all AWS services and resources.

    • Security Note: For production, always follow the principle of least privilege. AdministratorAccess is used here for simplicity in setting up an initial admin user.
  9. Click Next: Tags (optional, you can skip).

  10. Click Next: Review.

  11. Click Create user.

  12. Crucial Step: On the success screen, you'll see the Access key ID and Secret access key. Click Download .csv to save these credentials. This is your only opportunity to view and download the secret access key. Store them securely!

    • Example Credentials (DO NOT USE THESE, THEY ARE EXAMPLES):

      • Access key ID: AKIAIOSFODNN7EXAMPLE

      • Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

You'll use these credentials to configure the AWS CLI later.

4. Regions and Availability Zones (AZs)

  • Regions: AWS has data centers in various physical locations worldwide, known as Regions (e.g., us-east-1 for N. Virginia, eu-west-2 for London). When you launch resources, you choose a Region. This allows you to place your resources closer to your users or meet specific legal/compliance requirements.

  • Availability Zones (AZs): Each Region consists of multiple, isolated, and physically separate data centers called Availability Zones. AZs are connected with low-latency, high-throughput, and highly redundant networking. Designing your applications to run across multiple AZs provides fault tolerance and high availability. If one AZ fails, your application can continue running in another.
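
Once you have the AWS CLI configured (we cover that in Part 2), you can explore this layout directly from the terminal. A minimal sketch using two standard EC2 API calls:

# List all Regions available to your account
aws ec2 describe-regions --output table
# List the AZs within a specific Region
aws ec2 describe-availability-zones --region us-east-1 --output table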

5. VPC, CIDR, and Subnets: Your Private Network in the Cloud

  • Virtual Private Cloud (VPC): A VPC is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

  • CIDR (Classless Inter-Domain Routing): When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a CIDR block; for example, 10.0.0.0/16. This block represents the primary IP address range for your VPC.

  • Subnets: A subnet is a range of IP addresses in your VPC. You can launch AWS resources, like EC2 instances, into a specific subnet.

    • Public Subnets: Subnets whose traffic is routed to an internet gateway. Instances in public subnets can directly access the internet.

    • Private Subnets: Subnets that don't have a direct route to an internet gateway. Instances in private subnets require a NAT Gateway/Instance to access the internet.

You typically divide your VPC's CIDR block into smaller CIDR blocks for each subnet. For example, if your VPC CIDR is 10.0.0.0/16, you could create a public subnet 10.0.1.0/24 and a private subnet 10.0.2.0/24.
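
If you prefer the command line, here is a minimal sketch of that layout using the AWS CLI. The VPC ID below is a placeholder for the value returned by create-vpc, and a genuinely public subnet would additionally need an internet gateway and a route table entry, which are omitted here for brevity:

# Create the VPC with the example CIDR block
aws ec2 create-vpc --cidr-block 10.0.0.0/16
# Create the public and private subnets inside it
# (replace vpc-0123456789abcdef0 with the VpcId returned above)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24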

6. EC2: Virtual Servers in the Cloud

Amazon Elastic Compute Cloud (EC2) provides scalable computing capacity in the AWS Cloud. It's essentially virtual servers (instances) that you can rent.

Creating an EC2 Instance (Free Tier Example via AWS Console):

Let's launch a basic Amazon Linux 2 instance.

  1. Navigate to the EC2 service in the AWS Console.

  2. Ensure you are in your desired Region (e.g., N. Virginia us-east-1).

  3. Click Launch instances.

  4. Name and tags (Optional but good practice):

    • Name: my-devops-server
  5. Application and OS Images (Amazon Machine Image - AMI):

    • Search for Amazon Linux.

    • Select Amazon Linux 2 AMI (HVM) - SSD Volume Type (usually the first one, marked "Free tier eligible").

  6. Instance type:

    • Select t2.micro (marked "Free tier eligible").
  7. Key pair (login):

    • This is crucial for SSH access. Click Create new key pair.

    • Key pair name: my-ec2-key

    • Key pair type: RSA

    • Private key file format: .pem (for OpenSSH on Linux/macOS) or .ppk (for PuTTY on Windows). Choose .pem.

    • Click Create key pair. Your browser will download my-ec2-key.pem. Store this file securely; you'll need it to SSH into your instance.

  8. Network settings:

    • Click Edit.

    • VPC: Your default VPC is usually fine for this example.

    • Subnet: Choose any subnet (or "No preference" to let AWS pick one in an AZ).

    • Auto-assign public IP: Ensure it's set to Enable.

    • Firewall (security groups):

      • Select Create security group.

      • Security group name: my-devops-sg

      • Description: Security group for DevOps server

      • Inbound security group rules:

        • By default, it might have an SSH rule. If not, click Add security group rule.

          • Type: SSH

          • Protocol: TCP

          • Port range: 22

          • Source type: My IP (more secure; it auto-fills your current public IP, but you'll need to update the rule whenever your IP changes). For broader access (less secure), you can choose Anywhere (0.0.0.0/0).

        • Click Add security group rule again for our application:

          • Type: Custom TCP

          • Protocol: TCP

          • Port range: 8080 (since our Java app will run on this port)

          • Source type: Anywhere (0.0.0.0/0) (so anyone can access the web app)

  9. Configure storage:

    • Default (e.g., 8 GiB gp2) is fine and within the Free Tier.
  10. Advanced details:

    • Leave defaults for now.
  11. Summary: Review your settings on the right.

  12. Click Launch instance.

It will take a few minutes for the instance to launch and pass status checks. Once Status Checks shows 2/2 checks passed, you can connect to it.
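
For reference, the same launch can be scripted with the AWS CLI instead of the console. This is an illustrative sketch, assuming the my-ec2-key key pair and my-devops-sg security group from the steps above already exist in your default VPC; the AMI ID is a placeholder, since Amazon Linux 2 AMI IDs differ per Region:

# Launch a t2.micro instance (replace the AMI ID with the current
# Amazon Linux 2 AMI for your Region)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name my-ec2-key \
    --security-groups my-devops-sg \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-devops-server}]'
# Look up the instance's public IP once it is running
aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=my-devops-server" \
    --query "Reservations[].Instances[].PublicIpAddress" \
    --output text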

Connecting to your EC2 Instance (Linux/macOS Example):

  1. Find your instance in the EC2 console, select it, and note its Public IPv4 address.

  2. Open your terminal.

  3. Navigate to the directory where you saved my-ec2-key.pem.

  4. Change permissions of the key file: chmod 400 my-ec2-key.pem

  5. Connect using SSH:

     ssh -i "my-ec2-key.pem" ec2-user@YOUR_INSTANCE_PUBLIC_IP
    

    Replace YOUR_INSTANCE_PUBLIC_IP with the actual IP address. The default username for Amazon Linux 2 is ec2-user.

You are now connected to your EC2 instance! For our Jenkins deployment, you'll need to install Docker and Docker Compose on this instance:

# On the EC2 instance
sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user # Add ec2-user to docker group to run docker without sudo
# For Docker Compose (this installs the standalone v1 binary; check the releases
# page for the latest version; newer Docker setups ship Compose V2 as a plugin,
# invoked as `docker compose`)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# You might need to log out and log back in for group changes to take effect
# newgrp docker # Or log out and back in
docker --version
docker-compose --version

Part 2: AWS CLI - Your Command Line Superpower

The AWS Command Line Interface (AWS CLI) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell.

1. Installing the AWS CLI

The installation process varies by OS. Refer to the official AWS documentation: Installing or updating the latest version of the AWS CLI

A common method for Linux/macOS:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

2. Configuring the AWS CLI (User Profile)

Once installed, you need to configure it with the credentials of the admin-cli-user (or any IAM user with programmatic access) you created earlier.

Run aws configure:

aws configure

It will prompt you for:

  • AWS Access Key ID: [YOUR_ACCESS_KEY_ID_FROM_CSV]

  • AWS Secret Access Key: [YOUR_SECRET_ACCESS_KEY_FROM_CSV]

  • Default region name: [e.g., us-east-1] (Choose the region you primarily work in)

  • Default output format: [json] (or text, table)

This creates a default profile. You can also create named profiles, which are highly recommended for managing multiple accounts or roles:

aws configure --profile my-admin-profile
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json

To use a named profile, you append --profile my-admin-profile to your AWS CLI commands.
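
A quick way to verify that a profile works is to ask AWS who the credentials belong to:

# Should print the account ID and the ARN of your admin-cli-user
aws sts get-caller-identity --profile my-admin-profile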

3. AWS CLI Examples: Managing IAM Users & Permissions

Let's use the CLI to create a new IAM user, a group, assign permissions, and generate access keys for this new user. This new user could be one with more restricted permissions, for example, one that Jenkins will use.

Assume you have your admin-cli-user's credentials configured (either as default or a named profile like my-admin-profile). We'll use --profile my-admin-profile in the examples. If you configured as default, you can omit this flag.

  1. Create a new IAM user:

     aws iam create-user --user-name jenkins-deployer --profile my-admin-profile
    

    Output (example):

     {
         "User": {
             "Path": "/",
             "UserName": "jenkins-deployer",
             "UserId": "AIDAEXAMPLEUSERID",
             "Arn": "arn:aws:iam::123456789012:user/jenkins-deployer",
             "CreateDate": "2023-10-27T10:00:00Z"
         }
     }
    
  2. Create an IAM group:

     aws iam create-group --group-name ec2-deployment-group --profile my-admin-profile
    

    Output (example):

     {
         "Group": {
             "Path": "/",
             "GroupName": "ec2-deployment-group",
             "GroupId": "AGPAEXAMPLEGROUPID",
             "Arn": "arn:aws:iam::123456789012:group/ec2-deployment-group",
             "CreateDate": "2023-10-27T10:01:00Z"
         }
     }
    
  3. Assign EC2 full access permission to the group: We'll use the AWS managed policy AmazonEC2FullAccess. You can find ARNs for managed policies in the IAM console or documentation.

     aws iam attach-group-policy \
         --group-name ec2-deployment-group \
         --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess \
         --profile my-admin-profile
    

    (No output on success for this command.) Note: For true least privilege, you'd create a custom policy with only the exact permissions needed for deployment (e.g., describe instances, manage specific tags), not full EC2 access; a minimal sketch follows after this list.

  4. Add the user to the group:

     aws iam add-user-to-group \
         --user-name jenkins-deployer \
         --group-name ec2-deployment-group \
         --profile my-admin-profile
    

    (No output on success for this command)

  5. Create an access key for the jenkins-deployer user: This is what Jenkins (or another automated tool) would use.

     aws iam create-access-key --user-name jenkins-deployer --profile my-admin-profile
    

    Output (example):

     {
         "AccessKey": {
             "UserName": "jenkins-deployer",
             "AccessKeyId": "AKIANEWACCESSKEYIDEX",
             "Status": "Active",
             "SecretAccessKey": "THISIsTheSecretKeyForJenkinsUserEXAMPLE",
             "CreateDate": "2023-10-27T10:05:00Z"
         }
     }
    

    Store this AccessKeyId and SecretAccessKey securely! You would typically store these as credentials within Jenkins.

Now, you have a jenkins-deployer user that has permissions to manage EC2 resources, and you have an access key pair for this user.
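
You can verify the wiring with two more read-only calls:

# Confirm the user is in the group
aws iam list-groups-for-user --user-name jenkins-deployer --profile my-admin-profile
# Confirm the policy is attached to the group
aws iam list-attached-group-policies --group-name ec2-deployment-group --profile my-admin-profile

And, as noted in step 3, a tighter alternative to AmazonEC2FullAccess is a custom policy. A minimal sketch; the action list here is illustrative and should be scoped to what your pipeline actually does:

aws iam create-policy \
    --policy-name ec2-deploy-minimal \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances", "ec2:DescribeInstanceStatus"],
        "Resource": "*"
      }]
    }' \
    --profile my-admin-profile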

Part 3: DevOps in Action - Deploying to EC2 with Jenkins

Now for the exciting part: using Jenkins to build a Java application, containerize it with Docker, and deploy it to the EC2 instance we created earlier.

Prerequisites:

  1. Jenkins Server: A running Jenkins instance.

  2. Plugins on Jenkins:

    • Pipeline (usually installed by default)

    • Git

    • Maven Integration

    • Docker Pipeline

    • SSH Agent

    • Credentials Binding

    • Pipeline: Shared Groovy Libraries (if using a Jenkins Shared Library, as in the example)

  3. Tools Configured in Jenkins (Manage Jenkins -> Global Tool Configuration):

    • Maven: Add a Maven installation (e.g., name it Maven).

    • Git: Usually auto-detected.

    • Docker: Jenkins needs to be able to execute Docker commands (either Docker installed on the Jenkins agent/master, or configure a Docker cloud).

  4. Credentials in Jenkins (Manage Jenkins -> Credentials):

    • gitlab-credentials: Username/Password or Access Token for GitLab to pull code and push version bumps. (ID: gitlab-credentials)

    • ec2-server-key: The private key (my-ec2-key.pem content) for SSHing into your EC2 instance. (ID: ec2-server-key, Type: "SSH Username with private key", Username: ec2-user, Private Key: Enter directly).

    • (Optional: Docker Hub credentials if pushing to a private Docker Hub repo).

  5. EC2 Instance Ready:

    • The EC2 instance (my-devops-server) created earlier, with Docker and Docker Compose installed.

    • The public key corresponding to ec2-server-key (from my-ec2-key.pem) should be in ~/.ssh/authorized_keys on the EC2 instance for the ec2-user. This was done when you created the key pair and launched the instance with it.

    • Security group allowing SSH (port 22) from Jenkins server and HTTP (port 8080) from anywhere.

  6. Project Code: The Java application from https://gitlab.com/twn-devops-bootcamp/latest/09-aws/java-maven-app/-/tree/jenkins-jobs?ref_type=heads. This project should contain:

    • Jenkinsfile

    • pom.xml (for Maven)

    • Dockerfile

    • docker-compose.yaml

    • server-cmds.sh

The Jenkinsfile Explained

This Jenkinsfile defines a declarative pipeline to build, test (implicitly via build), package, containerize, and deploy the Java application.

#!/usr/bin/env groovy

// Use a Jenkins Shared Library for reusable functions like buildJar(), buildImage(), etc.
library identifier: 'jenkins-shared-library@master', retriever: modernSCM(
    [$class: 'GitSCMSource',
    remote: 'https://gitlab.com/twn-devops-bootcamp/latest/09-aws/jenkins-shared-library.git', // URL of the shared library
    credentialsId: 'gitlab-credentials' // Credentials to access the shared library repo
    ]
)

pipeline {
    agent any // Run on any available Jenkins agent

    tools {
        maven 'Maven' // Make the Maven tool named 'Maven' (configured in Global Tool Config) available
    }

    stages {
        stage('increment version') {
            steps {
                script { // Allows running Groovy script steps
                    echo 'incrementing app version...'
                    // Use Maven Build Helper plugin to parse current version and set a new incremental version
                    sh 'mvn build-helper:parse-version versions:set \
                        -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} \
                        versions:commit'
                    // Read the updated version from pom.xml
                    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
                    def version = matcher[0][1]
                    // Set an environment variable for the image name, incorporating version and build number
                    env.IMAGE_NAME = "$version-$BUILD_NUMBER" // e.g., 1.1.1-5
                }
            }
        }

        stage('build app') {
            steps {
                echo 'building application jar...'
                // buildJar() is a custom function from the shared library.
                // It likely runs 'mvn clean package' or similar.
                buildJar()
            }
        }

        stage('build image') {
            steps {
                script {
                    echo 'building the docker image...'
                    // buildImage(imageName) is a custom function from the shared library.
                    // It likely runs 'docker build -t imageName .'
                    buildImage(env.IMAGE_NAME)
                    // dockerLogin() is from shared library, logs into a Docker registry (e.g., Docker Hub).
                    // May require Docker Hub credentials configured in Jenkins.
                    dockerLogin()
                    // dockerPush(imageName) is from shared library, pushes the image to the registry.
                    dockerPush(env.IMAGE_NAME)
                }
            }
        }

        stage("deploy") {
            steps {
                script {
                    echo 'deploying docker image to EC2...'

                    // Command to be executed on the EC2 server
                    def shellCmd = "bash ./server-cmds.sh ${IMAGE_NAME}"
                    // EC2 instance details (replace with your EC2 instance's public IP or DNS)
                    def ec2Instance = "ec2-user@18.184.54.160" // <<-- IMPORTANT: REPLACE THIS IP

                    // Use sshagent to securely handle SSH keys for connecting to EC2
                    sshagent(['ec2-server-key']) { // 'ec2-server-key' is the ID of the SSH credential in Jenkins
                        // Copy deployment script to EC2
                        sh "scp -o StrictHostKeyChecking=no server-cmds.sh ${ec2Instance}:/home/ec2-user"
                        // Copy docker-compose file to EC2
                        sh "scp -o StrictHostKeyChecking=no docker-compose.yaml ${ec2Instance}:/home/ec2-user"
                        // Execute the deployment script on EC2
                        sh "ssh -o StrictHostKeyChecking=no ${ec2Instance} ${shellCmd}"
                    }
                }
            }
        }

        stage('commit version update'){ // Commit the pom.xml with the bumped version back to Git
            steps {
                script {
                    // Use GitLab credentials to authenticate git operations
                    withCredentials([usernamePassword(credentialsId: 'gitlab-credentials', passwordVariable: 'PASS', usernameVariable: 'USER')]){
                        sh 'git config --global user.email "jenkins@example.com"' // Configure git user
                        sh 'git config --global user.name "Jenkins CI"'
                        sh 'git remote set-url origin https://$USER:$PASS@gitlab.com/twn-devops-bootcamp/latest/09-aws/java-maven-app.git'
                        sh 'git add pom.xml' // Stage only pom.xml (or 'git add .' if other files changed by build)
                        sh 'git commit -m "ci: version bump [skip ci]"' // [skip ci] to prevent build loop
                        sh 'git push origin HEAD:jenkins-jobs' // Push to jenkins-jobs branch
                    }
                }
            }
        }
    }
}

Key Points in Jenkinsfile:

  • Shared Library: Promotes DRY (Don't Repeat Yourself) by centralizing common pipeline steps.

  • Version Increment: Automatically bumps the patch version of the application.

  • Environment Variables: env.IMAGE_NAME is used to tag the Docker image uniquely.

  • sshagent: Securely handles SSH credentials for scp and ssh commands to the EC2 instance.

  • withCredentials: Securely injects GitLab credentials for git push.

  • EC2 Instance IP: def ec2Instance = "ec2-user@18.184.54.160" MUST BE REPLACED with your EC2 instance's actual public IP or DNS name (e.g., ec2-user@your-ec2-public-ip.compute-1.amazonaws.com).

docker-compose.yaml Explained

This file defines the services that make up your application.

version: '3.8' # Specifies the Docker Compose file format version
services:
  java-maven-app: # Defines a service named 'java-maven-app'
    image: ${IMAGE} # Uses an environment variable 'IMAGE' for the Docker image name.
                    # This will be set by server-cmds.sh using the IMAGE_NAME from Jenkins.
    ports:
      - "8080:8080" # Maps port 8080 on the host (EC2 instance) to port 8080 in the container.
  postgres: # Defines a 'postgres' database service
    image: postgres:15 # Uses the official PostgreSQL version 15 image
    ports:
      - "5432:5432" # Maps port 5432 on the host to port 5432 in the container.
    environment:
      - POSTGRES_PASSWORD=my-pwd # Sets the password for the default PostgreSQL user.
                                 # In a real app, this would connect to the java-maven-app.
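
To see the variable substitution in action, you can run the same file by hand on the EC2 instance. An illustrative example; in practice the value would include your registry/repository prefix, exactly as Jenkins tags the image:

# docker-compose reads IMAGE from the environment and substitutes it into the file
export IMAGE=1.1.1-5
docker-compose -f docker-compose.yaml up --detach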

Dockerfile Explained

This file contains instructions to build the Docker image for the Java application.

# Use Amazon Corretto 8 JRE on Alpine Linux as the base image - lightweight and official
FROM amazoncorretto:8-alpine3.17-jre

# Expose port 8080, which the Spring Boot application (typically) runs on
EXPOSE 8080

# Copy the built JAR file from the target directory (created by Maven) into the image
# The JAR name might need adjustment if your pom.xml produces a different name or version.
COPY ./target/java-maven-app-1.1.0-SNAPSHOT.jar /usr/app/
# It's good practice to make this more generic, e.g., COPY ./target/*.jar /usr/app/app.jar
# Then update ENTRYPOINT accordingly. For this example, the specific name is used.

# Set the working directory inside the container
WORKDIR /usr/app

# Command to run when the container starts
# This assumes the JAR file name is exactly 'java-maven-app-1.1.0-SNAPSHOT.jar'.
ENTRYPOINT ["java", "-jar", "java-maven-app-1.1.0-SNAPSHOT.jar"]

Note: The ENTRYPOINT JAR name java-maven-app-1.1.0-SNAPSHOT.jar is hardcoded. If your pom.xml version changes, or if the increment version stage in Jenkins changes it to something like 1.1.1.jar, this Dockerfile or the COPY command would need to be more dynamic or the Jenkinsfile would need to ensure the JAR is renamed to a consistent name before building the image.

A more robust Dockerfile might look like this:

FROM amazoncorretto:8-alpine3.17-jre
# Build argument that can be passed during build (note: '#' only starts a comment
# at the beginning of a line in a Dockerfile, so comments can't sit inline)
ARG JAR_FILE=target/*.jar
EXPOSE 8080
# Copy whatever JAR matches the pattern to a fixed name, app.jar
COPY ${JAR_FILE} /usr/app/app.jar
WORKDIR /usr/app
ENTRYPOINT ["java", "-jar", "app.jar"]

And the buildImage function in the shared library would then pass --build-arg JAR_FILE=target/java-maven-app-${version}-${BUILD_NUMBER}.jar (or similar) to docker build. For simplicity, we're sticking to the provided example.
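
For illustration, such a docker build invocation might look like this (the JAR name and tag are placeholders):

docker build \
    --build-arg JAR_FILE=target/java-maven-app-1.1.1.jar \
    -t java-maven-app:1.1.1-5 .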

server-cmds.sh Explained

This script is executed on the EC2 server to pull the new Docker image and restart the application using docker-compose.

#!/usr/bin/env bash

# Set the IMAGE environment variable to the first argument passed to the script
# This argument ($1) will be the IMAGE_NAME (e.g., 1.1.1-5) from the Jenkins pipeline.
export IMAGE=$1

# (Optional but good practice) Log in to Docker registry if image is private
# docker login -u <username> -p <password_or_token> <your-registry.com>

# (Optional but good practice) Pull the specific image tag to ensure it's the latest
# docker pull ${IMAGE}

# Stop and remove existing containers (if any) defined in docker-compose.yaml,
# then start new containers in detached mode (--detach) using the new image.
# Docker Compose will use the IMAGE environment variable set above for the 'java-maven-app' service.
docker-compose -f docker-compose.yaml up --detach

echo "success"

Setting up the Jenkins Job

  1. In Jenkins, click New Item.

  2. Enter an item name (e.g., java-maven-app-deploy).

  3. Select Pipeline and click OK.

  4. Configuration:

    • Description: (Optional) "Pipeline to build and deploy Java Maven app to EC2."

    • Pipeline section:

      • Definition: Select Pipeline script from SCM.

      • SCM: Select Git.

      • Repository URL: https://gitlab.com/twn-devops-bootcamp/latest/09-aws/java-maven-app.git

      • Credentials: Select your gitlab-credentials.

      • Branch Specifier (blank for 'any'): */jenkins-jobs (to use the jenkins-jobs branch where your Jenkinsfile is located).

      • Script Path: Jenkinsfile (this is the default and usually correct).

  5. Click Save.

Now, you can click Build Now on your Jenkins job. Jenkins will:

  1. Checkout the jenkins-jobs branch.

  2. Load the shared library.

  3. Execute the stages defined in your Jenkinsfile.

  4. If successful, your Java application (and PostgreSQL) will be running in Docker containers on your EC2 instance, accessible via http://YOUR_EC2_PUBLIC_IP:8080.
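
A quick smoke test from any machine (substitute your instance's public IP):

curl http://YOUR_EC2_PUBLIC_IP:8080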


Conclusion and Next Steps

Congratulations! You've covered the fundamentals of AWS, including IAM, EC2, and VPCs, learned how to use the AWS CLI for automation, and walked through a complete CI/CD pipeline deploying a containerized application to EC2 using Jenkins.

Key Takeaways:

  • AWS provides powerful building blocks.

  • IAM is crucial for security – always use the principle of least privilege.

  • The AWS CLI is essential for automation and scripting.

  • Jenkins pipelines, Docker, and docker-compose streamline application deployment.

This is just the beginning. From here, you can explore:

  • More advanced IAM policies and roles.

  • Load Balancers (ELB) to distribute traffic.

  • Auto Scaling Groups to automatically adjust capacity.

  • AWS managed services like RDS (for databases), S3 (for storage), ECR (for Docker images).

  • Infrastructure as Code tools like Terraform or AWS CloudFormation.

  • Serverless deployments with AWS Lambda and API Gateway.

I hope this article helps you understand the basics of AWS and the AWS CLI. Feel free to reach out to me if you have any questions.

Summary

Welcome to our DevOps series on AWS! This article explores Amazon Web Services, the leading cloud platform, focusing on key concepts like EC2 instances, IAM for access management, and VPCs for networking. You'll learn to set up your AWS environment, configure the AWS CLI for automation, and create IAM users and groups. We'll then guide you through deploying a Java application on an EC2 instance using Jenkins, along with Docker and Docker Compose, in a practical CI/CD pipeline. This foundational overview paves the way for more advanced AWS capabilities and DevOps practices.


Written by

Hamza Iqbal

Hi, Hamza Iqbal here. I'm a MERN Stack developer with 3+ years of experience. I'm a tech enthusiast who loves to learn new skills and read tech-related news. Currently, I'm learning DevOps.