Building a Secure and Scalable Django Blog on AWS: The Ultimate Guide

Mustafa Gönen

TL;DR

This project deploys a Django-based blog application on AWS using services such as EC2, RDS, S3, DynamoDB, CloudFront, and Route 53. The end result is a scalable and secure web application where users can upload pictures and videos to their blog pages; the files are stored in an S3 bucket and their object records are kept in a DynamoDB table.

Introduction

In this blog post, we will walk through the steps to deploy a Django-based blog application on the AWS (Amazon Web Services) cloud infrastructure. This project encompasses a wide range of AWS services such as VPC (Virtual Private Cloud), EC2 (Elastic Compute Cloud), RDS (Relational Database Service), S3 (Simple Storage Service), DynamoDB, CloudFront, Certificate Manager, IAM (Identity and Access Management) and Route 53. The end result is a robust and scalable web application.

Project Description

The Blog Page Application deploys a web app using the Django Framework on AWS Cloud Infrastructure. This infrastructure includes an Application Load Balancer with an Auto Scaling Group of EC2 Instances and RDS on a defined VPC. Additionally, CloudFront and Route 53 manage traffic securely via SSL/TLS. Users can upload pictures and videos to their blog pages, which are stored in an S3 Bucket. The object list of the S3 Bucket is recorded in a DynamoDB table.

Project Skeleton

To successfully deploy the TechMust Blog Page Application on AWS infrastructure with the desired architecture, we will structure our project into several key components. This project skeleton will include:

1. Amazon Web Services (AWS)

  • Amazon Virtual Private Cloud (VPC): We will configure a VPC with specific characteristics to isolate our application.

  • Amazon Elastic Compute Cloud (EC2) Instances: These instances will host our Django web application, and we'll utilize Launch Templates to streamline the setup process.

  • Amazon Relational Database Service (RDS): We'll set up an RDS instance to store user registration data using MySQL.

  • Amazon Simple Storage Service (S3): S3 will serve as our storage solution for user-uploaded pictures and videos, and we will define two S3 buckets for regular use and failover.

  • AWS Certificate Manager: We will use this service to create SSL certificates for secure connections, both on Application Load Balancer (ALB) and Amazon CloudFront.

  • Amazon CloudFront: Configured as a cache server, CloudFront will efficiently manage content delivery from ALB.

  • Amazon Route 53: It will be responsible for secure and reliable routing of traffic, allowing us to publish the website and ensuring failover in case of issues.

  • Amazon DynamoDB: This NoSQL database will store an object list of S3 Bucket content, ensuring efficient data retrieval.

  • AWS Lambda: A Python 3.8 Lambda function will be implemented to write objects from S3 to the DynamoDB table.

  • AWS Identity and Access Management (IAM): We'll define IAM roles and policies to grant necessary permissions to EC2 instances, Lambda function, and other resources.

2. Configuration Components

  • Security Groups: We will create and configure security groups for our ALB, EC2 instances, and RDS, ensuring secure traffic flow.

  • NAT Instance or Bastion Host: Depending on your choice, we will set up the necessary components for secure access to private resources.

3. Project GitHub Repository

We will set up a project repository on GitHub to store the application code, infrastructure configurations, and other project-related files.

4. Developer Notes

We will follow the notes provided by the developer team to prepare the Django environment on EC2 instances, deploy the application, and configure the RDS settings.

5. Requirements.txt

This file will include the required Python packages and dependencies for the Django application.

Setup

Step 1: Create a dedicated VPC and all of its components

  • VPC
- Create a VPC 
  .create a VPC named "aws_capstone-VPC" 
  .CIDR block is "90.90.0.0/16" 
  .no IPv6 CIDR block 
  .tenancy: default 

- Select "aws_capstone-VPC",
  click Actions and
  enable DNS hostnames for the "aws_capstone-VPC"
  • Subnets
## Create Subnets 
- Create a public subnet 
  .named "aws_capstone-public-subnet-1A" 
  .under the VPC "aws_capstone-VPC" 
  .in AZ "us-east-1a" with 90.90.10.0/24 

- Create a private subnet 
  .named "aws_capstone-private-subnet-1A" 
  .under the VPC "aws_capstone-VPC" 
  .in AZ "us-east-1a" with 90.90.11.0/24 

- Create a public subnet 
  .named "aws_capstone-public-subnet-1B" 
  .under the VPC "aws_capstone-VPC" 
  .in AZ "us-east-1b" with 90.90.20.0/24 

- Create a private subnet 
  .named "aws_capstone-private-subnet-1B" 
  .under the VPC "aws_capstone-VPC" 
  .in AZ "us-east-1b" with 90.90.21.0/24

## Set auto-assign IP up for public subnets 
   - Select each public subnet, 
   - click "Modify auto-assign IP settings" and 
   - select "Enable auto-assign public IPv4 address"
  • Internet Gateway
- Click the Internet Gateways section on the left-hand side 
  .create an internet gateway named "aws_capstone-IGW" and
  .click Create

- Attach the internet gateway "aws_capstone-IGW" to the newly created VPC "aws_capstone-VPC" 
  .go to the Internet Gateways tab, select the newly created IGW and 
  .click Actions ---> Attach to VPC ---> select "aws_capstone-VPC"
  • Route Table
- Go to Route Tables on the left-hand side 
  .we already have one route table, the main route table 
  .rename it "aws_capstone-public-RT" 

- Create a route table and 
  .name it "aws_capstone-private-RT" 

- Add a rule to "aws_capstone-public-RT" 
  .with destination 0.0.0.0/0 (any network, any host) 
  .targeting the internet gateway "aws_capstone-IGW" 
  .to allow access to the internet 

- Select the private route table, 
  .go to the Subnet Associations subsection and
  .add the private subnets to this route table 
  .similarly, associate the public subnets with the public route table
  • Endpoint
- Go to the Endpoints section on the left-hand menu 

- Click Create Endpoint 

- Service Name : "com.amazonaws.us-east-1.s3" 

- VPC : "aws_capstone-VPC" 

- Route Table : private route table 

- Policy : Full Access 

- Click Create
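If you prefer scripting these console steps, the VPC core can be sketched with boto3. This is a minimal illustration of the names and CIDRs above, not the project's official tooling; route tables and the endpoint are left out for brevity:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC with DNS hostnames enabled
vpc_id = ec2.create_vpc(CidrBlock="90.90.0.0/16")["Vpc"]["VpcId"]
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "aws_capstone-VPC"}])

# Four subnets across two AZs
subnets = {
    "aws_capstone-public-subnet-1A":  ("us-east-1a", "90.90.10.0/24"),
    "aws_capstone-private-subnet-1A": ("us-east-1a", "90.90.11.0/24"),
    "aws_capstone-public-subnet-1B":  ("us-east-1b", "90.90.20.0/24"),
    "aws_capstone-private-subnet-1B": ("us-east-1b", "90.90.21.0/24"),
}
for name, (az, cidr) in subnets.items():
    subnet = ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az, CidrBlock=cidr)
    subnet_id = subnet["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": name}])
    if "public" in name:
        # Auto-assign public IPv4 addresses in the public subnets
        ec2.modify_subnet_attribute(SubnetId=subnet_id, MapPublicIpOnLaunch={"Value": True})

# Internet gateway attached to the VPC
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)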

Step 2: Create Security Groups (ALB, EC2 , RDS, NAT)

1. ALB Security Group
Name            : aws_capstone_ALB_Sec_Group
Description     : ALB Security Group allows HTTP and HTTPS traffic from anywhere 
VPC             : AWS_Capstone_VPC
Inbound Rules
.HTTP(80)    ----> anywhere
.HTTPS (443) ----> anywhere

2. EC2 Security Group
Name            : aws_capstone_EC2_Sec_Group
Description     : EC2 Security Group only allows HTTP and HTTPS traffic coming from the aws_capstone_ALB_Sec_Group security group. In addition, the SSH port is open to anywhere
VPC             : AWS_Capstone_VPC
Inbound Rules
.HTTP(80)    ----> aws_capstone_ALB_Sec_Group
.HTTPS (443) ----> aws_capstone_ALB_Sec_Group
.ssh         ----> anywhere

3. RDS Security Group
Name            : aws_capstone_RDS_Sec_Group
Description     : RDS Security Group only allows traffic coming from the aws_capstone_EC2_Sec_Group security group on the MYSQL/Aurora port. 
VPC             : AWS_Capstone_VPC
Inbound Rules
.MYSQL/Aurora(3306)  ----> aws_capstone_EC2_Sec_Group

4. NAT Instance Security Group
Name            : aws_capstone_NAT_Sec_Group
Description     : NAT Instance Security Group allows HTTP, HTTPS, and SSH traffic from anywhere 
VPC             : AWS_Capstone_VPC
Inbound Rules
.HTTP(80)    ----> anywhere
.HTTPS (443) ----> anywhere
.SSH (22)    ----> anywhere
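As a scripted alternative, here's an illustrative boto3 sketch for the ALB and EC2 groups. Note how the EC2 group references the ALB group's ID instead of a CIDR range; vpc_id is assumed to come from the VPC step above:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

alb_sg = ec2.create_security_group(
    GroupName="aws_capstone_ALB_Sec_Group",
    Description="ALB SG: HTTP/HTTPS from anywhere",
    VpcId=vpc_id,  # from the VPC creation step
)["GroupId"]
ec2.authorize_security_group_ingress(GroupId=alb_sg, IpPermissions=[
    {"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]} for p in (80, 443)
])

ec2_sg = ec2.create_security_group(
    GroupName="aws_capstone_EC2_Sec_Group",
    Description="EC2 SG: HTTP/HTTPS from the ALB SG only, SSH from anywhere",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(GroupId=ec2_sg, IpPermissions=[
    # HTTP/HTTPS restricted to traffic originating from the ALB security group
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "UserIdGroupPairs": [{"GroupId": alb_sg}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "UserIdGroupPairs": [{"GroupId": alb_sg}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
])

The RDS and NAT groups follow the same pattern, with the RDS group allowing port 3306 only from ec2_sg.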

Step 3: Create RDS

  1. First, we create a subnet group for our custom VPC. Click subnet Groups on the left-hand menu and click create DB Subnet Group

     Name               : aws_capstone_RDS_Subnet_Group
     Description        : aws capstone RDS Subnet Group
     VPC                : aws_capstone_VPC
     Add Subnets
     Availability Zones : Select 2 AZ in aws_capstone_VPC
     Subnets            : Select 2 Private Subnets in these subnets
    
  2. Go to the RDS console and click create database button

     Choose a database creation method : Standard Create
     Engine Options  : MySQL
     Version         : 8.0.20
     Templates       : Free Tier
     Settings        : 
         - DB instance identifier : aws-capstone-RDS
         - Master username        : admin
         - Password               : TechMust1234 
     DB Instance Class            : Burstable classes (includes t classes) ---> db.t2.micro
     Storage                      : 20 GB and enable autoscaling(up to 40GB)
     Connectivity:
         VPC                      : aws_capstone_VPC
         Subnet Group             : aws_capstone_RDS_Subnet_Group
         Public Access            : No 
         VPC Security Groups      : Choose existing ---> aws_capstone_RDS_Sec_Group
         Availability Zone        : No preference
         Additional Configuration : Database port ---> 3306
     Database authentication ---> Password authentication
     Additional Configuration:
         - Initial Database Name  : database1
         - Backup ---> Enable automatic backups
         - Backup retention period ---> 7 days
         - Select Backup Window ---> Select 03:00 (am) Duration 1 hour
         - Maintenance window ---> Select window ---> 04:00 (am) Duration: 1 hour
     Click Create database
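The equivalent API calls can be sketched with boto3 as below. The subnet and security group IDs are assumed from the earlier steps, and in a real project the master password would come from a secrets manager rather than being hard-coded:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_subnet_group(
    DBSubnetGroupName="aws_capstone_rds_subnet_group",
    DBSubnetGroupDescription="aws capstone RDS Subnet Group",
    SubnetIds=[private_subnet_1a, private_subnet_1b],  # the two private subnet IDs
)

rds.create_db_instance(
    DBInstanceIdentifier="aws-capstone-rds",
    Engine="mysql",
    EngineVersion="8.0.20",
    DBInstanceClass="db.t2.micro",
    MasterUsername="admin",
    MasterUserPassword="TechMust1234",      # demo value from the console step
    DBName="database1",
    AllocatedStorage=20,
    MaxAllocatedStorage=40,                 # storage autoscaling ceiling
    DBSubnetGroupName="aws_capstone_rds_subnet_group",
    VpcSecurityGroupIds=[rds_sg],           # aws_capstone_RDS_Sec_Group ID
    PubliclyAccessible=False,
    BackupRetentionPeriod=7,
    PreferredBackupWindow="03:00-04:00",
    PreferredMaintenanceWindow="sun:04:00-sun:05:00",
)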
    

Step 4: Create two S3 Buckets and set one of these as a static website

Go to the S3 Console and let's create two buckets.

  1. Blog Website's S3 Bucket

     Bucket Name             : awscapstones3<YOUR NAME>blog
     Region                  : N.Virginia
     Block all public access : Unchecked
    
     # Keep other settings as they are
     # Create bucket
    
  2. S3 Bucket for a failover scenario

     - Click Create Bucket
    
     Bucket Name : www.<YOUR DNS NAME>
     Region      : N.Virginia
     Block all public access : Unchecked
     # Keep other settings as they are
     # Create bucket
    
     - Select the created www.<YOUR DNS NAME> bucket 
       ---> Properties 
       ---> Static website hosting
    
     Static website hosting : Enable
     Hosting Type           : Host a static website
     Index document         : index.html
     # Save changes
    
     - Select the www.<YOUR DNS NAME> bucket 
       ---> select Upload and upload the index.html and sorry.jpg files from the given folder
       ---> Permissions ---> Grant public-read access 
       ---> Check the warning message
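For reference, the failover bucket setup can also be sketched in a few lines of boto3 (run from the folder containing index.html and sorry.jpg; the bucket name is the placeholder from above):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "www.<YOUR DNS NAME>"
s3.create_bucket(Bucket=bucket)  # us-east-1 needs no LocationConstraint

# Equivalent of unchecking "Block all public access"
s3.put_public_access_block(Bucket=bucket, PublicAccessBlockConfiguration={
    "BlockPublicAcls": False, "IgnorePublicAcls": False,
    "BlockPublicPolicy": False, "RestrictPublicBuckets": False,
})

# Enable static website hosting with index.html as the index document
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# Upload the failover page objects with public-read access
for key in ("index.html", "sorry.jpg"):
    s3.upload_file(key, bucket, key, ExtraArgs={"ACL": "public-read"})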
    

Step 5: Copy files downloaded or cloned from alledevops/blog-page-app-django-on-aws repo on Github

Step 6: Prepare your Github repository

Create a private project repository on your GitHub account and clone it locally. Copy all the files and folders downloaded from the alledevops/blog-page-app-django-on-aws repo into this folder. Commit and push them to your private repo on GitHub.

Step 7: Prepare the 'userdata' to be utilized in the Launch Template

#!/bin/bash
# Update the package lists for upgrades and new package installation
apt-get update -y
# Install git to clone the repository
apt-get install git -y
# Install Python 3.8
apt-get install python3.8 -y
# Change directory to the home directory
cd /home/ubuntu/
# Set the access token for private repository
TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# Clone the private repository using the access token
git clone https://$TOKEN@<YOUR PRIVATE REPO URL>
# Change directory to the cloned repository
cd /home/ubuntu/<YOUR PRIVATE REPO NAME>
# Install pip for Python package installation
apt-get install python3-pip -y
# Install Python development files and MySQL client development files
apt-get install python3.8-dev default-libmysqlclient-dev -y
# Install libjpeg development files
apt-get install libjpeg-dev -y
# Install Python packages and dependencies from requirements.txt
pip3 install -r requirements.txt
# Change directory back to the repository
cd /home/ubuntu/<YOUR PRIVATE REPO NAME>
# Collect static files for deployment
python3 manage.py collectstatic --noinput
# Create database migrations based on the models
python3 manage.py makemigrations
# Apply the database migrations
python3 manage.py migrate
# Run the Django application on port 80
python3 manage.py runserver 0.0.0.0:80

Step 8: Configure RDS and S3 in Settings File and Push to GitHub

Write the RDS database endpoint and S3 bucket name into the settings file provided by the developer team, then push your application to your private repo on GitHub. Please follow and apply the instructions below:

  1. Update the AWS_STORAGE_BUCKET_NAME and AWS_S3_REGION_NAME variables

    • Open the "/src/cblog/settings.py" file in your Django project.

    • Add the following lines to the file:

        AWS_STORAGE_BUCKET_NAME = 'awscapstones3<YOUR NAME>blog'
        AWS_S3_REGION_NAME = 'your_region_name'
      
  2. Update the Database Connection Variables

    • Open the "/src/cblog/settings.py" file in your Django project.

    • Add the following lines to the file, replacing the placeholders with your RDS database information:

        NAME = 'database1'
        HOST = 'your_database_endpoint'
        PORT = '3306'
      
  3. Configure the PASSWORD Variable

    • Create a new file named ".env" in the "/src/" directory of your Django project.

    • Add the following line to the ".env" file, replacing 'your_database_password' with your actual RDS database password:

        PASSWORD=your_database_password
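Put together, the additions might look like the sketch below. This assumes the project loads the .env file with python-dotenv; your repo's settings.py may already wire this up differently, so treat it as an illustration rather than the exact file:

# src/cblog/settings.py (illustrative excerpt)
import os
from dotenv import load_dotenv  # assumes python-dotenv is in requirements.txt

load_dotenv()  # reads the .env file (adjust the path if needed)

AWS_STORAGE_BUCKET_NAME = "awscapstones3<YOUR NAME>blog"
AWS_S3_REGION_NAME = "us-east-1"

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "database1",
        "USER": "admin",
        "PASSWORD": os.getenv("PASSWORD"),  # from the .env file
        "HOST": "<your_database_endpoint>",
        "PORT": "3306",
    }
}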
      

After completing these steps, verify that the user data works by launching a test instance in a public subnet.

Step 9: Create NAT Instance in Public Subnet

To launch the NAT instance, go to the EC2 console and click Launch Instance.

  • Follow the instructions in AWS Documentation and create aws_capstone_nat_ami

  • Then, continue with the below configuration of NAT instance.


- Select NAT Instance `aws_capstone_nat_ami` from My AMIs section 
Instance Type : t2.micro
Configure Instance Details  
    - Network : aws_capstone_VPC
    - Subnet  : aws_capstone-public-subnet-1A (Please select one of your Public Subnets)
    - Keep other features as they are
Storage ---> Keep it as is
Tags: 
- Key    :Name     
- Value  :AWS Capstone NAT Instance

Configure Security Group
- Select an existing security group: aws_capstone_NAT_Sec_Group
- Review and select our own pem.key
- Click create

!!!IMPORTANT!!!

  • select the newly created NAT instance and disable (stop) the source/destination check (Actions ---> Networking ---> Change source/destination check)

  • go to the private route table and add a rule

      Destination : 0.0.0.0/0
      Target      : instance ---> Select the NAT Instance
      #Save
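In boto3 terms, the same two changes look roughly like this (the instance and route table IDs are placeholders for the resources created above):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Disable source/destination checking so the instance can forward traffic
ec2.modify_instance_attribute(
    InstanceId=nat_instance_id,            # the NAT instance launched above
    SourceDestCheck={"Value": False},
)

# Default route in the private route table through the NAT instance
ec2.create_route(
    RouteTableId=private_route_table_id,   # aws_capstone-private-RT
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=nat_instance_id,
)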
    

Step 10: Create a Launch Template and IAM role for it

  • Go to the IAM console, click Roles on the left-hand menu, then Create role.
Trusted entity  : EC2 ---> click Next: Permissions
Policy          : AmazonS3FullAccess policy
Tags            : No tags
Role Name       : aws_capstone_EC2_S3_Full_Access
Description     : S3 Full Access Role for EC2
  • To create a Launch Template, go to the EC2 console and select Launch Templates on the left-hand menu. Click the Create launch template button.

      Launch template name                : aws_capstone_launch_template
      Template version description        : Blog Web Page version 1
      Amazon machine image (AMI)          : Ubuntu 18.04
      Instance Type                       : t2.micro
      Key Pair                            : mykey.pem
      Network Platform                    : VPC
      Security Groups                     : aws_capstone_EC2_sec_group
      Storage (Volumes)                   : keep it as is
      Resource tags                       : Key: Name   Value: aws_capstone_web_server
      Advance Details:
          - IAM instance profile          : aws_capstone_EC2_S3_Full_Access
          - Termination protection        : Enable
          - User Data
      #!/bin/bash
      apt-get update -y
      apt-get install git -y
      apt-get install python3.8 -y
      cd /home/ubuntu/
      TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
      git clone https://$TOKEN@<YOUR PRIVATE REPO URL>
      cd /home/ubuntu/<YOUR PRIVATE REPO NAME>
      apt-get install python3-pip -y
      apt-get install python3.8-dev default-libmysqlclient-dev -y
      apt-get install libjpeg-dev -y
      pip3 install -r requirements.txt
      cd /home/ubuntu/<YOUR PRIVATE REPO NAME>
      python3 manage.py collectstatic --noinput
      python3 manage.py makemigrations
      python3 manage.py migrate
      python3 manage.py runserver 0.0.0.0:80
    
  • Click Create
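The template can also be created programmatically; here's a rough boto3 sketch, where the AMI ID, security group ID, and user-data file name are placeholders you'd fill in yourself:

import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

with open("userdata.sh", "rb") as f:           # the script from Step 7
    user_data = base64.b64encode(f.read()).decode()

ec2.create_launch_template(
    LaunchTemplateName="aws_capstone_launch_template",
    VersionDescription="Blog Web Page version 1",
    LaunchTemplateData={
        "ImageId": "<UBUNTU_18_04_AMI_ID>",    # look up the AMI for your region
        "InstanceType": "t2.micro",
        "KeyName": "mykey",
        "SecurityGroupIds": [ec2_sg],          # aws_capstone_EC2_Sec_Group ID
        "IamInstanceProfile": {"Name": "aws_capstone_EC2_S3_Full_Access"},
        "DisableApiTermination": True,         # termination protection
        "UserData": user_data,                 # must be base64-encoded here
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "aws_capstone_web_server"}],
        }],
    },
)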

Step 11: Create an SSL/TLS certificate for secure connections

Go to the Certificate Manager console and click the Request a certificate button.

Select Request a public certificate, then request a certificate 
- Fully qualified domain name: *.<YOUR DNS NAME>  
- DNS validation 
- No tag 
- Review 
- click the Confirm and request button 
# It takes a while for the certificate to become active
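The same request via boto3 is nearly a one-liner; validation still happens through the DNS record you add in Route 53:

import boto3

acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="*.<YOUR DNS NAME>",
    ValidationMethod="DNS",
)
print(response["CertificateArn"])  # use this ARN on the ALB and CloudFront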

Step 12: Create ALB and Target Group

Go to the Load Balancers section on the left-hand side menu of the EC2 console. Click the Create Load Balancer button and select Application Load Balancer.

Step 1 - Basic Configs
Name                    : awscapstoneALB
Schema                  : internet-facing
Availability Zones      : 
    - VPC               : aws_capstone_VPC
    - Availability zones: 
        1. aws_capstone-public-subnet-1A
        2. aws_capstone-public-subnet-1B
Step 2 - Configure Security Settings
Certificate type ---> Choose a certificate from ACM (recommended)
    - Certificate name    : "*.<YOUR DNS NAME>" certificate
    - Security policy     : keep it as is
Step 3 - Configure Security Groups : aws_capstone_ALB_Sec_group
Step 4 - Configure Listeners and Routing
- Create the target group first:
    - Target group        : New target group
    - Name                : awscapstoneTargetGroup
    - Target Type         : Instance
    - Protocol            : HTTP
    - Port                : 80
    - Protocol version    : HTTP1
    - Health Check        :
      - Protocol          : HTTP
      - Path              : /
      - Port              : traffic port
      - Healthy threshold : 5
      - Unhealthy threshold : 2
      - Timeout           : 5
      - Interval          : 30
      - Success Code      : 200
- Listeners:  
    ----> HTTPS: Select "awscapstoneTargetGroup" for HTTPS
    ----> HTTP : Redirect traffic from HTTP to HTTPS:
        - Redirect to HTTPS 443
        - Original host, path, query
        - 301 - permanently moved 
Step 5 - Register Targets
- Without registering any targets, click Next: Review and click Create.
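Scripted, the target group, load balancer, and listeners might look like the following boto3 sketch (subnet, security group, and certificate identifiers are assumed from earlier steps):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

tg = elbv2.create_target_group(
    Name="awscapstoneTargetGroup",
    Protocol="HTTP", Port=80, VpcId=vpc_id,
    TargetType="instance",
    HealthCheckProtocol="HTTP", HealthCheckPath="/",
    HealthyThresholdCount=5, UnhealthyThresholdCount=2,
    HealthCheckTimeoutSeconds=5, HealthCheckIntervalSeconds=30,
)["TargetGroups"][0]["TargetGroupArn"]

alb = elbv2.create_load_balancer(
    Name="awscapstoneALB",
    Scheme="internet-facing",
    Subnets=[public_subnet_1a, public_subnet_1b],
    SecurityGroups=[alb_sg],
)["LoadBalancers"][0]["LoadBalancerArn"]

# HTTPS listener forwarding to the target group
elbv2.create_listener(
    LoadBalancerArn=alb, Protocol="HTTPS", Port=443,
    Certificates=[{"CertificateArn": cert_arn}],  # the ACM certificate
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg}],
)

# HTTP listener issuing a permanent redirect to HTTPS
elbv2.create_listener(
    LoadBalancerArn=alb, Protocol="HTTP", Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
    }],
)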

Step 13: Create Auto Scaling Group with Launch Template

Go to Auto Scaling Groups on the left-hand side menu. Click Create Auto Scaling group.

  • Choose a launch template or configuration
Auto Scaling group name         : aws_capstone_ASG
Launch Template                 : aws_capstone_launch_template
  • Configure settings
Instance purchase options       : Adhere to launch template
Network                         :
    - VPC                       : aws-capstone-VPC
    - Subnets                   : Private 1A and Private 1B
  • Configure advanced options
- Load balancing                                : Attach to "awscapstoneALB" load balancer
- Choose from your load balancer target groups  : awscapstoneTargetGroup
- Health Checks
    - Health Check Type             : ELB
    - Health check grace period     : 300
  • Configure group size and scaling policies
Group size
    - Desired capacity  : 2
    - Minimum capacity  : 2
    - Maximum capacity  : 4
Scaling policies
    - Target tracking scaling policy
        - Scaling policy name       : Target Tracking Policy
        - Metric Type               : Average CPU utilization
        - Target value              : 70
  • Add notifications
Create new notification
    - Notification1
        - Send a notification to    : aws-capstone-SNS (create this SNS topic if it doesn't exist yet)
        - with these recipients     : <your email address>
        - event type                : select all
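A boto3 sketch of the same group and its target tracking policy (the target group ARN and private subnet IDs are assumed from the previous steps):

import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

asg.create_auto_scaling_group(
    AutoScalingGroupName="aws_capstone_ASG",
    LaunchTemplate={"LaunchTemplateName": "aws_capstone_launch_template"},
    MinSize=2, MaxSize=4, DesiredCapacity=2,
    VPCZoneIdentifier=f"{private_subnet_1a},{private_subnet_1b}",
    TargetGroupARNs=[tg],                 # awscapstoneTargetGroup ARN
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)

# Target tracking policy: keep average CPU around 70%
asg.put_scaling_policy(
    AutoScalingGroupName="aws_capstone_ASG",
    PolicyName="Target Tracking Policy",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)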

Step 14: Create CloudFront in front of the ALB

Go to the CloudFront console and click Create Distribution.

  • Origin Settings
Origin Domain Name          : your ALB's DNS name, e.g. aws-capstone-ALB-1947210493.us-east-2.elb.amazonaws.com
Origin Path                 : Leave empty (this means the root '/')
Protocol                    : Match Viewer
HTTP Port                   : 80
HTTPS                       : 443
Minimum Origin SSL Protocol : Keep it as is
Name                        : Keep it as is
Add custom header           : No header
Enable Origin Shield        : No
Additional settings         : Keep it as is
  • Default Cache Behavior Settings
Path pattern                                : Default (*)
Compress objects automatically              : Yes
Viewer Protocol Policy                      : Redirect HTTP to HTTPS
Allowed HTTP Methods                        : GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Cached HTTP Methods                         : Select OPTIONS
Cache key and origin requests
- Use legacy cache settings
  Headers     : Include the following headers
    Add Header
    - Accept
    - Accept-Charset
    - Accept-Datetime
    - Accept-Encoding
    - Accept-Language
    - Authorization
    - CloudFront-Forwarded-Proto
    - Host
    - Origin
    - Referer
Forward Cookies                         : All
Query String Forwarding and Caching     : All
Other stuff                             : Keep them as they are
  • Distribution Settings
Price Class                             : Use all edge locations (best performance)
Alternate Domain Names                  : www.<Your_Domain_Name>
SSL Certificate                         : Custom SSL Certificate (example.com) ---> Select your certificate created before
Other stuff                             : Keep them as they are
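For completeness, here's a trimmed boto3 sketch of the distribution using legacy cache settings. It covers only a subset of the console options above; the ALB domain and certificate ARN are placeholders:

import time
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),   # any unique string
    "Comment": "Blog page distribution in front of the ALB",
    "Enabled": True,
    "Aliases": {"Quantity": 1, "Items": ["www.<Your_Domain_Name>"]},
    "ViewerCertificate": {
        "ACMCertificateArn": cert_arn,     # the certificate from Step 11
        "SSLSupportMethod": "sni-only",
    },
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "capstone-alb-origin",
        "DomainName": "<YOUR ALB DNS NAME>",
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "match-viewer",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "capstone-alb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "Compress": True,
        "AllowedMethods": {
            "Quantity": 7,
            "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
            "CachedMethods": {"Quantity": 3, "Items": ["GET", "HEAD", "OPTIONS"]},
        },
        # Legacy cache settings: forward all cookies and query strings
        "ForwardedValues": {
            "QueryString": True,
            "Cookies": {"Forward": "all"},
            "Headers": {"Quantity": 2, "Items": ["Host", "Authorization"]},
        },
        "MinTTL": 0,
    },
})
print(resp["Distribution"]["DomainName"])   # needed for the Route 53 records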

Step 15: Create Route 53 with Failover settings

Go to the Route 53 console and select Health Checks on the left-hand menu.

  • Click Create Health Check and start to configure

      Name                : aws capstone health check
      What to monitor     : Endpoint
      Specify endpoint by : Domain Name
      Protocol            : HTTP
      Domain Name         : enter the CloudFront distribution's domain name
      Port                : 80
      Path                : leave it blank
      Other stuff         : Keep them as they are
    
  • Click Hosted Zones on the left-hand menu

  • Create a <your_domain_name> hosted zone and click on it

      Name: <your_domain_name>
      Type: Public Hosted Zone
      Tag : No tag
    
  • Click Create Record to create a Failover scenario

      Configure records
    
      Record name             : www.<YOUR DNS NAME>
      Record Type             : A - Routes traffic to an IPv4 address and some AWS resources
      TTL                     : 300
    
      ---> First, we'll create a primary record for CloudFront
    
      Failover record to add to your DNS ---> Define failover record
    
      Value/Route traffic to  : Alias to cloudfront distribution
                                - Select created cloudfront DNS
      Failover record type    : Primary
      Health check            : aws capstone health check
      Record ID               : Cloudfront as Primary Record
      ----------------------------------------------------------------
    
      ---> Second, we'll create a secondary record for S3
    
      Another failover record to add to your DNS ---> Define failover record
    
      Value/Route traffic to  : Alias to S3 website endpoint
                                - Select Region
                                - Your created bucket name appears ---> Select it
      Failover record type    : Secondary
      Health check            : No health check
      Record ID               : S3 Bucket for Secondary record type
    
  • click create records
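The failover pair can also be written with a single change_resource_record_sets call. A sketch under the assumption that everything lives in us-east-1; the CloudFront alias zone ID is a fixed, well-known value, while the S3 website zone ID varies by region:

import boto3

r53 = boto3.client("route53")

# CloudFront distributions always use this fixed alias hosted zone ID
CLOUDFRONT_ZONE_ID = "Z2FDTNDATAQYW2"

r53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,          # your public hosted zone
    ChangeBatch={"Changes": [
        {"Action": "CREATE", "ResourceRecordSet": {
            "Name": "www.<YOUR DNS NAME>",
            "Type": "A",
            "SetIdentifier": "Cloudfront as Primary Record",
            "Failover": "PRIMARY",
            "HealthCheckId": health_check_id,
            "AliasTarget": {
                "HostedZoneId": CLOUDFRONT_ZONE_ID,
                "DNSName": cloudfront_domain,      # dxxxx.cloudfront.net
                "EvaluateTargetHealth": False,
            },
        }},
        {"Action": "CREATE", "ResourceRecordSet": {
            "Name": "www.<YOUR DNS NAME>",
            "Type": "A",
            "SetIdentifier": "S3 Bucket for Secondary record type",
            "Failover": "SECONDARY",
            "AliasTarget": {
                "HostedZoneId": "Z3AQBSTGFYJSTF",  # S3 website zone ID for us-east-1
                "DNSName": "s3-website-us-east-1.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        }},
    ]},
)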

Step 16: Create DynamoDB Table

Go to the DynamoDB console and click the Create Table button.

Name            : awscapstoneDynamo
Primary key     : id
Other Stuff     : Keep them as they are
#click create
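The table is small enough to create with a few lines of boto3; the on-demand billing mode here is my assumption, the console default may differ:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="awscapstoneDynamo",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],  # partition key
    BillingMode="PAY_PER_REQUEST",
)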

Step 17: Create Lambda Function

  • Before we create our Lambda function, we need to create the IAM role it will use. Go to the IAM console, select Roles on the left-hand menu, then click the Create role button
Select Lambda as the trusted entity ---> click Next: Permissions
Choose: - AmazonS3FullAccess
        - NetworkAdministrator
        - AWSLambdaVPCAccessExecutionRole
        - AmazonDynamoDBFullAccess
No tags
Role Name           : aws_capstone_lambda_Role
Role description    : This role gives Lambda permission to reach S3 and DynamoDB in the custom VPC
  • Then, go to the Lambda console and click Create function

Function Name           : awscapstonelambdafunction
Runtime                 : Python 3.8
Select existing IAM role: aws_capstone_lambda_Role

Advanced Settings:
- Enable VPC: 
    - VPC               : aws_capstone-VPC
    - Subnets           : Select all subnets
    - Security Group    : Create "aws_capstone_lambda_sg"
                        ---> allow all traffic (0.0.0.0/0) for inbound
                             and outbound rules
  • Select the awscapstonelambdafunction Lambda function and click Add trigger in the Function overview.

  • For defining a trigger for creating objects

Trigger configuration   : S3
Bucket                  : awscapstones3<YOUR NAME>blog
Event type              : All object create events
Check the warning message and click Add ---> sometimes it reports an overlapping-event error. When that happens, refresh the page and create the trigger again, or remove the existing S3 event and recreate it.
  • For defining a trigger for deleting objects
Trigger configuration   : S3
Bucket                  : awscapstones3<YOUR NAME>blog
Event type              : All object delete events
Check the warning message and click Add ---> the same overlapping-event error can occur here; resolve it the same way as above.
  • Go to the Code tab and select lambda_function.py ---> remove the default code and paste the code below. If you gave your DynamoDB table a different name, make sure to change it in the code.
import boto3

def lambda_handler(event, context):
    if event:
        print("Event: ", event)
        # Pull the object key, event time, and event name from the S3 notification record
        filename = str(event['Records'][0]['s3']['object']['key'])
        timestamp = str(event['Records'][0]['eventTime'])
        # "ObjectCreated:Put" -> "Created", "ObjectRemoved:Delete" -> "Removed"
        event_name = str(event['Records'][0]['eventName']).split(':')[0][6:]

        # Keep only the last path segment of the key as the item id
        filename2 = filename.split('/')[-1]

        dynamo_db = boto3.resource('dynamodb')
        dynamoTable = dynamo_db.Table('awscapstoneDynamo')

        dynamoTable.put_item(Item={
            'id': filename2,
            'timestamp': timestamp,
            'Event': event_name,
        })

    return "Lambda success"
  • Click Deploy and we're all set. Go to the website, add a new post with a photo, then check whether its record was written to DynamoDB.

  • Congratulations!! You have finished your AWS Capstone Project!

Clean Up

It's essential to clean up the AWS resources you've created during your project to avoid incurring additional costs and maintain good resource management practices. Follow these steps to delete the resources you've created:

  1. Delete Auto Scaling Group and EC2 Instances

    • Go to the Amazon EC2 console.

    • Navigate to "Auto Scaling Groups" and select the "aws_capstone_ASG" group.

    • Click on the "Actions" button and select "Delete."

    • Confirm the termination of all associated EC2 instances when prompted.

  2. Delete Load Balancer

    • In the Amazon EC2 console, go to the "Load Balancers" section.

    • Select the "awscapstoneALB" load balancer.

    • Click on "Actions" and choose "Delete."

  3. Delete DynamoDB Table

    • Access the Amazon DynamoDB console.

    • Click on "Tables" on the left-hand menu.

    • Select the "awscapstoneDynamo" table.

    • Choose the "Delete table" option.

  4. Remove RDS Database

    • Navigate to the Amazon RDS console.

    • Click on the RDS instance named "aws-capstone-RDS."

    • In the "Instance actions" menu, select "Delete."

    • Confirm the deletion of the RDS instance and snapshots.

  5. Delete S3 Buckets

    • Access the Amazon S3 console.

    • Choose the "awscapstones3<your_name>blog" and "www.<your_domain>" buckets.

    • Select "Empty" and delete all objects in both buckets.

    • Now, choose the "Delete" option for each bucket.

  6. Delete CloudFront Distribution

    • Go to the Amazon CloudFront console.

    • Select the "awscapstoneALB" distribution.

    • Click on "Distribution Settings" and choose "Delete."

  7. Delete Route 53 Records

    • Visit the Amazon Route 53 console.

    • In your hosted zone, delete the Route 53 records associated with your DNS name and ALB.

  8. Remove Lambda Function

    • Open the AWS Lambda console.

    • Select the "awscapstonelambdafunction."

    • Click on "Delete" and confirm the deletion.

  9. Detach and Delete Internet Gateway

    • In the Amazon VPC console, navigate to "Internet Gateways."

    • Select "aws_capstone-IGW" and choose "Actions" > "Detach from VPC."

    • Once detached, select the "aws_capstone-IGW" again and click "Actions" > "Delete."

  10. Delete NAT Instance

    • Go to the Amazon EC2 console.

    • In the "Instances" section, select the AWS Capstone NAT Instance

    • Click on "Instance State" > "Terminate Instance."

  11. Delete Endpoints

    • Access the VPC console.

    • In the "Endpoints" section, select the created endpoints.

    • Choose "Actions" > "Delete Endpoints."

  12. Delete VPC

    • In the Amazon VPC console, choose "Your VPCs."

    • Select "aws_capstone-VPC" and click "Actions" > "Delete VPC."

    • Confirm the deletion when prompted.

  13. Delete SSL/TLS Certificate

    • In the AWS Certificate Manager console, select the certificate created for SSL/TLS.

    • Click on the certificate, then choose "Actions" > "Delete certificate."

    • Confirm the deletion when prompted.

Conclusion

In conclusion, this capstone project has empowered you with a diverse skill set essential for success in cloud computing and web development. You've learned to construct VPC environments, manage databases, employ web programming skills, and apply serverless computing through AWS Lambda functions. The project has honed your infrastructure configuration abilities and proficiently introduced you to version control using Git and GitHub. With this comprehensive knowledge, you are well-prepared to tackle real-world challenges in cloud technology and web development, setting a strong foundation for your career in the ever-evolving tech industry.
