Cloud Resume Challenge: Part 2

Toyyib Oliyide

The goal of this blog post is to describe how to use Terraform, an infrastructure-as-code tool, to provision the infrastructure built in the Cloud Resume Challenge Part 1, which was simply to deploy a static website from an S3 bucket, using CloudFront as the CDN (Content Delivery Network) and Route 53 to connect it to a custom domain name.

Terraform is an open-source infrastructure-as-code tool developed by HashiCorp. It uses the HashiCorp Configuration Language (HCL) to safely and predictably provision, change, and improve infrastructure. Better still, it is cloud-agnostic, meaning it can manage multiple providers and handle cross-cloud dependencies. It lets you define cloud resources in human-readable configuration files that can be versioned, reused, and shared.

Requirements

  1. AWS account - A free-tier account will suffice. Follow this link to create one

  2. Terraform installed on a Linux machine

  3. Set up AWS CLI (Command Line Interface)

  4. A domain name with an SSL certificate - This can easily be obtained on AWS: registering a domain name costs about 13 dollars, and the SSL certificate is free through AWS Certificate Manager (ACM). Follow this to get one

  5. A basic resume page with HTML and a bit of CSS design, nothing too serious.

  6. Create two folders; the first will contain the HTML and CSS files that are going to be uploaded to the S3 bucket to serve as the static content

  7. The second folder will contain the Terraform configuration files. In it, create three files: main.tf, which will contain all the code to provision the entire infrastructure; vars.tf, which will contain the variables; and s3-policy.json, which will contain the bucket policy used to enable access to the objects in the bucket
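With the two folders in place, the project layout looks roughly like this (the folder names frontend and terraform, and the file style.css, are assumptions for illustration; use whatever names you prefer):

```
cloudresume/
├── frontend/           # static content uploaded to the S3 bucket
│   ├── resume.html
│   └── style.css
└── terraform/          # Terraform configuration files
    ├── main.tf
    ├── vars.tf
    └── s3-policy.json
```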

Provisioning the infrastructure

Terraform uses resource blocks to provision the infrastructure needed. Each resource block describes one or more infrastructure objects, such as an S3 bucket, a compute instance, or even a DNS record.

First, we define the cloud provider to use, its version, and the region we will be running the infrastructure in. In our case, it is AWS provider version 4.64.0, the latest at the time of writing, and the region is set to us-east-1 (note that ACM certificates used with CloudFront must be issued in us-east-1).

#Provider block
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.64.0"
    }
  }
}
#Set the region
provider "aws" {
  region = "us-east-1"
}

We also need to set up the variables that will be used in the main Terraform configuration file. Below, the domain_name variable is set to resume.example.com with type string.

variable "domain_name" {
  default = "resume.example.com"
  type = string
  description = "Domain name"
}

Creating and configuring the S3 bucket

Then we create the S3 bucket using the aws_s3_bucket resource block, specifying a bucket name that matches our domain name. The next resource block, aws_s3_object, uploads the content of the frontend folder into the bucket. The for_each line iterates over the contents of the folder, the source argument provides the absolute path to each file, the content_type argument sets the MIME type each object is served with, and the etag argument triggers a re-upload whenever a file is edited.

#Create the s3 bucket
resource "aws_s3_bucket" "resume" {
  bucket = "resume.example.com"
}

#Upload the files into the bucket
resource "aws_s3_object" "resume" {
  bucket   = aws_s3_bucket.resume.id
  for_each = fileset("/vagrant/cloudresume/frontend/", "*")
  key      = each.value
  source   = "/vagrant/cloudresume/frontend/${each.value}"
  # Derive the MIME type from the file extension so CSS is not served as text/html
  content_type = lookup(
    { "html" = "text/html", "css" = "text/css" },
    reverse(split(".", each.value))[0],
    "text/html"
  )
  etag = filemd5("/vagrant/cloudresume/frontend/${each.value}")
}
#Disable public access block
resource "aws_s3_bucket_public_access_block" "resume" {
  bucket = aws_s3_bucket.resume.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false 
  restrict_public_buckets = false 
}

resource "aws_s3_bucket_policy" "resume" {
  bucket = aws_s3_bucket.resume.id
  policy = file("s3-policy.json")

}
resource "aws_s3_bucket_website_configuration" "resume" {
  bucket = aws_s3_bucket.resume.id

  index_document {
    suffix = "resume.html"
  }

  error_document {
    key = "error.html"
  }

}

Public access is enabled for the created bucket using the aws_s3_bucket_public_access_block resource; all four arguments are set to false so that public access to the bucket is allowed.

To enable static website hosting, the aws_s3_bucket_website_configuration resource is used, where we specify the index document and error document as arguments.

Finally, we copy the bucket policy below into the s3-policy.json file created earlier; this enables public read access to the individual objects in the bucket. We then attach the policy to the bucket using the aws_s3_bucket_policy resource block, specifying the bucket ID and the policy file as arguments. With all this, our bucket is fully set up and configured to host the static website.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::resume.example.com/*"
        }
    ]
}
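As an aside, the same policy can be generated inline with Terraform's jsonencode function, which avoids hard-coding the bucket name in a separate JSON file. This is an alternative sketch, not part of the original setup; the resource name resume_inline is hypothetical:

```hcl
#Alternative: build the policy inline instead of reading s3-policy.json
resource "aws_s3_bucket_policy" "resume_inline" {
  bucket = aws_s3_bucket.resume.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      # Reference the bucket ARN so the policy tracks the bucket name
      Resource  = "${aws_s3_bucket.resume.arn}/*"
    }]
  })
}
```

Use one approach or the other, not both, since a bucket can have only one policy attached.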

Setting up the CloudFront distribution

Amazon CloudFront is a web service used to serve static and dynamic content such as .html and .css files, videos, images, and so on with low latency to users around the world. The content is delivered through a worldwide network of data centers called edge locations.

Before we set it up, we need two pieces of information from our AWS account: the ARN of the ACM certificate we got earlier and the hosted zone ID of our domain name. We can get both with Terraform data blocks. To look up the certificate, we provide arguments such as the domain name and the certificate type, for instance whether it is Amazon-issued or not. Lastly, an S3 origin ID is defined in a locals block so we can easily reuse it wherever we need it.

#Get the acm certificate issued to our domain name
data "aws_acm_certificate" "issued" {
  domain = "resume.example.com"
  types       = ["AMAZON_ISSUED"]
  most_recent = true
}

#Get the route53 id 
data "aws_route53_zone" "selected" {
  name         = "example.com"
}
#Get the s3 origin id
locals {
  s3_origin_id = "myS3Origin"
}

We can now go ahead and set up the CloudFront distribution connected to the S3 bucket.

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name    = aws_s3_bucket.resume.bucket_regional_domain_name
    origin_id      = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Resume site distribution"
  default_root_object = "resume.html"

  # Optional: access logging
  # logging_config {
  #   include_cookies = false
  #   bucket          = "mylogs.s3.amazonaws.com"
  #   prefix          = "myprefix"
  # }

  aliases = [var.domain_name]

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    # Use the ACM certificate for the custom domain; the default CloudFront
    # certificate cannot be combined with it when aliases are set
    acm_certificate_arn      = data.aws_acm_certificate.issued.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }
}

Pointing it to a custom domain name using Route53

Route 53 is AWS's DNS service. It can be used to register domain names, create hosted zones, and create different types of records, among other functions. It is highly available, scalable, and intelligent.

To provision the custom domain name, we will need to create an alias record in Route53 and point it to our CloudFront distribution. An alias record allows us to route traffic to selected AWS resources, one of which is a CloudFront distribution.

The aws_route53_record resource block creates the alias record. The arguments provided are the record name, which is our domain name, and the hosted zone ID; within the alias block we supply the CloudFront distribution's domain name and hosted zone ID.

#Create a record in route 53
resource "aws_route53_record" "site-domain" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = var.domain_name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.s3_distribution.domain_name
    zone_id                = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
    evaluate_target_health = true
  }
}
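Optionally, output blocks can print the distribution's domain name and the site URL once the infrastructure is applied. This is a minimal sketch; the output names are my own choice:

```hcl
#Print useful values after apply
output "cloudfront_domain_name" {
  description = "CloudFront distribution domain name"
  value       = aws_cloudfront_distribution.s3_distribution.domain_name
}

output "site_url" {
  description = "Custom domain URL for the resume site"
  value       = "https://${var.domain_name}"
}
```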

The whole configuration for the infrastructure is now fully set up, so we can run the Terraform commands. First, run terraform init to initialize the working directory; this downloads all the required providers and modules. Then run terraform validate to check the configuration for errors, and terraform plan to get a rundown of all the changes that will be made, describing the infrastructure that will be created, modified, or destroyed. Finally, run terraform apply to set up the infrastructure. This might take a few minutes, but in the end the entire infrastructure should be up and the domain name accessible over the internet.
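The workflow above boils down to the following command sequence, run from the folder containing the .tf files:

```shell
terraform init      # initialize the working directory, download providers/modules
terraform validate  # check the configuration for errors
terraform plan      # preview what will be created, modified or destroyed
terraform apply     # provision the infrastructure (prompts for confirmation)
```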

The beauty of Terraform is that we can destroy the entire infrastructure just as easily as we provisioned it, with the simple command terraform destroy.
