Terraform Project: Create a Multi-Environment Infrastructure

Harshit Sahu

This blog is your one-stop solution for Terraform for DevOps engineers.

For Source Code: https://github.com/harshitsahu2311/Terraform-project

In this blog, we are going to build a multi-environment architecture on AWS using Terraform. In a multi-environment architecture, we create dedicated resources for the development, staging, and production teams.

Firstly, to get started with this project, you should have Terraform installed on your system; if not, follow the guide below.

On Debian/Ubuntu-based Linux, you can install it with:

wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

Now, to connect Terraform with AWS, you should have the AWS CLI installed on your system.

For Linux:

sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

After installing the AWS CLI, configure it with your AWS access key ID and secret access key:

aws configure
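The command prompts for your credentials, default region, and output format; a typical session looks like this (with placeholder values):

$ aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json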

Now let’s start with the project.

For this project, we are going to create a Terraform module that acts as a template for the infrastructure, and then provision it through Terraform.

Modules are the main way to package and reuse resource configurations with Terraform. Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
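By the end of this walkthrough, the project layout should look roughly like this:

.
├── main.tf
├── provider.tf
├── terraform.tf
├── backend.tf
└── my_app_infra_module/
    ├── bucket.tf
    ├── dynamo.tf
    ├── instance.tf
    └── variables.tf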

Create a directory in your system

mkdir my_app_infra_module

Go inside the directory

cd my_app_infra_module

Write the code for provisioning an AWS S3 bucket:

# bucket.tf
resource "aws_s3_bucket" "my_app_bucket" {
    bucket = "${var.my_env}-hars-app-bucket"
    tags = {
        Name = "${var.my_env}-hars-app-bucket"
    }
}

Write the code for provisioning a DynamoDB table:

# dynamo.tf
resource "aws_dynamodb_table" "my_app_table" {
    name = "${var.my_env}-hars-app-table"
    billing_mode = "PAY_PER_REQUEST"
    hash_key = "userID"
    attribute {
        name = "userID"
        type = "S"
    }
    tags = {
        Name = "${var.my_env}-hars-app-table"
    }
}

Write the code for provisioning an EC2 instance:

# instance.tf
resource "aws_instance" "my_app_server" {
    count = var.instance_count
    ami = var.ami
    instance_type = var.instance_type
    tags = {
        Name = "${var.my_env}-hars-app-server"
    }
}

Declare all the module's input variables in a variables.tf file:

# variables.tf
variable "my_env" {
    description = "The environment for the infrastructure"
    type = string
}

variable "instance_type" {
    description = "Type of the EC2 instance"
    type = string
}

variable "ami" {
    description = "AMI ID for the EC2 instance"
    type = string
}

variable "instance_count" {
    description = "Number of EC2 instances"
    type = number
}
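Optionally, you can also add an outputs.tf to the module so each environment reports what it created; this file is not part of the original project, just a handy addition:

# outputs.tf (optional)
output "bucket_name" {
    description = "Name of the environment's S3 bucket"
    value = aws_s3_bucket.my_app_bucket.bucket
}

output "instance_ids" {
    description = "IDs of the environment's EC2 instances"
    value = aws_instance.my_app_server[*].id
}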

Move back to the project root:

cd ..

Create a file that pins the AWS provider and configures a remote S3 backend, with a DynamoDB table for state locking:

# terraform.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.66.1"
    }
  }
  backend "s3" {
    bucket = "harshitstate2311"
    key = "terraform.tfstate"
    region = "us-east-1"
    dynamodb_table = "dynamo34table"
  }
}

Configure the provider in the provider.tf file:

# provider.tf
provider "aws" {
    region = var.aws_region
}

Create variables and resources for the backend as well. Two caveats: a backend block cannot reference variables, which is why the bucket and table names in terraform.tf are hardcoded (make sure they match the resources you actually create), and the state bucket and lock table must already exist before terraform init can configure the backend, so in practice you create them first and then migrate the state.

# backend.tf
# Backend Variables
variable "state_bucket_name" {
    default = "hars-app-bucket"
}

variable "state_table_name" {
    default = "hars-state-table"
}

variable "aws_region" {
    default = "us-east-2"
}


# backend resources
resource "aws_dynamodb_table" "my_state_table" {
    name = var.state_table_name
    billing_mode = "PAY_PER_REQUEST"
    hash_key = "LockID"
    attribute {
        name = "LockID"
        type = "S"
    }
    tags = {
        Name = var.state_table_name
    }
}

resource "aws_s3_bucket" "my_state_bucket" {
    bucket = var.state_bucket_name
    tags = {
        Name = var.state_bucket_name
    }
}

Create main.tf, which instantiates the module once per environment:

# main.tf

# dev 
module "dev-app" {
    source = "./my_app_infra_module"
    my_env = "dev"
    instance_type = "t2.micro"
    ami = "ami-0dee22c13ea7a9a67" 
    instance_count = 1
}

#prd
module "prd-app" {
    source = "./my_app_infra_module"
    my_env = "prd"
    instance_type = "t2.medium"
    ami = "ami-0dee22c13ea7a9a67" 
    instance_count = 3
}

#stg
module "stg-app" {
    source = "./my_app_infra_module"
    my_env = "stg"
    instance_type = "t2.small"
    ami = "ami-0dee22c13ea7a9a67" 
    instance_count = 2
}
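As a side note, the three near-identical module blocks could also be collapsed with for_each. This sketch is equivalent to the configuration above (though the resource addresses differ, so don't switch styles on already-applied state without terraform state mv):

# main.tf (alternative sketch using for_each)
locals {
    environments = {
        dev = { instance_type = "t2.micro", instance_count = 1 }
        stg = { instance_type = "t2.small", instance_count = 2 }
        prd = { instance_type = "t2.medium", instance_count = 3 }
    }
}

module "app" {
    source = "./my_app_infra_module"
    for_each = local.environments
    my_env = each.key
    instance_type = each.value.instance_type
    instance_count = each.value.instance_count
    ami = "ami-0dee22c13ea7a9a67"
}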

Initialize the working directory; this downloads the AWS provider and sets up the S3 backend:

terraform init

Validate the configuration:

terraform validate

Preview the execution plan:

terraform plan

Apply the changes (--auto-approve skips the confirmation prompt):

terraform apply --auto-approve

After successfully completing the project, don't forget to destroy the infrastructure to avoid unnecessary AWS charges:

terraform destroy --auto-approve
