Infrastructure Automation with Terraform Project
Table of contents
- Step 1: Create terraform.tf file
- Step 2: Create provider.tf file
- Step 3: Create a backend.tf file for the remote backend
- Step 4: Create a main.tf file
- Step 5: Create a my_app_infra_module folder in the same directory
- Step 6: Create my_server.tf in the same folder
- Step 7: Create my_table.tf in the same folder
- Step 8: Create variable.tf in the same folder
- Step 9: After all this, run these commands
- Conclusion
Hello everyone! I have built a Terraform project in which the terraform.tfstate file is stored in a remote backend. I also defined environment-specific configurations, allowing infrastructure to be provisioned easily across multiple environments such as development, staging, and production; this streamlines the deployment process and keeps the environments consistent. Specifically, I created modules for AWS automation, covering the creation of S3 buckets, DynamoDB tables, and EC2 instances. By encapsulating these resources into reusable modules, I achieved a more structured and scalable approach to managing infrastructure.
The full code is available in my repository.
Step 1: Create terraform.tf file
Add the following code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.67.0"
    }
  }

  backend "s3" {
    bucket         = "terraform-gurucharanshettigar-state-s3-bucket"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-gurucharanshettigar-state-dynamoDB-table"
  }
}
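Note: the S3 bucket and DynamoDB table named in the backend block must already exist before terraform init can use them, so the very first run has a chicken-and-egg problem. A common workaround (a suggestion; the original repository may have been bootstrapped differently) is to comment out the backend "s3" block, apply once with local state to create the bucket and lock table, and then re-enable the block and migrate the state:

terraform init                  # first run, backend block commented out (local state)
terraform apply                 # creates the state bucket and lock table
terraform init -migrate-state   # after uncommenting the backend block (recent Terraform versions)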
Step 2: Create provider.tf file
Add the following code:
provider "aws" {
region = var.region
}
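The region variable used here is declared in backend.tf in the next step. Since it has a default, you can also override it at the command line; for example (a usage sketch):

terraform plan -var="region=us-west-2"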
Step 3: Create a backend.tf file for the remote backend
Add the following code:
# backend variables
variable "region" {
  default = "us-east-1"
}

variable "s3_state_bucket_name" {
  default = "terraform-gurucharanshettigar-state-s3-bucket"
}

variable "dynamoDB_state_table_name" {
  default = "terraform-gurucharanshettigar-state-dynamoDB-table"
}
# backend resources
resource "aws_dynamodb_table" "backend_state_dynamodb_table" {
  name         = var.dynamoDB_state_table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = var.dynamoDB_state_table_name
  }
}

resource "aws_s3_bucket" "backend_state_s3_bucket" {
  bucket = var.s3_state_bucket_name

  tags = {
    Name = var.s3_state_bucket_name
  }
}
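Optionally, the state bucket can be hardened. Versioning, for example, lets you recover an earlier terraform.tfstate if the current one is ever corrupted. The following is a suggested addition, not part of the original project:

# Optional: enable versioning on the state bucket (suggested addition)
resource "aws_s3_bucket_versioning" "backend_state_versioning" {
  bucket = aws_s3_bucket.backend_state_s3_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}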
Step 4: Create a main.tf file
Add the following code:
# dev
module "dev" {
  source        = "./my_app_infra_module"
  my_env        = "dev"
  instance_type = "t2.micro"
  ami           = "ami-007855ac798b5175e"
}

# prd
module "prd" {
  source        = "./my_app_infra_module"
  my_env        = "prd"
  instance_type = "t2.micro"
  ami           = "ami-007855ac798b5175e"
}

# stg
module "stg" {
  source        = "./my_app_infra_module"
  my_env        = "stg"
  instance_type = "t2.micro"
  ami           = "ami-007855ac798b5175e"
}
Here you can see we have defined three environments: "dev", "stg", and "prd". Each one instantiates the same module with its own values.
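Because the three module blocks differ only in my_env, they could also be collapsed into a single block with for_each (Terraform 0.13+). This is an alternative sketch, not the approach used in the repository:

# Alternative: one module block iterated over the environments
locals {
  environments = ["dev", "stg", "prd"]
}

module "app" {
  for_each      = toset(local.environments)
  source        = "./my_app_infra_module"
  my_env        = each.key
  instance_type = "t2.micro"
  ami           = "ami-007855ac798b5175e"
}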
Step 5: Create a my_app_infra_module folder in the same directory
Create a my_bucket.tf file in this folder and add the following code. Note that S3 bucket names must be globally unique, so the ${var.my_env} prefix keeps the three environments' buckets distinct:
resource "aws_s3_bucket" "my_app_s3_bucket" {
bucket = "${var.my_env}-gurucharan-s3-bucket"
tags = {
Name = "${var.my_env}-gurucharan-s3-bucket"
}
}
Step 6: Create my_server.tf in the same folder
Add the following code:
resource "aws_instance" "my_app_instance" {
count = 2
ami = var.ami
instance_type = var.instance_type
tags = {
Name = "${var.my_env}-instance"
}
}
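With count = 2, both instances in an environment get the same Name tag. If you want each instance to be distinguishable in the console, one small tweak (a suggestion, not in the original code) is to include the index:

tags = {
  Name = "${var.my_env}-instance-${count.index}"   # e.g. dev-instance-0, dev-instance-1
}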
Step 7: Create my_table.tf in the same folder
Add the following code:
resource "aws_dynamodb_table" "basic-dynamodb-table" {
name = "${var.my_env}-gurucharan-dynamoDB-table"
billing_mode = "PAY_PER_REQUEST"
hash_key = "UserID"
attribute {
name = "UserID"
type = "S"
}
tags = {
Name = "${var.my_env}-gurucharan-dynamoDB-table"
}
}
Step 8: Create variable.tf in the same folder
Add the following code:
variable "my_env" {
description = "This value of environment"
type = string
}
variable "instance_type" {
description = "This is value of instance type"
type = string
}
variable "ami" {
description = "This is value of ami"
type = string
}
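The module does not expose any outputs. If the root module needs attributes such as the bucket name or instance IDs, you could add an outputs.tf in the same folder. The following is a suggested sketch, not part of the original project:

# outputs.tf (suggested addition)
output "bucket_name" {
  value = aws_s3_bucket.my_app_s3_bucket.bucket
}

output "instance_ids" {
  value = aws_instance.my_app_instance[*].id
}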
Step 9: After all this, run these commands
Before running these commands, install the AWS CLI on your system and configure your credentials (aws configure).
terraform init
terraform plan
terraform apply
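Because all three environments live in the same root module, terraform apply provisions dev, stg, and prd together. To work on a single environment, you can target its module; for example (a usage sketch):

terraform apply -target=module.dev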
Here you can see we have successfully created all the buckets, tables, and EC2 instances, and the terraform.tfstate file is stored in the remote backend.
[Screenshots: the S3 buckets, DynamoDB tables, and EC2 instances created by Terraform, as shown in the AWS console]
Conclusion
This Terraform project showcased my ability to optimize infrastructure management and promote automation through best practices. By implementing a remote backend for the terraform.tfstate file, I ensured consistent state management and facilitated collaboration among team members. The environment-specific configurations allowed for streamlined deployment across different environments.
Overall, this project highlighted my grasp of Terraform and its capabilities for managing cloud infrastructure. By combining remote state management, environment configurations, and modularization, I delivered an efficient and scalable solution for infrastructure deployment and management. I am confident that the skills gained from this project will prove valuable in future work involving infrastructure automation and cloud management.
Thank you for reading this blog. If you have any queries, or if anything here needs correcting, please let me know.
You can reach me on LinkedIn, Twitter, or by email at gurucharanu716@gmail.com.