Building Multi-Environment AWS Infrastructure with Terraform: A Step-by-Step Guide

Shreyas Ladhe

In this guide, we’ll walk through how to set up a multi-environment AWS infrastructure using Terraform, including S3 buckets, DynamoDB tables, and EC2 instances. I’ll break down each step and explain key concepts, making this a comprehensive zero-to-hero guide. If you’re new to Terraform, don’t worry—I’ll introduce each concept as we go along. You can find the complete code for this project in my GitHub repository.

Project Structure

Let’s start with the project structure. Here’s how our Terraform project is organized:

terraform_project/
├── .terraform/
│   ├── modules/
│   │   └── modules.json
│   └── providers/
│       └── registry.terraform.io/
│           └── hashicorp/
│               └── aws/
│                   └── 5.65.0/
│                       └── linux_amd64/
│                           ├── LICENSE.txt
│                           └── terraform-provider-aws_v5.65.0_x5
├── .terraform.lock.hcl
├── aws_modules/
│   ├── s3/
│   │   └── terra_buckets.tf
│   ├── dynamodb/
│   │   └── terra_tables.tf
│   └── ec2/
│       ├── terra_instances.tf
│       └── terra-variables.tf
└── terra-proj-1/
    ├── main.tf
    ├── dynamodb.tf
    ├── ec2.tf
    ├── outputs.tf
    ├── s3.tf
    ├── terraform.tf
    ├── terraform.tfstate
    ├── terraform.tfstate.backup
    ├── variables.tf
    ├── terra-test-key
    └── terra-test-key.pub

This structure may look complex, but it’s designed to keep our project modular and maintainable. Let’s dive into each part of the setup.

Getting Started with Terraform

Terraform is an Infrastructure as Code (IaC) tool that lets you define and manage infrastructure in code form. In our project, we’re using Terraform to set up a multi-environment infrastructure on AWS, which includes S3 buckets, DynamoDB tables, and EC2 instances.

What’s in Each Directory?

  • .terraform/: Stores the necessary provider files (in this case, the AWS provider). Terraform downloads these files automatically, so you don’t need to modify them.

  • aws_modules/: This directory holds our custom modules, one subdirectory per resource type: s3/ for the S3 bucket, dynamodb/ for the DynamoDB table, and ec2/ for the EC2 instance.

  • terra-proj-1/: The main directory for our project’s configuration files. Here, we define variables, main configuration, and output settings.

main.tf: The Core Configuration

The main.tf file is the heart of our Terraform setup. It configures the AWS provider, which allows Terraform to communicate with AWS, and wires together the three modules that define our resources.

# main.tf

provider "aws" {
  region = var.region
}

# Module sources must be directories, not individual .tf files,
# so each module lives in its own subdirectory of aws_modules/
module "s3" {
  source = "../aws_modules/s3"
}

module "dynamodb" {
  source = "../aws_modules/dynamodb"
}

module "ec2" {
  source = "../aws_modules/ec2"
}

In this file, we define three modules: s3, dynamodb, and ec2. Modules are a key concept in Terraform, allowing us to group resources logically and reuse code across multiple environments. Let's go over each component.

Introducing Terraform Modules

Modules are reusable components in Terraform that help us avoid repeating code. By defining resources in modules, we can deploy them in multiple environments (e.g., development, staging, production) with minimal changes.
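To make the multi-environment idea concrete, here's a minimal sketch of calling the same module once per environment. The module names and instance types below are illustrative, and the sketch assumes the EC2 module exposes an instance_type input (see terra-variables.tf later in this guide):

# main.tf (illustrative sketch)

# Development: a small, cheap instance
module "ec2_dev" {
  source        = "../aws_modules/ec2"
  instance_type = "t2.micro"
}

# Production: a larger instance built from the same module code
module "ec2_prod" {
  source        = "../aws_modules/ec2"
  instance_type = "t2.large"
}

Each module block gets its own copy of the resources, so one codebase serves every environment.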

Here’s how our modules are structured in the aws_modules/ directory:

  • s3/terra_buckets.tf: defines the S3 bucket
  • dynamodb/terra_tables.tf: defines the DynamoDB table
  • ec2/terra_instances.tf and ec2/terra-variables.tf: define the EC2 instance and its input variables

Let’s look at each of these in detail.

terra_buckets.tf: Creating S3 Buckets

The terra_buckets.tf file in the aws_modules/s3 folder defines an S3 bucket resource. S3 buckets provide scalable object storage on AWS.

# terra_buckets.tf

# Random suffix so the bucket name is globally unique
# (provided by hashicorp/random, which terraform init installs automatically)
resource "random_id" "bucket_id" {
  byte_length = 4
}

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket-${random_id.bucket_id.hex}"
}

output "bucket_name" {
  value = aws_s3_bucket.example.bucket
}

This code creates an S3 bucket with a unique name; bucket names must be globally unique across all of AWS, which is why we append a random suffix from the random_id resource. The resource block is how you define any infrastructure component, in this case an S3 bucket. The output block lets us display specific values after the infrastructure is deployed.

terra_instances.tf: Provisioning EC2 Instances

EC2 (Elastic Compute Cloud) instances are virtual servers on AWS. In the terra_instances.tf file inside aws_modules/ec2, we define an EC2 instance with specific configurations.

# terra_instances.tf

resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

output "instance_id" {
  value = aws_instance.example.id
}

Here, we use variables like ami_id and instance_type to make the configuration flexible. Because this file lives inside a module, those variables are declared in the module’s own terra-variables.tf, allowing us to change values without modifying this code directly.
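Here's a minimal sketch of that terra-variables.tf; the default AMI ID is a placeholder, so substitute one that exists in your region:

# terra-variables.tf (sketch)

variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
  default     = "ami-0c55b159cbfafe1f0" # placeholder, replace with an AMI valid in your region
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}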

terra_tables.tf: Setting Up DynamoDB Tables

DynamoDB is a managed NoSQL database service provided by AWS. In terra_tables.tf, inside aws_modules/dynamodb, we define a DynamoDB table.

# terra_tables.tf

resource "aws_dynamodb_table" "example" {
  name           = "example-table"
  hash_key       = "id"
  billing_mode   = "PAY_PER_REQUEST"

  attribute {
    name = "id"
    type = "S"
  }
}

output "table_name" {
  value = aws_dynamodb_table.example.name
}

This code sets up a DynamoDB table with on-demand (PAY_PER_REQUEST) billing, so you pay only for the reads and writes you actually use. The attribute block defines the table’s hash key, the primary key that uniquely identifies each item.
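If a table ever needs a composite primary key, the same pattern extends with a sort key. A hedged sketch (the table name and the created_at attribute are illustrative, not part of this project):

resource "aws_dynamodb_table" "example_composite" {
  name         = "example-table-composite"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  range_key    = "created_at" # sort key, illustrative

  attribute {
    name = "id"
    type = "S"
  }

  attribute {
    name = "created_at"
    type = "S"
  }
}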

Defining Variables in variables.tf

Variables allow us to parameterize our code, making it more flexible. By defining variables in a separate file, we can change values easily without touching the main configuration files.

# variables.tf

variable "region" {
  description = "The AWS region to deploy resources in"
  type        = string
  default     = "us-east-1"
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

Here, we define variables like region and instance_type to be used throughout the project. Terraform substitutes these values wherever the variables are referenced, falling back to the defaults when no other value is supplied.
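To see that flexibility in action, you can override the defaults per environment with a .tfvars file. A minimal sketch, assuming a file named dev.tfvars (the filename and values are illustrative):

# dev.tfvars (illustrative)

region        = "us-west-2"
instance_type = "t3.micro"

Running terraform apply -var-file="dev.tfvars" then deploys the same code with these environment-specific values.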

Configuring terraform.tf: Providers and Versions

The terraform.tf file defines the provider and required versions, ensuring compatibility with AWS resources.

# terraform.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.65"
    }
  }
  required_version = ">= 1.0.0"
}

This file is essential for managing dependencies. The required_providers block tells Terraform which provider to use (in this case, AWS); the ~> 5.65 constraint allows any 5.x release at or above 5.65 but blocks 6.0. The required_version setting ensures we're running a compatible version of the Terraform CLI itself.
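Since the S3 module uses a random_id resource, you can optionally pin the random provider here as well. Terraform resolves hashicorp/random automatically, so this is just a sketch for reproducibility (the ~> 3.6 constraint is an assumption):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.65"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.6"
    }
  }
  required_version = ">= 1.0.0"
}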

Outputting Resource Details with outputs.tf

The outputs.tf file specifies the information we want Terraform to display after deployment.

# outputs.tf

output "s3_bucket_name" {
  value = module.s3.bucket_name
}

output "dynamodb_table_name" {
  value = module.dynamodb.table_name
}

output "ec2_instance_id" {
  value = module.ec2.instance_id
}

Here, we output the name of our S3 bucket, the name of our DynamoDB table, and the EC2 instance ID. Outputs are especially helpful for debugging and for referencing resource information in other parts of the infrastructure; you can re-print them at any time with terraform output.
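To illustrate that last point, here's a sketch of reading these outputs from a separate Terraform configuration through the terraform_remote_state data source (the relative state path is an assumption based on this project's layout):

data "terraform_remote_state" "proj1" {
  backend = "local"

  config = {
    path = "../terra-proj-1/terraform.tfstate"
  }
}

# Reference an output from this project, for example:
# data.terraform_remote_state.proj1.outputs.s3_bucket_name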

Deploying the Infrastructure

To deploy this setup, navigate to the terraform_project/terra-proj-1 directory and follow these steps:

  1. Initialize Terraform: This command downloads the necessary provider plugins.

     terraform init
    
  2. Plan the Deployment: The plan command shows you what Terraform will create without actually provisioning anything.

     terraform plan
    
  3. Apply the Configuration: This command provisions the infrastructure on AWS. Terraform shows the plan once more and asks for confirmation; type yes to proceed.

     terraform apply
    
  4. Check Outputs: After a successful deployment, you’ll see the output values defined in outputs.tf. When you’re done experimenting, run terraform destroy to tear everything down so you aren’t billed for idle resources.

Wrapping Up

In this guide, we explored how to set up a multi-environment AWS infrastructure using Terraform. We covered essential concepts like providers, resources, modules, and variables, and we looked at how to organize a Terraform project for clarity and reusability.

This setup is a powerful way to manage infrastructure as code, making it easy to deploy and manage environments in AWS. Terraform’s modular approach means you can easily scale this architecture as your requirements grow. Feel free to try this out, tweak configurations, and add new resources to explore more of what Terraform and AWS can offer.

Hopefully you enjoyed reading through this, and you now have something new to learn and implement to make your life easier. For more content like this, follow this blog, and consider connecting with me on LinkedIn. Want to know more about me? Follow me on Instagram!
