Terraform

Table of contents
- What is IaC (Infrastructure as Code)?
- Introduction to Terraform
- What is a resource?
- HashiCorp Configuration Language (HCL)
- Workflows
- Terraform Providers and the terraform init Command
- Three Tiers of Terraform Providers
- Configuration Directory
- Multiple providers
- Input variables
- Variable Blocks
- Using variables in Terraform (Multiple ways)
- Resource Attributes Reference
- Resource Dependencies
- Output Variables
- Terraform State (terraform.tfstate)
- Purpose of Terraform State
- Terraform State Considerations
- Terraform Commands
- Mutable vs Immutable Infrastructure in IaC
- Example
- What Does Terraform Use?
- Terraform Lifecycle Rules
- Three Primary Lifecycle Meta-Arguments
- Examples
- Terraform Data Sources
- Resource vs Data Source: Key Differences
- What is count in Terraform?
- 1. Using count with a fixed number
- 2. Using count with a variable
- 3. Using count = length(var.list)
- Terraform for_each Meta-Argument
- Use Case 1: Using for_each with a set (directly from variables.tf)
- Use Case 2: Using for_each with a list, but convert it to a set
- Summary: for_each vs count
- Terraform Version Constraints
- Different Version Constraints
- Getting Started with AWS
- AWS with Terraform
- AWS IAM (Identity and Access Management)
- Programmatic Access
- Installing AWS CLI
- Creating IAM Users on AWS with Terraform
- Creating IAM Policies with Terraform
- Getting Started with AWS S3 (Simple Storage Service)
- Creating and Managing AWS S3 Buckets with Terraform
- Introduction to Amazon DynamoDB
- DynamoDB with Terraform
- Terraform Remote State & Best Practices
- Remote State with S3 Backend (Best Practice)
- Terraform State Management
- Introduction to AWS EC2 (Elastic Compute Cloud)
- Provisioning AWS EC2 Web Server with Terraform
- Terraform Provisioners
- Example: AWS EC2 with remote-exec
- Example: AWS EC2 with local-exec
- Best Practices for Provisioners
- Terraform Provisioner Behavior
- Terraform Provisioners: Key Considerations
- Final Notes
- Terraform taint Command
- Terraform Provisioner Failure Example
- Terraform Debugging
- Terraform Import
- Terraform Modules
- Section 1: Using a Local Module
- Section 2: Complex Modules with Reuse (Payroll App)
- Section 3: Using Modules from Terraform Registry
- Benefits of Modules
- Terraform Functions & Conditional Logic: Explained with Examples

What is IaC (Infrastructure as Code)?
Infrastructure as Code (IaC) is the practice of managing and provisioning IT infrastructure using code, instead of manual processes or interactive configuration tools.
With IaC, you define your infrastructure (servers, databases, networks, etc.) in machine-readable configuration files. This allows you to:
- Automate setup and configuration
- Repeat the same deployment consistently
- Version control infrastructure using Git
- Reduce errors caused by manual steps
- Scale easily across environments (dev, staging, prod)

Example Tools that Use IaC:
- Terraform (multi-platform, cloud-agnostic) - a provisioning tool
- AWS CloudFormation (for AWS only)
- Ansible, Chef, Puppet (focus on configuration management)
| Category | Purpose | Tools | Description |
|---|---|---|---|
| Provisioning Tools | Create and manage infrastructure resources (VMs, networks, etc.) | Terraform, AWS CloudFormation, Pulumi | Define and provision infrastructure using declarative or imperative code |
| Configuration Management | Install software and configure systems after provisioning | Ansible, Chef, Puppet, SaltStack | Ensure systems are configured consistently (e.g., install NGINX, apply security settings) |
| Orchestration Tools | Coordinate multiple tasks across machines or systems | Kubernetes, Docker Swarm | Manage containerized applications and automate deployments |
| Image Building Tools | Create machine or container images with pre-installed configurations | Packer, Dockerfile | Build reusable machine images (e.g., AMIs, Docker images) |
| Secret Management Tools | Securely manage and access sensitive information | Vault, AWS Secrets Manager, Azure Key Vault | Manage credentials, API keys, and secrets for secure access |
| Policy as Code Tools | Define and enforce security and compliance policies | OPA (Open Policy Agent), Sentinel | Ensure infrastructure follows organizational rules and compliance standards |
Introduction to Terraform
Terraform is a powerful, open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to provision, manage, and destroy infrastructure across a wide range of environments, whether on public clouds like AWS, Azure, and GCP, or private/on-premises platforms such as vSphere.
Terraform uses a declarative language called HCL (HashiCorp Configuration Language), where you define the desired state of your infrastructure in simple configuration files (with the .tf extension). Terraform then automatically figures out the steps needed to reach that state from the current one, handling all the provisioning logic for you.
It works in three key phases:
1. terraform init: initializes the project and configures providers (API connectors for each platform).
2. terraform plan: generates a detailed execution plan showing what changes will be made.
3. terraform apply: applies the necessary changes to reach the desired infrastructure state.
Terraform is resource-based, meaning it manages everything (VMs, databases, networks, etc.) as individual resources, taking care of their entire lifecycle, from creation and configuration to decommissioning.
What is a resource?
A resource is an object that Terraform manages. It could be a file on the local host, a virtual machine, or a cloud service such as an S3 bucket or an IAM user.
HashiCorp Configuration Language (HCL)
Syntax:
<block> <params> {
key1 = value1
key2 = value2
}
Examples:
- Provisioning an AWS EC2 instance
- Creating an AWS S3 bucket
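As a minimal sketch of those two examples (the AMI ID and bucket name below are illustrative placeholders, not values from the original notes):

```hcl
# Provisioning an AWS EC2 instance (AMI ID is a placeholder)
resource "aws_instance" "webserver" {
  ami           = "ami-0edab43b6fa892279"
  instance_type = "t2.micro"
}

# Creating an AWS S3 bucket (bucket name is a placeholder)
resource "aws_s3_bucket" "data" {
  bucket = "my-example-bucket"
}
```

In both cases the block type is resource, followed by the resource type and a local name, then the arguments for that resource.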
Workflows
1. Write the configuration file
2. Run the terraform init command
3. Run terraform plan to review the execution plan
4. Run terraform apply to apply the changes
5. Run terraform show to display the resources that have been created

Update and destroy resources in Terraform
Run terraform destroy to delete the infrastructure completely.
Use the local_sensitive_file resource type when we don't want the content printed in the output of the terraform plan and terraform apply commands:
resource "local_sensitive_file" "games" {
filename = "/root/favorite-games"
content = "FIFA 21"
}
Terraform Providers and the terraform init Command
When you run the terraform init command inside a directory that contains your Terraform configuration files, Terraform performs several important tasks.

What terraform init does:
| Task | Description |
|---|---|
| Downloads Providers | Identifies the providers used in the configuration and downloads the required plugins for them. |
| Installs Plugins | Installs these provider plugins locally in a .terraform directory. |
| Prepares Backend | Initializes the backend configuration for storing state, if defined. |
Plugin-Based Architecture
Terraform is built with a plugin-based architecture, meaning it can integrate with hundreds of infrastructure platforms by using external plugins (providers). This makes it highly modular and extensible.

Terraform Registry
All major Terraform providers are:
- Published and maintained by HashiCorp, partners, or the community
- Available at: https://registry.terraform.io
Three Tiers of Terraform Providers
| Tier | Description | Examples | Maintainer | Support Level |
|---|---|---|---|---|
| Tier 1 | Official providers maintained and supported directly by HashiCorp. | AWS, Azure, Google Cloud, Kubernetes | HashiCorp | High (regular updates, docs, support) |
| Tier 2 | Providers maintained by partners or vendors, with some HashiCorp involvement. | Datadog, Cloudflare, VMware vSphere | Partner/Vendor + HashiCorp | Moderate (semi-official support) |
| Tier 3 | Community or third-party maintained providers, often for niche platforms. | GitHub Actions, Netlify, UptimeRobot | Community | Low (may be outdated, limited support) |

When we run terraform init, it shows the version of each provider plugin being installed.
Configuration Directory
We can create as many configuration files as we want in a single configuration directory.
The common practice is to have one single configuration file with all the resource blocks required to provision the infrastructure. A single configuration file can contain as many configuration blocks as we need.
The common naming convention for such a configuration file is main.tf.
Other configuration files that can be created within the directory are variables.tf, outputs.tf, and provider.tf.
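Following that convention, a typical configuration directory might look like this (file names per the convention above; contents are up to you):

```
terraform-project/
├── main.tf       # resource blocks
├── variables.tf  # input variable declarations
├── outputs.tf    # output variable definitions
└── provider.tf   # provider configuration
```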
Multiple providers
Terraform supports the use of multiple providers within the same configuration. To illustrate this, let's use another provider called random. This provider allows us to create random resources, such as a random ID, a random integer, or a random password. Let us create a resource called random_pet. This resource type generates a random pet name when applied.
resource "local_file" "pet" {
filename = "/root/pets.txt"
content = "We love pets!"
}
resource "random_pet" "my-pet" {
prefix = "Mrs"
separator = "."
length = "1"
}
Here, for the random_pet resource we have used three arguments:
| Field | Description |
|---|---|
| prefix | Adds a custom prefix to the name (e.g., Mrs). |
| separator | The character that separates the prefix from the generated name (here, "."). |
| length | Number of words (pet name components) to generate (1 = single word). |

What terraform init will do:
| Provider | Status |
|---|---|
| local | Already installed earlier; will be reused |
| random | Not yet installed; Terraform will download it |

Terminal Output Example (Expected):
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Finding latest version of hashicorp/random...
- Installing hashicorp/random v3.5.1...
- Installed hashicorp/random v3.5.1 (signed by HashiCorp)
Terraform has been successfully initialized!
Input variables
resource "local_file" "pet" {
filename = "/root/pets.txt"
content = "We love pets!"
}
resource "random_pet" "my-pet" {
prefix = "Mrs"
separator = "."
length = "1"
}
Configuration files written this way are hard-coded and not reusable, which defeats the purpose of IaC. To improve this, we can move values into a separate variables file such as variables.tf, from which our configuration file will read the defined variables.
Variable Blocks
A variable block in Terraform accepts three parameters:
- default (optional)
- type (optional)
- description (optional)

The type argument is optional, but when used, it enforces the type of the variable. If it is not specified in the variable block, it defaults to any.
Besides string, number, and bool, Terraform also supports:
- list: an ordered sequence of values; can contain duplicates
- map: a collection of key/value pairs
- set: similar to a list, but cannot contain duplicate values
- object: a complex data structure combining attributes of different types
- tuple: similar to a list, but its elements can be of different types
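A sketch of a variables.tf using some of these types (all names and values are illustrative):

```hcl
variable "prefix" {
  type    = list(string)
  default = ["Mr", "Mrs", "Sir"]   # lists may contain duplicates
}

variable "file-content" {
  type = map(string)
  default = {
    "statement1" = "We love pets!"
    "statement2" = "We love animals!"
  }
}

variable "bella" {
  type = object({
    name         = string
    age          = number
    food         = list(string)
    favorite_pet = bool
  })
  default = {
    name         = "bella"
    age          = 7
    food         = ["fish", "chicken"]
    favorite_pet = true
  }
}
```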
Using variables in Terraform (Multiple ways)
- When the default parameter in a variable block is empty, Terraform prompts the user to enter each variable in interactive mode.
- Command-line flags: -var "variable_name=value"
- Environment variables: TF_VAR_<name_of_the_declared_variable>="value"
- Variable definition files (useful when there are lots of variables), e.g. terraform.tfvars or a custom *.tfvars file
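The same variable supplied each way (the variable name filename and its value are illustrative):

```sh
# 1. Interactive mode: with no default set, terraform apply prompts for the value

# 2. Command-line flag
terraform apply -var "filename=/root/pets.txt"

# 3. Environment variable
export TF_VAR_filename="/root/pets.txt"
terraform apply

# 4. Variable definition file: terraform.tfvars is loaded automatically;
#    a custom file must be passed explicitly
terraform apply -var-file variables.tfvars
```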
Variable Definition Precedence
When the same variable is supplied in multiple ways, Terraform picks the value according to a fixed precedence order (lowest to highest): environment variables, the terraform.tfvars file, *.auto.tfvars files (in alphabetical order), and finally -var or -var-file command-line flags.
Resource Attributes Reference
We can link two resources together by making use of resource attributes. Initially the resources are independent; after referencing an attribute of one resource from another (a reference expression), they become linked.
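As a sketch (reusing the earlier random_pet example), a reference expression takes the form resource_type.resource_name.attribute:

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  # Reference expression: the id attribute exported by random_pet
  content  = "My favorite pet is ${random_pet.my-pet.id}"
}

resource "random_pet" "my-pet" {
  prefix    = "Mrs"
  separator = "."
  length    = "1"
}
```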
Resource Dependencies
When one resource references an attribute of another, Terraform creates an implicit dependency: the referenced resource is created first and, during deletion, destroyed last. We do not declare the order ourselves; Terraform works it out for us.
When there is no reference expression but an ordering is still required, we can use depends_on to create an explicit dependency. Here, we explicitly state that the local_file resource depends on the random_pet resource, so random_pet will be created first and destroyed last.
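A sketch of an explicit dependency: no reference expression links the two resources, so we declare the ordering ourselves with depends_on:

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"

  # Explicit dependency: random_pet.my-pet is created before this file
  depends_on = [
    random_pet.my-pet
  ]
}

resource "random_pet" "my-pet" {
  prefix    = "Mrs"
  separator = "."
  length    = "1"
}
```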
Output Variables
In an output block, the mandatory argument is value, which is usually a reference expression. terraform apply shows the outputs in the terminal. Once the resources have been created, we can run the terraform output command to print the values of all the output variables defined in the current configuration directory. We can also pass a name to the output command to print the value of a specific output variable.
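A sketch of an output block (the output name pet-name is illustrative):

```hcl
output "pet-name" {
  value       = random_pet.my-pet.id
  description = "The pet name generated by the random_pet resource"
}
```

After terraform apply, running terraform output pet-name prints just this value.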
Terraform State (terraform.tfstate)
What is Terraform State?
Terraform uses a state file (terraform.tfstate) to track the current state of the infrastructure it manages. The content of this file is in JSON format.
Terraform uses the state file to map the resource configuration to the real-world infrastructure.

Why is it important?
- Keeps a record of all deployed resources
- Helps Terraform determine what needs to be added, changed, or destroyed
- Enables incremental changes without affecting existing infrastructure

Key Points:
| Feature | Description |
|---|---|
| terraform.tfstate | Created automatically after executing terraform apply at least once |
| Location | Stored locally by default, but can be stored remotely (e.g., S3, GCS) |
| Syncing | Must always be in sync with actual infrastructure state |
| Sensitive Info | May contain secrets (use remote encrypted storage and restrict access) |
Purpose of Terraform State
Terraform needs to keep track of what it has created so it can manage your infrastructure correctly. That's why it uses a state file called terraform.tfstate.

Why is Terraform State Important?
1. Records what exists: the state file stores details about all resources Terraform manages, like IDs, IP addresses, and names. This helps Terraform remember what it deployed.
2. Detects changes: when you run terraform plan, Terraform compares the current state (in terraform.tfstate) with your configuration code to figure out what has changed.
3. Prevents re-creation: without state, Terraform wouldn't know what resources already exist and would try to recreate everything each time.
4. Supports collaboration: when using remote state (e.g., AWS S3), teams can safely share and work on infrastructure without conflicts.
5. Stores metadata: it keeps track of dependencies and links between resources to apply changes in the correct order.
Terraform State Considerations
Managing Terraform state properly is crucial for safe and predictable infrastructure changes. Here are key considerations:
1. The State File Is Critical
- The terraform.tfstate file contains the full record of Terraform-managed infrastructure.
- If it's lost or corrupted, Terraform cannot track or manage your resources properly.
2. Sensitive Information
- State files may contain sensitive data like passwords, API keys, or IP addresses.
- Always encrypt state files and restrict access (especially in teams).
3. Use Remote State for Teams
For team environments, use remote backends (e.g., AWS S3, Azure Blob, GCS) to:
- Centralize the state file
- Prevent accidental overwrites
- Enable locking (e.g., DynamoDB for AWS)
4. Do Not Manually Edit
- Avoid editing the state file directly unless absolutely necessary (and with backups).
- Instead, use commands like terraform state mv, terraform state rm, or terraform import.
5. Use State Locking
- Prevents multiple users or pipelines from modifying the same state file at once.
- Most remote backends (e.g., S3 + DynamoDB) support state locking.
6. Store in Version Control?
- Do NOT commit terraform.tfstate or .terraform/ directories to version control.
- You can track .tfstate.backup or version-controlled *.tf files, but never the actual state file itself in Git.
7. State Can Be Split
- For large projects, you can split infrastructure into multiple state files (using workspaces or modules) to improve manageability.
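The state subcommands mentioned in point 4, sketched with hypothetical resource addresses:

```sh
# List all resources tracked in the state
terraform state list

# Show the attributes of one resource
terraform state show aws_s3_bucket.example

# Rename a resource in state without destroying/recreating it
terraform state mv aws_s3_bucket.example aws_s3_bucket.logs

# Remove a resource from state (the real resource is left untouched)
terraform state rm aws_s3_bucket.logs
```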
Terraform Commands
After making the configuration file:
terraform validate
checks that the syntax used in the configuration files is correct, and reports any errors with hints on how to fix them.
terraform fmt
scans the configuration files in current working directory and formats the code into a canonical format.
terraform show
prints out the current state of the infrastructure as seen by terraform.
terraform show -json
prints the content in json format.
terraform providers
lists all the providers used in the configuration directory.
terraform providers mirror /root/terraform/new_local_file
to copy provider plugins needed for the current configuration to another directory.
terraform output
to print all the output in the configuration directory.
terraform output <pet-name>
output of a specific variable.
terraform apply -refresh-only
used to sync Terraform with the real-world infrastructure. For example, if any changes were made outside Terraform's control to a resource it created, such as a manual update, this command will pick them up and update the state file. This reconciliation is useful to determine what action to take during the next apply. The command will not modify any infrastructure resource, but it will modify the state file.
terraform graph
used to create a visual representation of the dependencies in a Terraform configuration or an execution plan. The graph is generated in a format called DOT.
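The DOT output can be rendered into an image with Graphviz (assuming the dot tool is installed):

```sh
terraform graph | dot -Tsvg > graph.svg
```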
Mutable vs Immutable Infrastructure in IaC
| Aspect | Mutable Infrastructure | Immutable Infrastructure |
|---|---|---|
| Definition | You update the existing infrastructure in place | You replace existing infrastructure with new ones |
| Change Method | Modify (upgrade, patch) existing servers/resources | Destroy and recreate with updated configuration |
| Common Example | SSH into a server and run updates | Replace an old AMI with a new one during deployment |
| State Behavior | Infrastructure state is changed in-place | Infrastructure is discarded and rebuilt |
| Risks | Drift, inconsistency, configuration rot | Fewer: more reliable, predictable, with a clean state |
| Use Case | Quick patches or dev environments | Production deployments, container-based apps |
Example
Mutable:
You update an EC2 instance in place:
resource "aws_instance" "web" {
ami = "ami-123"
instance_type = "t2.micro"
user_data = "apt update && apt install nginx"
}
Later, you just change the user_data script and run terraform apply. The instance stays the same, but its config changes: mutable behavior.
Immutable:
You change the AMI to a new, pre-configured one:
resource "aws_instance" "web" {
ami = "ami-456" # new AMI with pre-installed nginx
instance_type = "t2.micro"
}
Terraform will destroy the old instance and create a new one: immutable behavior.

What Does Terraform Use?
Terraform supports both, but it naturally leans toward immutable infrastructure.
Why? Because:
Resources are declared declaratively
Changing a property often leads to recreating the resource (e.g., changing AMI, volume size)
This ensures a clean, predictable state
Terraform Lifecycle Rules
In Terraform, the lifecycle block inside a resource lets you customize how Terraform manages resource creation, update, and deletion. This helps handle complex infrastructure scenarios more safely and predictably.
Purpose:
To control the behavior of Terraform when a resource is:
- Re-created
- Changed
- Deleted

Three Primary Lifecycle Meta-Arguments
| Meta-Argument | Description |
|---|---|
| create_before_destroy | Ensures a new resource is created before destroying the old one. Useful to avoid downtime. |
| prevent_destroy | Prevents a resource from being accidentally destroyed. Terraform will throw an error if a destroy is attempted. |
| ignore_changes | Tells Terraform to ignore specific attributes even if they change outside Terraform (e.g., manually or by automation). |
Examples
1. create_before_destroy
resource "aws_instance" "example" {
ami = "ami-123"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
}
Ensures a new EC2 instance is created before destroying the old one; useful for zero-downtime deployments.
2. prevent_destroy
resource "aws_s3_bucket" "important" {
bucket = "my-critical-logs"
lifecycle {
prevent_destroy = true
}
}
Prevents accidental deletion of important S3 buckets; Terraform will error out if a destroy is attempted.
3. ignore_changes
resource "aws_instance" "web" {
ami = "ami-abc"
instance_type = "t2.micro"
lifecycle {
ignore_changes = [ami]
}
}
If someone changes the AMI manually in the cloud, Terraform won't try to revert it during future applies.
Terraform Data Sources
What is a Data Source?
A data source in Terraform allows you to fetch information from external sources or existing infrastructure without creating or modifying them.
Think of data sources as read-only lookups.

Why Use Data Sources?
- To reference existing infrastructure (e.g., an existing AWS AMI, VPC, or S3 bucket).
- To fetch dynamic values that are managed outside of Terraform.
- To use outputs from one module in another without duplication.

Example:
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
}
Here, we use a data source to get the latest Ubuntu AMI, and then use that AMI to launch an EC2 instance.

Resource vs Data Source: Key Differences
| Aspect | Resource | Data Source |
|---|---|---|
| Purpose | Creates, updates, or deletes infrastructure | Reads existing infrastructure or external data |
| Behavior | Manages lifecycle of infrastructure | Read-only access |
| Example | resource "aws_instance" | data "aws_ami" |
| State File | Tracked in Terraform state | Referenced but not created or modified |
| Use Case | Deploying EC2, S3, VMs, databases, etc. | Fetching AMI IDs, VPC info, secrets, etc. |
| Can be destroyed? | Yes | No; only used for reading |
What is count in Terraform?
The count meta-argument in Terraform allows you to create multiple instances of a resource using a single configuration block.
It's like a loop that helps with scaling resources easily.

1. Using count with a fixed number
main.tf
resource "local_file" "notes" {
count = 3
filename = "file_${count.index}.txt"
content = "This is file number ${count.index}"
}
This creates 3 files:
file_0.txt
file_1.txt
file_2.txt
2. Using count with a variable
We'll now control the number of resources using an input variable.
File: variables.tf
variable "file_count" {
description = "How many files to create"
type = number
default = 2
}
File: main.tf
resource "local_file" "notes" {
count = var.file_count
filename = "note_${count.index}.txt"
content = "This is note ${count.index}"
}
When you change the file_count value (e.g., via terraform.tfvars or the CLI), it changes how many files get created.

3. Using count = length(var.list)
When you have a list of values, you can dynamically control the count using length().
File: variables.tf
variable "file_names" {
description = "List of filenames"
type = list(string)
default = ["math", "science", "history"]
}
File: main.tf
resource "local_file" "subjects" {
count = length(var.file_names)
filename = "${var.file_names[count.index]}.txt"
content = "This file is about ${var.file_names[count.index]}"
}
Terraform creates:
math.txt
science.txt
history.txt
Terraform for_each Meta-Argument
What is for_each?
for_each is a Terraform meta-argument used to create multiple instances of a resource or module by looping over a set or map.
Unlike count, for_each gives you more control and clarity, especially with named items.

Use Case 1: Using for_each with a set (directly from variables.tf)
variables.tf
variable "file_names" {
description = "Set of file names"
type = set(string)
default = ["alpha", "beta", "gamma"]
}
main.tf
resource "local_file" "my_files" {
for_each = var.file_names
filename = "${each.key}.txt"
content = "This is file named ${each.key}"
}
Result:
Terraform will create:
alpha.txt
beta.txt
gamma.txt
Use Case 2: Using for_each with a list, but convert it to a set
for_each cannot be used directly with a list; it must be a set or a map. So we convert the list to a set using toset().
variables.tf
variable "topics_list" {
description = "List of topics"
type = list(string)
default = ["devops", "cloud", "terraform"]
}
main.tf
resource "local_file" "topic_files" {
for_each = toset(var.topics_list)
filename = "${each.key}.md"
content = "This topic is about ${each.key}"
}
Result:
Terraform will create:
devops.md
cloud.md
terraform.md
Summary: for_each vs count
| Feature | for_each | count |
|---|---|---|
| Input Types | set or map | number or expression |
| Resource Access | each.key, each.value (if map) | count.index |
| Uniqueness | Named instances | Indexed instances |
| Flexibility | Better for identifying unique items | Less flexible with maps/sets |
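A sketch of why this matters (file names are illustrative): with count, instances are identified by index, so removing an element from the middle of a list shifts the indexes of everything after it and Terraform destroys and recreates those resources. With for_each, instances are keyed by value, so only the removed item is affected.

```hcl
variable "filenames" {
  type    = list(string)
  default = ["pets.txt", "dogs.txt", "cats.txt"]
}

# count: instances are local_file.files[0], [1], [2].
# Removing "pets.txt" shifts the others to new indexes, so they get recreated.
resource "local_file" "files" {
  count    = length(var.filenames)
  filename = var.filenames[count.index]
  content  = "placeholder"
}

# for_each: instances are keyed as local_file.named["pets.txt"], etc.
# Removing "pets.txt" destroys only that one instance.
resource "local_file" "named" {
  for_each = toset(var.filenames)
  filename = each.value
  content  = "placeholder"
}
```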
Terraform Version Constraints
What Are Version Constraints?
Version constraints in Terraform allow you to control which version of a provider (like hashicorp/local) Terraform installs. This avoids breaking changes from new major versions and ensures your infrastructure stays stable.

Without Version Constraint
main.tf
resource "local_file" "pet" {
filename = "/root/pet.txt"
content = "We love pets!"
}
When you run:
$ terraform init
You'll see:
The following providers do not have any version constraints in configuration, so the latest version was installed.
To prevent automatic upgrades to new major versions, we recommend adding version constraints.
Best Practice: Always define a version constraint to ensure consistent behavior.
Adding Version Constraints
main.tf
terraform {
required_providers {
local = {
source = "hashicorp/local"
version = "1.4.0"
}
}
}
resource "local_file" "pet" {
filename = "/root/pet.txt"
content = "We love pets!"
}
Output after terraform init:
Installing hashicorp/local v1.4.0...
Terraform has been successfully initialized!
Different Version Constraints
| Constraint Type | Example | Meaning |
|---|---|---|
| Exact version | "1.4.0" | Only install version 1.4.0 |
| Greater than | "> 1.1.0" | Any version newer than 1.1.0 |
| Less than | "< 1.4.0" | Any version older than 1.4.0 |
| Not equal to | "!= 2.0.0" | Exclude version 2.0.0 |
| Range with exclusion | "> 1.2.0, < 2.0.0, != 1.4.0" | Between 1.2.0 and 2.0.0, except 1.4.0 |
| Pessimistic (~>) version | "~> 1.2" | Compatible minor/patch versions, e.g., >= 1.2.0, < 2.0.0 |
| Pessimistic with patch | "~> 1.2.0" | Patch updates only, e.g., >= 1.2.0, < 1.3.0 |

Example: Using ~> (Pessimistic Constraint)
main.tf
terraform {
required_providers {
local = {
source = "hashicorp/local"
version = "~> 1.2.0"
}
}
}
resource "local_file" "pet" {
filename = "/root/pet.txt"
content = "We love pets!"
}
Output after terraform init:
Installing hashicorp/local v1.2.2...
Terraform has been successfully initialized!
Note: the installed version is within the allowed patch range (>= 1.2.0 and < 1.3.0).
Getting Started with AWS
Why AWS?
AWS (Amazon Web Services) is a leader in cloud infrastructure services, recognized by Gartner for over 10 years.
Source: AWS Named a Cloud Leader - Gartner MQ

AWS Core Services:
| Category | Examples |
|---|---|
| Compute | EC2 (Elastic Compute Cloud) |
| Storage | S3, EBS |
| Databases | DynamoDB, RDS |
| Analytics | Athena, Redshift |
| Machine Learning | SageMaker |
| IoT | IoT Core |
| Networking | VPC, Route 53 |

AWS Global Infrastructure (Popular Regions):
- US: Ohio, Oregon, N. California, GovCloud (West/East)
- Europe: London, Frankfurt, Ireland, Paris, Milan
- Asia Pacific: Mumbai, Tokyo, Hong Kong, Sydney
- Others: São Paulo, Beijing, Canada (Central)
AWS with Terraform
Terraform allows Infrastructure as Code (IaC), letting you automate AWS resources like:
| AWS Service | Terraform Resource Example |
|---|---|
| EC2 | aws_instance |
| S3 | aws_s3_bucket |
| DynamoDB | aws_dynamodb_table |
| VPC | aws_vpc |
| Route 53 | aws_route53_zone |
| EBS | aws_ebs_volume |
AWS IAM (Identity and Access Management)
Concepts:
- Root User: the default AWS admin (avoid daily use)
- IAM Users: individual accounts for people/applications
- IAM Groups: group permissions (e.g., a Developer group)
- Policies: JSON-based permissions (e.g., AdministratorAccess, AmazonEC2FullAccess)
IAM Policy Example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
Policy Name Examples:
AdministratorAccess
Billing
AmazonS3FullAccess
AmazonEC2FullAccess
Programmatic Access
How to Configure AWS CLI:
$ aws configure
AWS Access Key ID: <YOUR_ACCESS_KEY>
AWS Secret Access Key: <YOUR_SECRET_KEY>
Default region name: us-west-2
Default output format: json
Config files:
~/.aws/config
~/.aws/credentials
Useful CLI Commands:
aws iam create-user --user-name lucy
aws s3api create-bucket --bucket my-bucket --region us-east-1
aws ec2 describe-instances
CLI Reference: https://docs.aws.amazon.com/cli/latest/reference
Installing AWS CLI
Linux/Mac:
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version
Windows:
Download from: https://awscli.amazonaws.com/AWSCLIV2.msi
Creating IAM Users on AWS with Terraform
Introduction
IAM (Identity and Access Management) in AWS allows you to manage access to AWS services and resources securely. With Terraform, you can automate IAM user creation, assign policies, and tag users consistently across environments.
Terraform Block Structure
resource "aws_iam_user" "admin-user" {
name = "lucy"
tags = {
Description = "Technical Team Leader"
}
}
| Block | Description |
|---|---|
| aws_iam_user | Terraform resource type |
| admin-user | Local name within your configuration |
| name | IAM user name in AWS (e.g., lucy) |
| tags | Optional metadata for the user |
Full main.tf Example with Provider
provider "aws" {
region = "us-west-2"
}
resource "aws_iam_user" "admin-user" {
name = "lucy"
tags = {
Description = "Technical Team Leader"
}
}
Providing AWS Credentials (3 Methods)
1. Inline in the provider block (not recommended for production):
provider "aws" {
region = "us-west-2"
access_key = "AKIAI44QH8DHBEXAMPLE"
secret_key = "je7MtGbClwBF/2tk/h3yCo8nvbEXAMPLEKEY"
}
2. Using environment variables (recommended):
$ export AWS_ACCESS_KEY_ID=AKIAI44QH8DHBEXAMPLE
$ export AWS_SECRET_ACCESS_KEY=je7MtGbClwBF/2tk/h3yCo8nvbEXAMPLEKEY
$ export AWS_REGION=us-west-2
3. Using ~/.aws/credentials and ~/.aws/config:
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2tk/h3yCo8nvbEXAMPLEKEY
# ~/.aws/config
[default]
region = us-west-2
output = json
Terraform Commands
1. Initialize the configuration: terraform init
2. Review the execution plan: terraform plan
3. Apply the configuration: terraform apply

Example Output After apply
aws_iam_user.admin-user: Creating...
aws_iam_user.admin-user: Creation complete after 1s [id=lucy]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Creating IAM Policies with Terraform
Objective
In this guide, you'll learn how to:
- Create an IAM user (lucy)
- Define a custom IAM policy (AdminUsers)
- Attach the policy to the user using aws_iam_user_policy_attachment
- Use heredoc syntax (<<EOF) to define an inline JSON policy

Complete main.tf Example
provider "aws" {
region = "us-west-2"
}
# 1. Create an IAM User
resource "aws_iam_user" "admin-user" {
name = "lucy"
tags = {
Description = "Technical Team Leader"
}
}
# 2. Define an IAM Policy using Heredoc Syntax
resource "aws_iam_policy" "adminUser" {
name = "AdminUsers"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
EOF
}
# 3. Attach the Policy to the User
resource "aws_iam_user_policy_attachment" "lucy-admin-access" {
user = aws_iam_user.admin-user.name
policy_arn = aws_iam_policy.adminUser.arn
}
Policy Explained
This policy grants Administrator Access by:
- Allowing all actions ("Action": "*")
- On all resources ("Resource": "*")
This is equivalent to the AdministratorAccess managed policy.
π§ Terraform Commands
- Initialize Terraform
terraform init
- Preview the Plan
terraform plan
- Apply the Configuration
terraform apply
β Sample Output (Simplified)
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
aws_iam_user.admin-user: Created [id=lucy]
aws_iam_policy.adminUser: Created [id=arn:aws:iam::123456789012:policy/AdminUsers]
aws_iam_user_policy_attachment.lucy-admin-access: Created [id=lucy-xyz123]
π Option: External Policy File
If you have a policy JSON file such as admin-policy.json:
Contents:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
Updated Terraform:
resource "aws_iam_policy" "adminUser" {
name = "AdminUsers"
policy = file("admin-policy.json")
}
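Besides a heredoc or an external file, the policy document can also be built with Terraform's built-in jsonencode() function, which lets Terraform catch JSON mistakes at plan time. A minimal sketch of the same AdminUsers policy:

```hcl
resource "aws_iam_policy" "adminUser" {
  name = "AdminUsers"

  # jsonencode() converts an HCL object into a JSON string,
  # so quoting/comma errors surface during terraform plan.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "*"
        Resource = "*"
      }
    ]
  })
}
```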
π§ Pro Tip: Use Version Constraints
Add a version block to prevent provider upgrades that might break your setup:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.6.0"
}
}
}
πͺ£ Getting Started with AWS S3 (Simple Storage Service)
π What is Amazon S3?
Amazon S3 (Simple Storage Service) is a scalable object storage service used to store and retrieve any amount of data at any time from anywhere on the web.
π S3 Storage Structure
S3 organizes data in a flat structure using:
Component | Description |
Bucket | Top-level container for storing objects |
Object | File stored in a bucket (e.g. .jpg , .mp4 ) |
Key | Unique identifier for each object (like a filepath) |
Value | The actual file/data (object content) |
Metadata | Data about the object (size, owner, timestamp, etc.) |
Example Bucket: all-pets (us-west-1)
Object (Key) | URL |
pets.json | https://all-pets.s3.us-west-1.amazonaws.com/pets.json |
dog.jpg | https://all-pets.s3.us-west-1.amazonaws.com/dog.jpg |
cat.mp4 | https://all-pets.s3.us-west-1.amazonaws.com/cat.mp4 |
pictures/cat.jpg | https://all-pets.s3.us-west-1.amazonaws.com/pictures/cat.jpg |
videos/dog.mp4 | https://all-pets.s3.us-west-1.amazonaws.com/videos/dog.mp4 |
π Permissions and Access Control
There are two major ways to control access:
1. ACL (Access Control List)
Assigned to individual objects.
Example: dog.jpg → only Lucy can read.
2. Bucket Policy
JSON-based permission rules at the bucket level.
Example: read-objects.json → allow Lucy to GetObject from the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::all-pets/*",
"Principal": {
"AWS": [
"arn:aws:iam::123456123457:user/Lucy"
]
}
}
]
}
π§ Important Notes
Feature | Details |
Bucket Name | Must be globally unique and DNS-compliant |
Max Object Size | Up to 5 TB |
URL Format | https://<bucket-name>.s3.<region>.amazonaws.com/<key> |
Owner | User who created the bucket/object |
Last Modified | Tracks last time an object was updated |
π§Ύ Creating and Managing AWS S3 Buckets with Terraform
πͺ£ 1. Create an S3 Bucket
To provision an S3 bucket using Terraform:
resource "aws_s3_bucket" "finance" {
bucket = "finanace-21092020" # Must be globally unique
tags = {
Description = "Finance and Payroll"
}
}
After running terraform apply, the output:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Bucket ID: finanace-21092020
π 2. Upload a File to the Bucket
To upload a document (e.g. finance-2020.doc) to the S3 bucket:
resource "aws_s3_bucket_object" "finance-2020" {
bucket = aws_s3_bucket.finance.id
key = "finance-2020.doc"
content = file("/root/finance/finance-2020.doc")
}
Tip: the key is the object's name (like a file path), and content is loaded from your local filesystem.
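One hedged caveat on the example above: Terraform does not always detect that the local file's contents changed. A common pattern is to use source plus an etag computed with the built-in filemd5() function, so a changed file forces a re-upload. A sketch:

```hcl
resource "aws_s3_bucket_object" "finance-2020" {
  bucket = aws_s3_bucket.finance.id
  key    = "finance-2020.doc"
  source = "/root/finance/finance-2020.doc"

  # filemd5() hashes the local file; when the hash changes,
  # Terraform plans a re-upload of the object.
  etag = filemd5("/root/finance/finance-2020.doc")
}
```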
π₯ 3. Attach IAM Group & Define Permissions
IAM Group (Data Source)
data "aws_iam_group" "finance-data" {
group_name = "finance-analysts"
}
π 4. Set Bucket Policy to Allow Group Access
Create the bucket policy using <<EOF heredoc syntax:
resource "aws_s3_bucket_policy" "finance-policy" {
bucket = aws_s3_bucket.finance.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "arn:aws:s3:::${aws_s3_bucket.finance.id}/*",
"Principal": {
"AWS": [
"${data.aws_iam_group.finance-data.arn}"
]
}
}
]
}
EOF
}
Tip: <<EOF is heredoc syntax for writing multiline JSON inside HCL.
π§ͺ 5. Apply Everything
Run:
terraform apply
You will see Terraform output showing resources like:
- aws_s3_bucket.finance
- aws_s3_bucket_object.finance-2020
- aws_s3_bucket_policy.finance-policy
All created successfully!
π§Ύ Introduction to Amazon DynamoDB
πΉ What is DynamoDB?
Amazon DynamoDB is:
- A fully managed, serverless NoSQL database service from AWS.
- Designed for high performance with single-digit-millisecond latency.
- Horizontally scalable, with multi-region replication out of the box.
β Key Features
Feature | Description |
Fully Managed | No need to manage servers or infrastructure |
Highly Scalable | Handles millions of requests per second seamlessly |
Low Latency | Single-digit millisecond read/write |
NoSQL | Flexible schema, great for unstructured/semi-structured data |
Global Tables | Replicate data across multiple AWS regions |
Built-in Security | Encryption at rest, IAM access controls |
π Sample Table: cars
Let's consider a DynamoDB table storing car inventory:
π Sample Items
{
"Manufacturer": "Toyota",
"Make": "Corolla",
"Year": 2004,
"VIN": "4Y1SL65848Z411439"
}
{
"Manufacturer": "Honda",
"Make": "Civic",
"Year": 2017,
"VIN": "DY1SL65848Z411432"
}
{
"Manufacturer": "Dodge",
"Make": "Journey",
"Year": 2014,
"VIN": "SD1SL65848Z411443"
}
{
"Manufacturer": "Ford",
"Make": "F150",
"Year": 2020,
"VIN": "DH1SL65848Z41100"
}
π DynamoDB Table Design
Attribute | Description |
Manufacturer | Partition Key (Primary Identifier) |
Model | Sort Key (Optional) |
Year | Numeric attribute |
VIN | Unique Identifier (Secondary Index or attribute) |
Note: DynamoDB requires a primary key, which can be:
- A partition key alone (e.g., Manufacturer), or
- A combination of partition key + sort key (e.g., Manufacturer + Model)
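The composite-key option can be sketched in Terraform by declaring both a hash_key and a range_key. The table and attribute names below follow the design above; everything else is illustrative:

```hcl
resource "aws_dynamodb_table" "cars_by_model" {
  name         = "cars-by-model"
  billing_mode = "PAY_PER_REQUEST"

  hash_key  = "Manufacturer" # partition key
  range_key = "Model"        # sort key

  # Only key attributes need to be declared here.
  attribute {
    name = "Manufacturer"
    type = "S"
  }

  attribute {
    name = "Model"
    type = "S"
  }
}
```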
π Use Cases
Real-time inventory tracking (e.g., car dealers)
Session state storage
IoT device data logging
Leaderboards and user profiles for games
Event logging and analytics
π Example Access Pattern
To find all cars made by Honda:
aws dynamodb query \
--table-name cars \
--key-condition-expression "Manufacturer = :m" \
--expression-attribute-values '{":m":{"S":"Honda"}}'
π οΈ DynamoDB with Terraform
β Step 1: Create a DynamoDB Table
To create a table named cars with a primary key (VIN) using Terraform:
resource "aws_dynamodb_table" "cars" {
name = "cars"
hash_key = "VIN"
billing_mode = "PAY_PER_REQUEST" # On-demand pricing
attribute {
name = "VIN"
type = "S" # "S" = String
}
}
PAY_PER_REQUEST means you're charged per read/write; there is no need to define capacity units.
β Step 2: Add an Item to the Table
Use aws_dynamodb_table_item to insert a car record:
resource "aws_dynamodb_table_item" "car_items" {
table_name = aws_dynamodb_table.cars.name
hash_key = aws_dynamodb_table.cars.hash_key
item = <<EOF
{
"Manufacturer": {"S": "Toyota"},
"Model": {"S": "Corolla"},
"Year": {"N": "2004"},
"VIN": {"S": "4Y1SL65848Z411439"}
}
EOF
}
Use jsonencode() or heredoc syntax (<<EOF ... EOF) to structure your JSON item.
π Sample Multiple Items
You can repeat the aws_dynamodb_table_item resource block for each item, or modularize for dynamic item insertion (advanced):
{
"Manufacturer": {"S": "Honda"},
"Model": {"S": "Civic"},
"Year": {"N": "2017"},
"VIN": {"S": "DY1SL65848Z411432"}
}
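The "dynamic insertion" idea can be sketched with for_each over a local map plus jsonencode(). The locals and resource name below are illustrative, not part of the original config:

```hcl
locals {
  # VIN => remaining attributes; add entries to insert more items.
  cars = {
    "4Y1SL65848Z411439" = { Manufacturer = "Toyota", Model = "Corolla", Year = "2004" }
    "DY1SL65848Z411432" = { Manufacturer = "Honda", Model = "Civic", Year = "2017" }
  }
}

resource "aws_dynamodb_table_item" "car_items" {
  for_each   = local.cars
  table_name = aws_dynamodb_table.cars.name
  hash_key   = aws_dynamodb_table.cars.hash_key

  # DynamoDB items use typed attributes: {"S": ...} for strings,
  # {"N": ...} for numbers (passed as strings).
  item = jsonencode({
    VIN          = { S = each.key }
    Manufacturer = { S = each.value.Manufacturer }
    Model        = { S = each.value.Model }
    Year         = { N = each.value.Year }
  })
}
```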
π§ͺ Terraform Workflow
β Initialize:
terraform init
π Preview plan:
terraform plan
π Apply resources:
terraform apply
Confirm with yes.
β Output Example
aws_dynamodb_table.cars: Creation complete [id=cars]
aws_dynamodb_table_item.car-items: Creation complete [id=VIN=4Y1SL65848Z411439]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
π Notes
- Use "S" for string, "N" for number, "BOOL" for boolean.
- Each item block must be valid JSON and match the defined schema.
- For production, use IAM roles and policies for permission control.
- You can define additional settings such as range_key, ttl, or tags.
π¦ Terraform Remote State & Best Practices
π What is Terraform State?
- terraform.tfstate stores the current state of your infrastructure.
- Maps real-world resources to your Terraform configuration.
- Tracks metadata (e.g., resource IDs, IPs, volumes).
- Enables performance optimization by caching state data.
β οΈ Why You Should NOT Store State Files in Version Control
Never commit terraform.tfstate to Git or any other VCS.
Reasons:
It can contain sensitive information (access keys, IPs, passwords).
Leads to merge conflicts when used by teams.
Insecure and not designed for collaborative access.
Risk of accidental changes and security breaches.
β Use remote backends (like AWS S3, Terraform Cloud) for secure, shared state management.
β Remote State: Real-World Usage
Remote Backends Examples:
AWS S3 (with optional DynamoDB for state locking)
Terraform Cloud
Google Cloud Storage
HashiCorp Consul
π State Locking
When using remote backends, Terraform can lock the state file to prevent concurrent writes.
Example Error:
Error: Error locking state: Error acquiring the state lock: resource temporarily unavailable
π Why Locking Matters:
Prevents simultaneous state updates from multiple users.
Ensures infrastructure consistency.
Uses DynamoDB (in AWS) or similar to track active locks.
π§ͺ Example: Remote State with AWS S3
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "env/dev/terraform.tfstate"
region = "us-west-2"
encrypt = true
dynamodb_table = "terraform-locks" # For state locking
}
}
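The dynamodb_table referenced above must already exist and expose a string attribute named LockID as its hash key; that attribute name is what Terraform's S3 backend expects. A minimal sketch of that table, assuming the terraform-locks name used above:

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # attribute name required by the S3 backend

  attribute {
    name = "LockID"
    type = "S"
  }
}
```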
π Benefits of Remote State
Feature | Local State | Remote State |
Collaboration | β | β |
Security | β | β (Encrypted) |
State Locking | β | β |
Backup & Recovery | β | β |
Automation Friendly | β | β (CI/CD Ready) |
π Common Files
- main.tf: infrastructure definitions
- terraform.tfstate: auto-generated state file (do not track it!)
- terraform.tfvars: variable values
- .terraform/: Terraform cache folder
β Final Best Practices
- Use remote state for collaboration and automation.
- Never push terraform.tfstate to Git.
- Enable state locking to avoid conflicts.
- Use backend encryption for added security.
π¦ Remote State with S3 Backend (Best Practice)
β Why Use Remote Backend?
- Store terraform.tfstate securely in a central S3 bucket.
- Enable team collaboration without state file conflicts.
- Support automatic state locking with DynamoDB.
- Avoid checking .tfstate into version control.
π§Ύ File Structure
$ ls
main.tf # Resource definitions
terraform.tf # Backend config (S3 + DynamoDB)
main.tf: Resource Definition
resource "local_file" "pet" {
filename = "/root/pets.txt"
content = "We love pets!"
}
terraform.tf: Remote Backend Configuration
Do not put this block inside main.tf. Keep it separate in terraform.tf.
terraform {
backend "s3" {
bucket = "kodekloud-terraform-state-bucket01"
key = "finance/terraform.tfstate"
region = "us-west-1"
dynamodb_table = "state-locking" # Enables state locking
}
}
π State Locking with DynamoDB
- Prevents simultaneous modifications.
- Table: state-locking
- Required when working with remote teams.
βΆοΈ Commands Workflow
- Initialize the backend
$ terraform init
(Prompts to copy existing local state → enter yes.)
- Remove the local state file
$ rm -rf terraform.tfstate
- Apply the configuration
$ terraform apply
Youβll see messages like:
Acquiring state lock... Releasing state lock...
π Warning: Never Track State Files in Version Control
Do not commit terraform.tfstate or the .terraform/ directory. (The dependency lock file .terraform.lock.hcl, by contrast, is generally recommended for version control.)
Use .gitignore:
terraform.tfstate
.terraform/
terraform.tfstate.backup
π§ Summary
Feature | Local State | Remote State (S3) |
Team Collaboration | β | β |
State Locking | β | β (DynamoDB) |
Security | β | β (S3 encryption) |
Performance & Backup | β | β (Versioning support) |
π¦ Terraform State Management
Terraform maintains infrastructure state in a .tfstate file. This file tracks the mapping between your Terraform configuration and real-world resources.
π§ Common State Subcommands
Command | Purpose |
state list | Lists all resources tracked in the state |
state show | Shows detailed attributes of a specific resource |
state mv | Renames or moves a resource in the state |
state pull | Retrieves the raw state data |
state rm | Removes a resource from the state |
π Examples
π 1. terraform state list
Lists all resources stored in the current state:
$ terraform state list
aws_dynamodb_table.cars
aws_s3_bucket.finance-2020922
π 2. terraform state show [resource]
Shows detailed info about a resource from the state:
$ terraform state show aws_s3_bucket.finance-2020922
π Output snippet:
bucket = "finance-2020922"
region = "us-west-1"
tags = {
"Description" = "Bucket to store Finance and Payroll Information"
}
π 3. terraform state mv
Moves or renames a resource inside the state (without recreating):
$ terraform state mv aws_dynamodb_table.state-locking aws_dynamodb_table.state-locking-db
β Output:
Successfully moved 1 object(s).
π€ 4. terraform state pull
Fetches the entire raw state in JSON format:
$ terraform state pull | jq '.resources[] | select(.name=="state-locking-db") | .instances[].attributes.hash_key'
β Output:
"LockID"
β 5. terraform state rm
Removes a resource from the state (but not from AWS):
$ terraform state rm aws_s3_bucket.finance-2020922
Warning: this orphans the real-world resource unless you terraform import it again.
Important: Do NOT Edit .tfstate Manually
Always use the Terraform CLI to safely view or modify state. Editing the state file directly can corrupt it and cause resource drift.
π Introduction to AWS EC2 (Elastic Compute Cloud)
AWS EC2 provides resizable compute capacity in the cloud. It allows you to launch virtual machines (instances) with various configurations of CPU, memory, storage, and networking.
πΌοΈ Amazon Machine Images (AMIs)
An AMI is a pre-configured template for your EC2 instance including the OS and software.
OS / Platform | AMI ID (Example) |
Amazon Linux 2 | ami-0c2f25c1f66a1ff4d |
RHEL 8 (Web Server) | ami-04312317b9c8c4b51 |
Ubuntu 20.04 (MySQL) | ami-0edab43b6fa892279 |
Windows Server (ASP.NET) | Windows Server 2019 AMI |
π‘ Instance Types
Different types for different use cases:
π General Purpose (T2 Series)
Instance | vCPU | Memory (GB) |
t2.nano | 1 | 0.5 |
t2.micro | 1 | 1 |
t2.small | 1 | 2 |
t2.medium | 2 | 4 |
t2.large | 2 | 8 |
t2.xlarge | 4 | 16 |
t2.2xlarge | 8 | 32 |
πΎ EBS Volume Types (Storage)
Elastic Block Store (EBS) is block-level storage attached to EC2 instances.
Name | Type | Description |
io1 | SSD | Business-critical apps |
io2 | SSD | Latency-sensitive transactions |
gp2 | SSD | General purpose |
st1 | HDD | Low-cost, frequently accessed |
sc1 | HDD | Lowest cost, infrequent access |
π More on EBS Volumes
π Access Methods
Linux/Ubuntu/RHEL: SSH using Key Pair (PEM file)
Windows: Connect via RDP using username/password
π§ User Data (Linux Example)
Use user data for automation during instance launch:
#!/bin/bash
sudo apt update
sudo apt install nginx -y
systemctl enable nginx
systemctl start nginx
This script installs and starts Nginx on an Ubuntu web server.
βοΈ Example Use Cases
OS | Workload Type |
Ubuntu | MySQL Database |
RHEL | Web Server (Apache/Nginx) |
Windows | ASP.NET Core Application |
π Summary
EC2 = Virtual Machine in the cloud
AMI = Prebuilt OS image
Instance Type = Choose based on workload
EBS Volume = Persistent storage
User Data = Automate instance configuration
π Provisioning AWS EC2 Web Server with Terraform
π¦ Purpose
To launch an Ubuntu 20.04 EC2 instance with:
NGINX pre-installed
SSH access enabled
SSH key pair for authentication
Security Group allowing inbound SSH
Output the public IP for remote login.
π Terraform Project Structure
project/
βββ main.tf
βββ provider.tf
βββ output.tf
βββ /root/.ssh/web.pub (your SSH public key)
π§ provider.tf
provider "aws" {
region = "us-west-1"
}
π§ main.tf
resource "aws_key_pair" "web" {
public_key = file("/root/.ssh/web.pub")
}
resource "aws_security_group" "ssh-access" {
name = "ssh-access"
description = "Allow SSH access from the Internet"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "webserver" {
ami = "ami-0edab43b6fa892279" # Ubuntu 20.04 LTS
instance_type = "t2.micro"
key_name = aws_key_pair.web.id
vpc_security_group_ids = [aws_security_group.ssh-access.id]
user_data = <<-EOF # user_data runs only on first launch
#!/bin/bash
sudo apt update
sudo apt install nginx -y
systemctl enable nginx
systemctl start nginx
EOF
tags = {
Name = "webserver"
Description = "An NGINX WebServer on Ubuntu"
}
}
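One hedged caveat about the security group above: when Terraform manages an aws_security_group, it removes AWS's implicit allow-all egress rule, so without an explicit egress block the instance may have no outbound access and the apt commands in user_data would fail. A sketch of the usual fix:

```hcl
resource "aws_security_group" "ssh-access" {
  name        = "ssh-access"
  description = "Allow SSH access from the Internet"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Restore outbound access (Terraform strips AWS's implicit
  # allow-all egress rule when it manages the group).
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```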
π€ output.tf
output "publicip" {
value = aws_instance.webserver.public_ip
}
β Terraform Commands
$ terraform init # Initialize provider plugins
$ terraform plan # Review execution plan
$ terraform apply # Provision the infrastructure
π§ SSH Access
After successful apply:
$ ssh -i /root/.ssh/web ubuntu@<public_ip>
Replace <public_ip> with the public IP output by Terraform.
π’ Validate NGINX
On the EC2 instance:
$ systemctl status nginx
You should see an active (running) status.
π Terraform Provisioners
Provisioners in Terraform allow you to execute scripts or commands either on the local machine (where Terraform is running) or on the remote resource (like an EC2 instance).
πΉ Types of Provisioners
Type | Purpose |
remote-exec | Run commands on the remote resource via SSH or WinRM |
local-exec | Run commands on the local machine (e.g., for automation/logging) |
π§ Example: AWS EC2 with remote-exec
π main.tf
resource "aws_key_pair" "web" {
public_key = file("/root/.ssh/web.pub")
}
resource "aws_security_group" "ssh-access" {
name = "ssh-access"
description = "Allow SSH access from the Internet"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "webserver" {
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
key_name = aws_key_pair.web.id
vpc_security_group_ids = [aws_security_group.ssh-access.id]
provisioner "remote-exec" {
inline = [
"sudo apt update",
"sudo apt install nginx -y",
"sudo systemctl enable nginx",
"sudo systemctl start nginx"
]
connection {
type = "ssh"
user = "ubuntu"
private_key = file("/root/.ssh/web")
host = self.public_ip
}
}
tags = {
Name = "webserver"
}
}
β What it does: Once the EC2 instance is created, Terraform connects to it using SSH and installs NGINX.
π§ Example: AWS EC2 with local-exec
resource "aws_instance" "webserver" {
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
key_name = aws_key_pair.web.id
vpc_security_group_ids = [aws_security_group.ssh-access.id]
provisioner "local-exec" {
command = "echo ${self.public_ip} >> /tmp/ips.txt"
}
tags = {
Name = "webserver"
}
}
What it does: after the EC2 instance is created, its public IP is appended to a local file (/tmp/ips.txt).
β οΈ Best Practices for Provisioners
π Avoid relying on provisioners in production. Use cloud-init, user_data, or configuration management tools like Ansible or Chef.
- Use provisioners for quick testing, POCs, or initial configuration.
- Always ensure SSH key access and network rules (security groups) are configured before using remote-exec.
βοΈ Terraform Provisioner Behavior
Provisioners let you run scripts or commands at specific points in the resource lifecycle: at creation time or just before destruction.
π’ 1. Creation-Time Provisioner
These run immediately after the resource is created.
π main.tf
resource "aws_instance" "webserver" {
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
provisioner "local-exec" {
command = "echo Instance ${self.public_ip} Created! > /tmp/instance_state.txt"
}
}
β Output
$ cat /tmp/instance_state.txt
Instance 3.96.136.157 Created!
π΄ 2. Destroy-Time Provisioner
These run just before the resource is destroyed, enabled by setting when = destroy.
π main.tf
resource "aws_instance" "webserver" {
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
provisioner "local-exec" {
command = "echo Instance ${self.public_ip} Created! > /tmp/instance_state.txt"
}
provisioner "local-exec" {
when = destroy
command = "echo Instance ${self.public_ip} Destroyed! > /tmp/instance_state.txt"
}
}
Output after terraform destroy:
$ cat /tmp/instance_state.txt
Instance 3.96.136.157 Destroyed!
β οΈ 3. Provisioner Failure Behavior
Use on_failure to control what Terraform does when a provisioner fails:
Option | Behavior |
fail | β Default β stops execution and fails the Terraform run |
continue | β Logs the error but continues execution |
π Example with fail
provisioner "local-exec" {
command = "echo ${self.public_ip} > /temp/pub_ip.txt" # invalid path
on_failure = fail # unquoted keyword in Terraform 0.12+
}
This stops the apply due to the invalid path (/temp instead of /tmp).
π Example with continue
provisioner "local-exec" {
command = "echo ${self.public_ip} > /temp/pub_ip.txt" # still invalid
on_failure = continue
}
β Execution continues even if the provisioner command fails.
β Terraform Provisioners: Key Considerations
Provisioners allow custom scripts/commands to run after resource creation or before destruction. Use them wisely and sparingly.
π§ Provisioner Types
Type | Runs On | Use Case Example |
local-exec | Local machine | Notify via Slack, copy data to a local file, etc. |
remote-exec | Remote resource (e.g. EC2) | Install packages, configure services (e.g. NGINX) |
π οΈ Example: remote-exec
resource "aws_instance" "webserver" {
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
tags = {
Name = "webserver"
Description = "An NGINX WebServer on Ubuntu"
}
provisioner "remote-exec" {
inline = ["echo $(hostname -i) >> /tmp/ips.txt"]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = file("/root/.ssh/web")
host = self.public_ip
}
}
π Remote-Exec Requirements
Ensure the following for a successful remote-exec:
π Security Group allows SSH (port 22) or WinRM (port 5985/5986)
ποΈ SSH Key Pair is available and injected
π EC2 instance has public IP (or private IP with VPN/Direct Connect)
π‘ Hostname resolution or direct IP accessible
β Considerations
- Provisioners are not idempotent: if a provisioner fails or partially succeeds, re-running may produce unintended results.
- Provisioners do not show up in terraform plan; they run only during the apply phase.
- Avoid using provisioners for routine configuration. Prefer user_data, AMIs, or configuration management tools.
π Provisioner vs Cloud-Native Bootstrapping
Use Case | Recommended Method |
Simple package install | user_data , custom_data , metadata_startup_script |
Complex software/config setup | Pre-built AMI or Configuration Management (Ansible, Chef) |
One-time notification/cleanup | local-exec or remote-exec |
Launching scripts post-deploy | Use remote-exec (with caution) |
π Provider Specific Metadata Options
Cloud Provider | Resource Type | Metadata / Script Field |
AWS | aws_instance | user_data |
Azure | azurerm_virtual_machine | custom_data |
GCP | google_compute_instance | metadata_startup_script |
VMware vSphere | vsphere_virtual_machine | user_data |
π― Alternative to Provisioners: Custom AMI
Instead of scripting NGINX installation each time, build a custom AMI using tools like Packer:
nginx-build.json
{
"builders": [{ ... }],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo apt update",
"sudo apt install -y nginx"
]
}
]
}
β‘οΈ Use this AMI in Terraform:
ami = "ami-XYZ" # Your custom NGINX AMI
π Final Notes
π Use provisioners only when absolutely necessary
π‘οΈ For robust and secure infrastructure, prefer immutable infrastructure principles
π Use user_data for most bootstrapping tasks in cloud VMs
Terraform taint Command
What is taint?
terraform taint marks a resource for forced recreation during the next terraform apply.
Useful when the resource is still running but misconfigured or corrupted, and you want Terraform to destroy and recreate it.
β Syntax
terraform taint <resource_name>
π Example:
terraform taint aws_instance.webserver
Result:
- Terraform marks the resource as "tainted".
- The next terraform apply will destroy the current resource, then recreate it.
π§Ή To Reverse It
terraform untaint <resource_name>
π Example:
terraform untaint aws_instance.webserver
Result:
- Terraform removes the taint flag.
- No changes will be made on the next apply (if the config is unchanged).
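A version-dependent aside: since Terraform v0.15.2, the taint command is deprecated in favor of the -replace plan option, which forces the same recreation in a single step:

```
$ terraform apply -replace="aws_instance.webserver"
```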
Terraform Plan Output (after taint)
# aws_instance.webserver is tainted, so must be replaced
-/+ resource "aws_instance" "webserver" {
This means Terraform will destroy the current instance and then create a new one.
π οΈ Terraform Provisioner Failure Example
provisioner "local-exec" {
command = "echo ${aws_instance.webserver.public_ip} > /temp/pub_ip.txt"
}
β This causes an error on Windows:
Error: The system cannot find the path specified.
Fix: use a valid directory path, such as:
command = "echo ${aws_instance.webserver.public_ip} > C:\\temp\\pub_ip.txt"
π Terraform Debugging
π Enable Debug Logs
export TF_LOG=TRACE
Other log levels (most to least verbose): TRACE > DEBUG > INFO > WARN > ERROR.
TRACE provides the most detailed information.
π Log to File
export TF_LOG_PATH=/tmp/terraform.log
β View logs:
head -10 /tmp/terraform.log
π§Ό Turn off logging:
unset TF_LOG
unset TF_LOG_PATH
β Terraform Import
Terraform allows importing existing resources (e.g., EC2, S3, Route53) so they can be managed via Terraform configuration and state.
π― Objective: Import an existing EC2 instance into Terraform
πͺ Step-by-Step Guide
Step 1: Use a data block to read the existing resource (optional)
You used a data block like this:
data "aws_instance" "newserver" {
instance_id = "i-026e13be10d5326f7"
}
output "newserver" {
value = data.aws_instance.newserver.public_ip
}
Then:
$ terraform apply
π¦ Output:
Apply complete!
Outputs:
newserver = 15.223.1.176
This only reads the existing resource; it does not manage it.
β Step 2: Attempt to import the resource (Fails without config)
You ran:
$ terraform import aws_instance.webserver-2 i-026e13be10d5326f7
π Error:
Error: resource address "aws_instance.webserver-2" does not exist in the configuration.
Before importing this resource, please create its configuration in the root module.
Step 3: Add the matching resource block to main.tf
You created:
resource "aws_instance" "webserver-2" {
# (resource arguments)
}
Just a basic skeleton; it doesn't need full values yet.
Step 4: Run terraform import again
$ terraform import aws_instance.webserver-2 i-026e13be10d5326f7
π¦ Output:
aws_instance.webserver-2: Importing from ID "i-026e13be10d5326f7"...
Import successful!
Step 5: Review terraform.tfstate
After import, Terraform records the actual live configuration:
{
"type": "aws_instance",
"name": "webserver-2",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"attributes": {
"ami": "ami-0edab43b6fa892279",
"instance_type": "t2.micro",
"key_name": "ws",
"tags": {
"Name": "old-ec2"
},
"vpc_security_group_ids": ["sg-8064fdee"]
}
}
]
}
Step 6: Update main.tf with accurate values
You refined your resource block to match the imported state:
resource "aws_instance" "webserver-2" {
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
key_name = "ws"
vpc_security_group_ids = ["sg-8064fdee"]
tags = {
Name = "old-ec2"
}
}
Step 7: Run terraform plan
$ terraform plan
βοΈ Output:
No changes. Infrastructure is up-to-date.
β This confirms that Terraform state and config are now in sync.
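As a newer alternative (Terraform v1.5+), the same import can be declared in configuration with an import block, so it runs as part of plan/apply instead of a separate CLI step. A sketch using the instance ID from above:

```hcl
# Declarative import: on the next plan/apply, Terraform binds the
# existing instance to the aws_instance.webserver-2 resource block.
import {
  to = aws_instance.webserver-2
  id = "i-026e13be10d5326f7"
}
```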
β Terraform Modules
π¦ What is a Module?
A module is a container that groups multiple Terraform resources together. It helps with:
- Reusability
- Maintainability
- A cleaner main.tf
- Easier collaboration and team management
πͺ Section 1: Using a Local Module
π§± 1. Create Reusable Module β /aws-instance
π Folder: /root/terraform-projects/aws-instance
resource "aws_instance" "webserver" {
ami = var.ami
instance_type = var.instance_type
key_name = var.key
}
variable "ami" {
type = string
default = "ami-0edab43b6fa892279"
description = "AMI ID"
}
variable "instance_type" {
type = string
default = "t2.micro"
}
variable "key" {
type = string
}
π§ 2. Consume Module in Root Project β /development
π Folder: /root/terraform-projects/development
module "dev-webserver" {
source = "../aws-instance"
ami = "ami-0edab43b6fa892279"
instance_type = "t2.micro"
key = "dev-key"
}
βΆοΈ 3. Initialize & Apply
$ terraform init
$ terraform apply
βοΈ Output: EC2 created from the module.
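To surface values from inside a local module (e.g., the instance's public IP), the module would declare an output and the root config would reference it as module.<name>.<output>. A sketch, assuming an outputs.tf is added to the module:

```hcl
# In /aws-instance/outputs.tf (module side)
output "public_ip" {
  value = aws_instance.webserver.public_ip
}

# In /development/main.tf (root side): re-export the module's value
output "dev_webserver_ip" {
  value = module.dev-webserver.public_ip
}
```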
π§© Section 2: Complex Modules with Reuse (Payroll App)
π Reusable Module β /modules/payroll-app
π Folder: /root/terraform-projects/modules/payroll-app
Module Files:
Sample: app_server.tf
resource "aws_instance" "app_server" {
ami = var.ami
instance_type = "t2.medium"
tags = {
Name = "${var.app_region}-app-server"
}
depends_on = [
aws_dynamodb_table.payroll_db,
aws_s3_bucket.payroll_data
]
}
Sample: s3_bucket.tf
resource "aws_s3_bucket" "payroll_data" {
bucket = "${var.app_region}-${var.bucket}"
}
Sample: dynamodb_table.tf
resource "aws_dynamodb_table" "payroll_db" {
name = "user_data"
billing_mode = "PAY_PER_REQUEST"
hash_key = "EmployeeID"
attribute {
name = "EmployeeID"
type = "N"
}
}
Sample: variables.tf
variable "app_region" {
type = string
}
variable "bucket" {
type = string
default = "flexit-payroll-alpha-22001c"
}
variable "ami" {
type = string
}
πΊπΈ US Payroll App
π /us-payroll-app
module "us_payroll" {
source = "../modules/payroll-app"
app_region = "us-east-1"
ami = "ami-24e140119877avm"
}
provider "aws" {
region = "us-east-1"
}
π οΈ Run:
$ terraform init
$ terraform apply
π¬π§ UK Payroll App
π /uk-payroll-app
module "uk_payroll" {
source = "../modules/payroll-app"
app_region = "eu-west-2"
ami = "ami-35e140119877avm"
}
provider "aws" {
region = "eu-west-2"
}
π οΈ Run:
$ terraform init
$ terraform apply
β Standardized config for multiple regions with shared logic.
π Section 3: Using Modules from Terraform Registry
π Example: SSH Security Group Module
module "security-group_ssh" {
source = "terraform-aws-modules/security-group/aws//modules/ssh"
version = "3.16.0"
vpc_id = "vpc-7d8d215"
ingress_cidr_blocks = [ "10.10.0.0/16" ]
name = "ssh-access"
}
π Get the module:
$ terraform get
β Benefits of Modules
Before Modules | With Modules |
Duplicate .tf files | Shared logic via module blocks |
Hard to maintain infra per region | Region-based folder (us/uk) w/ modules |
High risk on changes | Lower risk with modular changes |
Complex top-level main.tf | Simpler root configs |
π§ Terraform Functions & Conditional Logic β Explained with Examples
π 1. Why Functions?
Terraform functions allow you to transform, manipulate, or validate values inside configurations and templates. They increase reusability, reduce duplication, and improve logic-based deployments.
π’ 2. Numeric Functions
Function | Description | Example | Result |
max() | Returns max number | max(-1, 2, -10, 200, -250) | 200 |
min() | Returns min number | min(-1, 2, -10, 200, -250) | -250 |
ceil() | Rounds up to nearest integer | ceil(10.1) / ceil(10.9) | 11 |
floor() | Rounds down | floor(10.9) | 10 |
variable "num" {
type = set(number)
default = [250, 10, 11, 5]
description = "A set of numbers"
}
π€ 3. String Functions
Function | Description | Example | Result |
split() | Splits string to list | split(",", "a,b,c") | [a, b, c] |
join() | Joins list to string | join(",", ["a", "b", "c"]) | a,b,c |
lower() | Converts to lowercase | lower("ABC") | abc |
upper() | Converts to uppercase | upper("abc") | ABC |
title() | Capitalizes each word | title("abc,def") | Abc,Def |
substr() | Gets substring | substr("ami-xyz,ABC", 0, 7) | ami-xyz |
π§Ί 4. Collection Functions
List Example
variable "ami" {
type = list(string)
default = ["ami-xyz", "AMI-ABC", "ami-efg"]
}
Function | Description | Example | Result |
length() | Count items | length(var.ami) | 3 |
index() | Index of item | index(var.ami, "AMI-ABC") | 1 |
element() | Item at index | element(var.ami, 2) | ami-efg |
contains() | Check if item exists | contains(var.ami, "AMI-ABC") | true |
πΊοΈ 5. Map Functions
Map Example
variable "ami" {
type = map(string)
default = {
"us-east-1" = "ami-xyz",
"ca-central-1" = "ami-efg",
"ap-south-1" = "ami-ABC"
}
}
| Function | Description | Example | Result |
| --- | --- | --- | --- |
| keys() | Lists all map keys | keys(var.ami) | ["ap-south-1", ...] |
| values() | Lists all map values | values(var.ami) | ["ami-ABC", ...] |
| lookup() | Gets a value by key (with default) | lookup(var.ami, "us-west-2", "ami-pqr") | "ami-pqr" |
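When the key does exist, `lookup()` ignores the default and returns the matching value; `keys()` returns the keys in lexical order. With the `var.ami` map above:

```hcl
> lookup(var.ami, "us-east-1", "ami-pqr")   # key exists, default is ignored
"ami-xyz"
> keys(var.ami)                             # keys come back sorted
["ap-south-1", "ca-central-1", "us-east-1"]
```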
π 6. Type Conversion
| Function | Purpose | Example | Result |
| --- | --- | --- | --- |
| toset() | Converts a list to a set (removes duplicates) | toset(["a", "a", "b"]) | ["a", "b"] |
| tolist() | Converts other collection types to a list | tolist(toset(["a", "b"])) | ["a", "b"] |
| tonumber() | Converts a string to a number | tonumber("5") | 5 |
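A common real-world use of `toset()` is feeding a list variable to `for_each`, which accepts only sets and maps (as covered in the for_each section earlier). A minimal sketch, assuming a hypothetical `var.users` list:

```hcl
variable "users" {
  type    = list(string)
  default = ["alice", "bob", "alice"]   # duplicate "alice" is dropped by toset()
}

resource "aws_iam_user" "this" {
  for_each = toset(var.users)   # iterates over the deduplicated set
  name     = each.value
}
```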
π 7. Operators
Numeric, Equality, Comparison
> 1 + 2 // 3
> 8 == 8 // true
> 5 > 7 // false
> 4 < 5 // true
Logical
> 8 > 7 && 8 < 10 // true
> 8 > 10 || 8 < 10 // true
> ! true // false
β 8. Conditional Expressions
Syntax:
condition ? true_val : false_val
π Example β Password Generator
resource "random_password" "password-generator" {
length = var.length < 8 ? 8 : var.length
}
variable "length" {
type = number
description = "The length of the password"
}
Run Example:
$ terraform apply -var=length=5 -auto-approve
βοΈ Output: Will create password with length = 8
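To actually see the generated value, an output block can expose the `result` attribute; Terraform requires marking it `sensitive` because `random_password` treats its result as sensitive:

```hcl
output "password" {
  value     = random_password.password-generator.result
  sensitive = true
}
```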
π§ͺ 9. terraform console
β Test Functions
$ terraform console
> length([1,2,3])
3
> split(",", "a,b,c")
["a", "b", "c"]
> var.length < 8 ? 8 : var.length
8
π§ Terraform Workspaces β Real World Usage
β What are Terraform Workspaces?
Terraform Workspaces allow you to use the same configuration to manage multiple environments (like dev, staging, prod) while keeping independent state files.
π Project Structure Overview
/root/terraform-projects/project/
├── main.tf
├── variables.tf
└── terraform.tfstate.d/
    ├── ProjectA/
    │   └── terraform.tfstate
    └── ProjectB/
        └── terraform.tfstate
ποΈ Why Use Workspaces?
- Separate environments (e.g., ProjectA, ProjectB)
- Prevent state file overwrites
- Simplify environment isolation without duplicating code
π οΈ Steps to Use Terraform Workspaces
- Initialize Terraform Project
$ terraform init
- Create and Switch to a Workspace
$ terraform workspace new ProjectA
$ terraform workspace new ProjectB
- Check Current Workspace
$ terraform workspace show
- Switch Workspace
$ terraform workspace select ProjectA
- List Workspaces
$ terraform workspace list
π¦ Dynamic Configuration with Workspaces
π variables.tf
variable "ami" {
type = map(string)
default = {
"ProjectA" = "ami-0edab43b6fa892279",
"ProjectB" = "ami-0c2f25c1f66a1ff4d"
}
}
variable "instance_type" {
default = "t2.micro"
}
π main.tf
resource "aws_instance" "project" {
ami = lookup(var.ami, terraform.workspace)
instance_type = var.instance_type
tags = {
Name = terraform.workspace
}
}
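The `terraform.workspace` value also works inside conditional expressions, e.g. to size instances per workspace. A hypothetical sketch, not part of the original config:

```hcl
locals {
  # Hypothetical per-workspace sizing: t2.small for ProjectB, the default otherwise
  instance_type = terraform.workspace == "ProjectB" ? "t2.small" : var.instance_type
}
```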
π Verify in Console
$ terraform console
> terraform.workspace
"ProjectA"
> lookup(var.ami, terraform.workspace)
"ami-0edab43b6fa892279"
β Final Results
Each workspace manages its own state:
terraform.tfstate.d/
├── ProjectA/ → EC2 with AMI-A
└── ProjectB/ → EC2 with AMI-B
π Use Case
Perfect for managing multiple isolated deployments (per project, team, environment) without duplicating Terraform code.
Written by Arindam Baidya