Terraform Interview Questions

Table of contents
- 1. What is Terraform, and how is it different from other IaC tools?
- 2. How do you call a main.tf module?
- 3. What exactly is Sentinel? Can you provide a few examples we can use for Sentinel policies?
- 4. You have a Terraform configuration file that defines an infrastructure deployment. However, multiple instances of the same resource need to be created. How would you modify the configuration file to achieve this?
- 5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
- 6. The below command will destroy everything that is being created in the infrastructure. Tell us how you would save any particular resource while destroying the complete infrastructure.
- 7. Which module is used to store the .tfstate file in S3?
- 8. How do you manage sensitive data in Terraform, such as API keys or passwords?
- 9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
- 10. Who maintains Terraform providers?
- 11. How can we export data from one module to another?
1. What is Terraform, and how is it different from other IaC tools?
Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It enables users to define and provision infrastructure using a declarative configuration language called HashiCorp Configuration Language (HCL). Terraform is widely used for managing cloud services, on-premise infrastructure, and multi-cloud environments efficiently.
Key Features of Terraform:
Declarative Approach: Users define the desired state of infrastructure, and Terraform ensures that it matches.
Multi-Cloud Support: Works across AWS, Azure, GCP, Kubernetes, and other providers.
Immutable Infrastructure: Instead of modifying existing resources, it replaces them, reducing configuration drift.
State Management: Stores infrastructure state in a file (terraform.tfstate), which helps track changes.
Dependency Management: Automatically determines the order of resource creation and modification.
How Does Terraform Differ from Other IaC Tools?
Feature | Terraform | Ansible | CloudFormation | Pulumi
--- | --- | --- | --- | ---
Language | HCL | YAML | JSON/YAML | Python, TypeScript, Go
Approach | Declarative | Procedural & Declarative | Declarative | Imperative & Declarative
Cloud-Agnostic | Yes | Yes | No (AWS-only) | Yes
State Management | Yes | No | Yes | Yes
Resource Handling | Immutable | Mutable | Immutable | Mutable & Immutable
Provisioning Type | Primarily declarative | Configuration Management | AWS Infrastructure | Code-driven Infrastructure
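To make the declarative approach concrete, here is a minimal, hypothetical HCL sketch: you describe the resource you want, and Terraform works out the create, update, or delete steps needed to reach that state.
# Desired state only; Terraform plans the steps to get there (bucket name is a placeholder)
resource "aws_s3_bucket" "demo" {
  bucket = "my-demo-bucket"
}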
2. How do you call a main.tf module?
In Terraform, modules are reusable components that help organize infrastructure code. You can call a module in your main.tf file using the module block.
1. Folder Structure for a Module
Before calling a module, ensure you have a proper directory structure.
project/
├── main.tf          # Calls the module
├── variables.tf     # Input variables (optional)
├── outputs.tf       # Output variables (optional)
└── modules/
    └── my_module/
        ├── main.tf        # Module definition
        ├── variables.tf   # Module variables
        └── outputs.tf     # Module outputs
2. Define a Module (modules/my_module/main.tf)
A module is a Terraform configuration with its own main.tf, variables.tf, and outputs.tf.
Example:
# modules/my_module/main.tf
resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = var.instance_type
}
Define its variables (modules/my_module/variables.tf):
variable "ami" {}
variable "instance_type" {}
Define outputs (modules/my_module/outputs.tf):
output "instance_id" {
value = aws_instance.example.id
}
3. Call the Module in main.tf
To use the module, reference it in the root main.tf file:
module "ec2_instance" {
source = "./modules/my_module"
ami = "ami-0c55b159cbfafe1f0" # Example AMI ID
instance_type = "t2.micro"
}
4. Apply Terraform Commands
Run the following commands to deploy the module:
terraform init # Initialize Terraform
terraform apply # Apply configuration
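If the root module needs to expose values produced inside the module, it can re-export them. A minimal sketch, assuming the ec2_instance module call and the instance_id output defined above:
# Root outputs.tf: forward the module's output to the root level
output "instance_id" {
  value = module.ec2_instance.instance_id
}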
3. What exactly is Sentinel? Can you provide a few examples we can use for Sentinel policies?
Sentinel is a policy-as-code framework developed by HashiCorp. It allows organizations to enforce governance, security, and compliance rules across their infrastructure, particularly within Terraform Enterprise and Terraform Cloud. Sentinel ensures that infrastructure changes meet predefined policies before being applied.
It is embedded within Terraform workflows and follows a declarative policy approach using its own Sentinel policy language.
Key Features of Sentinel
Fine-Grained Control: Define detailed policies to enforce security, compliance, and operational best practices.
Automated Policy Enforcement: Prevents misconfigurations before deployment.
Integration with Terraform: Works with Terraform Cloud & Enterprise to validate infrastructure plans.
Multiple Enforcement Levels: Policies can be set as advisory, soft-mandatory, or hard-mandatory.
Examples of Sentinel Policies
1. Restricting AWS Instance Types
Ensures that only specific EC2 instance types are allowed.
import "tfplan"
allowed_types = ["t2.micro", "t3.small"]
main = rule {
all tfplan.resources.aws_instance as instances {
all instances as instance {
instance.applied.instance_type in allowed_types
}
}
}
2. Enforcing Tagging Policy
Requires that all AWS instances carry a mandatory Environment tag.
import "tfplan"
main = rule {
all tfplan.resources.aws_instance as instances {
all instances as instance {
"Environment" in keys(instance.applied.tags)
}
}
}
3. Restricting S3 Bucket Public Access
Blocks S3 bucket creation if public access is enabled.
import "tfplan"
main = rule {
all tfplan.resources.aws_s3_bucket as buckets {
all buckets as bucket {
bucket.applied.acl != "public-read" and bucket.applied.acl != "public-read-write"
}
}
}
Sentinel Enforcement Modes
Advisory: Warns users but allows execution.
Soft Mandatory: Blocks execution but can be overridden.
Hard Mandatory: Strictly enforces policies with no override option.
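In Terraform Cloud/Enterprise, the enforcement level is declared per policy in the policy set's sentinel.hcl file; a minimal sketch (policy and file names are placeholders):
policy "restrict-instance-type" {
  source            = "./restrict-instance-type.sentinel"
  enforcement_level = "soft-mandatory" # or "advisory" / "hard-mandatory"
}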
4. You have a Terraform configuration file that defines an infrastructure deployment. However, multiple instances of the same resource need to be created. How would you modify the configuration file to achieve this?
How do you create multiple instances of a resource in Terraform?
In Terraform, you can create multiple instances of the same resource using either count or for_each.
1. Using count
The count parameter allows you to create multiple instances of a resource by specifying a number.
Example: Creating Multiple EC2 Instances
resource "aws_instance" "example" {
count = 3 # Creates 3 instances
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
tags = {
Name = "Instance-${count.index}"
}
}
How it works:
Terraform creates three EC2 instances, indexed 0, 1, and 2.
Each instance gets a unique Name tag: Instance-0, Instance-1, and Instance-2.
2. Using for_each
The for_each argument is useful when dealing with maps or sets, allowing you to assign specific values to each instance.
Example: Creating Instances with Different Instance Types
variable "instances" {
type = map(string)
default = {
"web" = "t2.micro"
"app" = "t3.small"
"db" = "t3.medium"
}
}
resource "aws_instance" "example" {
for_each = var.instances
ami = "ami-0c55b159cbfafe1f0"
instance_type = each.value
tags = {
Name = each.key
}
}
How it works:
Creates instances with different names (web, app, db).
Assigns a different instance type to each.
Choosing Between count and for_each
Feature | count | for_each
--- | --- | ---
Best for lists | ✅ | ❌
Best for maps | ❌ | ✅
Dynamic values | ❌ | ✅
Simple numbering | ✅ | ❌
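The two approaches also differ in how the resulting instances are referenced elsewhere. A minimal sketch (it assumes the count and for_each examples above live in separate configurations, since a resource name can be declared only once):
# count: instances form a list, addressed by index
output "ids_by_index" {
  value = aws_instance.example[*].id
}

# for_each: instances form a map, addressed by key
output "ids_by_key" {
  value = { for name, inst in aws_instance.example : name => inst.id }
}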
5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
A. Set the environment variable TF_LOG=TRACE
B. Set verbose logging for each provider in your Terraform configuration
C. Set the environment variable TF_VAR_log=TRACE
D. Set the environment variable TF_LOG_PATH
Correct Answer: A. Set the environment variable TF_LOG=TRACE
The TF_LOG environment variable controls Terraform's logging level. Setting TF_LOG=TRACE enables the highest level of logging, which provides detailed debug messages, including information on provider paths.
How to Enable Debug Logging in Terraform
Run the following command in your terminal before executing Terraform commands:
export TF_LOG=TRACE
terraform apply
This will output detailed logs showing where Terraform is loading providers from.
Explanation of Other Options:
B. Set verbose logging for each provider in your Terraform configuration
Terraform does not support a built-in way to enable verbose logging for individual providers within configuration files. Debugging is done via TF_LOG.
C. Set the environment variable TF_VAR_log=TRACE
The TF_VAR_ prefix is used to pass input variables into Terraform, not to control logging.
D. Set the environment variable TF_LOG_PATH
This option is only partially correct. TF_LOG_PATH specifies a file to store logs, but it does not enable logging by itself. It must be used along with TF_LOG:
export TF_LOG=TRACE
export TF_LOG_PATH=terraform.log
terraform apply
6. The below command will destroy everything that is being created in the infrastructure. Tell us how you would save any particular resource while destroying the complete infrastructure.
terraform destroy
Running terraform destroy removes all resources tracked in your Terraform state. However, if you want to preserve specific resources while destroying the rest of the infrastructure, you can use one of the following methods:
1. Use the -target Option (Selective Destruction)
If you want to destroy only specific resources while keeping others intact, use the -target flag:
terraform destroy -target=aws_instance.example
This destroys only the specified resource (aws_instance.example) while keeping other resources untouched.
2. Manually Remove Resources from Configuration
You can remove specific resources from your Terraform configuration (.tf files), but note that terraform destroy plans against the state file, so a resource that is still tracked in state will be destroyed anyway. To truly preserve it, combine this with terraform state rm (see method 5). This method is also not practical if you plan to keep the resource permanently.
3. Use a lifecycle Block with prevent_destroy
To explicitly prevent Terraform from destroying a resource, add the prevent_destroy lifecycle rule in the resource definition:
resource "aws_instance" "example" {
ami = "ami-12345678"
instance_type = "t2.micro"
lifecycle {
prevent_destroy = true
}
}
Now, when you run terraform destroy, Terraform will fail with an error if the plan attempts to delete this resource.
4. Import the Resource After Destruction
If a resource has been destroyed but you want to retain it, you can manually recreate it in the cloud and import it into Terraform using:
terraform import aws_instance.example i-1234567890abcdef0
However, this is useful only if the resource was mistakenly deleted and needs to be restored.
5. Use terraform state rm (Remove from State File)
You can remove a resource from Terraform's state file so it is no longer managed by Terraform:
terraform state rm aws_instance.example
After this, Terraform will not track the resource, and it will not be destroyed during terraform destroy. However, it also means Terraform will no longer manage updates for this resource.
7. Which module is used to store the .tfstate file in S3?
The backend "s3"
module is used to store the .tfstate
file in an S3 bucket for remote state management.
Example Configuration
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "path/to/my-tfstate-file.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
Key Features:
Remote Storage: Stores the state file in S3.
State Locking: Uses DynamoDB to prevent concurrent modifications.
Encryption: Ensures security with SSE (Server-Side Encryption).
Versioning: Tracks changes when enabled in S3.
This setup ensures safe, scalable, and collaborative state management in Terraform.
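Note that the DynamoDB table referenced by dynamodb_table must already exist and use a string hash key named LockID, which the S3 backend requires. A minimal sketch of creating it (the billing mode is an assumption):
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # exact key name required by the S3 backend

  attribute {
    name = "LockID"
    type = "S"
  }
}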
8. How do you manage sensitive data in Terraform, such as API keys or passwords?
To securely manage sensitive data in Terraform, such as API keys, passwords, or secrets, follow these best practices:
1. Use terraform.tfvars and .gitignore
Store sensitive values in a terraform.tfvars file.
Add terraform.tfvars to .gitignore to prevent accidental commits.
# terraform.tfvars (DO NOT commit this file)
api_key = "your-secret-api-key"
# .gitignore
terraform.tfvars
2. Use Environment Variables
Set sensitive data as environment variables to avoid hardcoding them in Terraform files.
export TF_VAR_api_key="your-secret-api-key"
Terraform automatically maps TF_VAR_api_key to the api_key input variable when it is referenced in your configuration.
3. Use sensitive Attributes in Variables
Mark variables as sensitive to prevent their values from being displayed in logs or CLI output.
variable "api_key" {
type = string
sensitive = true
}
4. Use AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault
Store secrets in a secrets manager and retrieve them dynamically in Terraform. Example using AWS Secrets Manager:
data "aws_secretsmanager_secret_version" "example" {
secret_id = "my-secret"
}
output "api_key" {
value = data.aws_secretsmanager_secret_version.example.secret_string
sensitive = true
}
5. Use Encrypted Remote State (S3 + DynamoDB)
When using the S3 backend, enable encryption to protect the Terraform state file, which may contain sensitive data.
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
6. Restrict IAM Permissions
Ensure that only authorized users have access to Terraform secrets by using least privilege IAM policies.
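A minimal sketch of what least privilege could look like for the state bucket (bucket name, object key, and policy name are placeholders):
data "aws_iam_policy_document" "state_rw" {
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::my-terraform-state/terraform.tfstate"]
  }
}

resource "aws_iam_policy" "state_rw" {
  name   = "terraform-state-rw"
  policy = data.aws_iam_policy_document.state_rw.json
}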
7. Use terraform output -json with jq
To safely retrieve sensitive values, use:
terraform output -json | jq '.api_key.value'
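In recent Terraform versions, a single string value can also be printed directly with the -raw flag:
terraform output -raw api_key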
By following these best practices, you can securely manage sensitive data in Terraform.
9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
To provision an S3 bucket and create a user with read and write access, you need the following Terraform resources:
Resources Used
aws_s3_bucket – Creates an S3 bucket.
aws_iam_user – Creates an IAM user.
aws_iam_policy – Defines the S3 access policy.
aws_iam_user_policy_attachment – Attaches the policy to the user.
Terraform Configuration
provider "aws" {
region = "us-east-1"
}
# Create an S3 bucket
resource "aws_s3_bucket" "example" {
bucket = "my-terraform-bucket"
acl = "private"
}
# Create an IAM user
resource "aws_iam_user" "s3_user" {
  name = "s3-bucket-user"
}

# Define an IAM policy for S3 read/write access
resource "aws_iam_policy" "s3_rw_policy" {
  name        = "s3-read-write-policy"
  description = "Policy for read and write access to the S3 bucket"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:ListBucket",
          "s3:DeleteObject"
        ]
        Resource = [
          aws_s3_bucket.example.arn,
          "${aws_s3_bucket.example.arn}/*"
        ]
      }
    ]
  })
}

# Attach the policy to the user
resource "aws_iam_user_policy_attachment" "s3_rw_attach" {
  user       = aws_iam_user.s3_user.name
  policy_arn = aws_iam_policy.s3_rw_policy.arn
}
Explanation
Creates an S3 bucket (aws_s3_bucket) with a private ACL.
Creates an IAM user (aws_iam_user) named "s3-bucket-user".
Defines a policy (aws_iam_policy) that grants read (GetObject, ListBucket) and write (PutObject, DeleteObject) permissions on the bucket.
Attaches the policy (aws_iam_user_policy_attachment) to the user, allowing them access to the S3 bucket.
Next Steps
Retrieve IAM user credentials using aws_iam_access_key, as sketched below.
Share the credentials with the user so they can access the S3 bucket.
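A minimal sketch of generating those credentials (resource and output names are illustrative):
resource "aws_iam_access_key" "s3_user_key" {
  user = aws_iam_user.s3_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.s3_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.s3_user_key.secret
  sensitive = true # keep the secret out of normal CLI output
}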
This configuration ensures secure and controlled access to the S3 bucket.
10. Who maintains Terraform providers?
Terraform providers are maintained by different entities, depending on the type of provider:
HashiCorp (Official Providers)
HashiCorp maintains core providers such as:
aws (AWS Provider)
azurerm (Azure Provider)
google (Google Cloud Provider)
kubernetes (Kubernetes Provider)
These providers are developed, tested, and updated regularly by HashiCorp.
Third-Party Cloud and Service Providers
Some cloud providers and software vendors maintain their own Terraform providers. Examples:
datadog (maintained by Datadog)
newrelic (maintained by New Relic)
gitlab (maintained by GitLab)
Community-Contributed Providers
Independent developers and open-source contributors maintain certain providers.
These are hosted in the Terraform Registry but are not officially supported by HashiCorp.
Partner Providers
HashiCorp collaborates with technology partners who maintain providers under HashiCorp’s review process.
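The maintainer is visible as the namespace in a provider's registry source address. A brief sketch (the version constraint is illustrative):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws" # official provider in the HashiCorp namespace
      version = "~> 5.0"
    }
    datadog = {
      source = "DataDog/datadog" # partner provider in the vendor's namespace
    }
  }
}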
11. How can we export data from one module to another?
To export data from one Terraform module and use it in another, you can use module outputs and input variables.
Steps to Export Data Between Modules
1. Define an Output in the First Module
In the first module (moduleA), define an output block for the data you want to export.
# modules/moduleA/outputs.tf
output "s3_bucket_name" {
  value = aws_s3_bucket.example.bucket
}
2. Call the First Module and Use Its Output
In the root module (main.tf), call moduleA and reference its output when calling moduleB.
module "moduleA" {
source = "./modules/moduleA"
}
module "moduleB" {
source = "./modules/moduleB"
bucket_name = module.moduleA.s3_bucket_name # Passing output from moduleA to moduleB
}
3. Use the Exported Data in the Second Module
In moduleB, define an input variable to accept the exported data.
# modules/moduleB/variables.tf
variable "bucket_name" {
  type = string
}

# modules/moduleB/main.tf
resource "aws_s3_bucket_object" "example" {
  bucket  = var.bucket_name
  key     = "example.txt"
  content = "Hello from moduleB!"
  # Note: on AWS provider v4+, aws_s3_object supersedes aws_s3_bucket_object
}
Explanation
Module A creates an S3 bucket and exports its name via an output.
The root module (main.tf) retrieves the output from moduleA and passes it as an input to moduleB.
Module B accepts the bucket_name variable and uses it to create an object in the same bucket.