Day 71 - Let's prepare for some Terraform interview questions 🔥
Table of contents
1. What is Terraform and How is it Different from Other IaC Tools?
Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to define and provision data center infrastructure using a declarative configuration language (HCL). Terraform manages both low-level components such as compute instances, storage, and networking, and high-level components such as DNS entries and SaaS features.
Differences from Other IaC Tools:
Declarative Syntax: Terraform uses a declarative approach, where you define the desired state of your infrastructure and Terraform works out how to reach that state (see the sketch after this list).
Provider Ecosystem: Terraform supports a vast number of cloud providers and services through its provider ecosystem, making it highly versatile.
State Management: Terraform maintains a state file to keep track of the current state of infrastructure, which allows it to determine the actions needed to achieve the desired state.
Modularity: Terraform encourages modular infrastructure as code, allowing for reusable and composable modules.
Multi-Cloud Support: Terraform can manage resources across multiple cloud providers, which makes it suitable for multi-cloud deployments.
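To make the declarative style concrete, here is a minimal sketch of a configuration (the region, AMI ID, and resource names are illustrative): you describe the resource you want, and Terraform figures out the API calls needed to create or update it.
# Declare the provider Terraform should use; the region is illustrative.
provider "aws" {
  region = "us-east-1"
}

# Describe the desired state: one EC2 instance with these properties.
# Terraform compares this against its state file and plans only the changes needed.
resource "aws_instance" "example" {
  ami           = "ami-08c40ec9ead489470" # illustrative AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }
}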
2. How Do You Call a Main.tf Module?
To call a module in Terraform, you create a module block within your configuration files and specify the source of the module. Here is an example:
module "web_server" {
source = "./modules/web_server"
instance_name = "WebServer"
ami = "ami-08c40ec9ead489470"
instance_type = "t2.micro"
subnet_id = "subnet-12345678"
security_group = ["sg-12345678"]
}
In this example, source = "./modules/web_server" points to the location of the module, and the other arguments are values for input variables that the module declares (a sketch of those declarations follows).
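For completeness, the module itself would declare matching input variables, for example in modules/web_server/variables.tf. A minimal sketch, where the variable names simply mirror the arguments above and the descriptions, types, and default are assumptions:
variable "instance_name" {
  description = "Name tag for the EC2 instance"
  type        = string
}

variable "ami" {
  description = "AMI ID to launch"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "subnet_id" {
  description = "Subnet to place the instance in"
  type        = string
}

variable "security_group" {
  description = "Security group IDs to attach"
  type        = list(string)
}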
3. What Exactly is Sentinel? Examples of Sentinel Policies
Sentinel is HashiCorp's policy-as-code framework. In Terraform Cloud and Terraform Enterprise, it is used to enforce rules against planned changes before they are applied, allowing organizations to codify compliance, security, and governance requirements.
Examples of Sentinel Policies:
Preventing Unapproved Instance Types:
import "tfplan" approved_types = ["t2.micro", "t2.small"] deny if length(tfplan.resource_changes) > 0 and not all tfplan.resource_changes as rc { rc.type == "aws_instance" and rc.change.after.instance_type in approved_types }
Ensuring Tags are Present:
import "tfplan" tags = ["Environment", "Owner"] deny if length(tfplan.resource_changes) > 0 and not all tfplan.resource_changes as rc { rc.type == "aws_instance" and all tags as tag { tag in keys(rc.change.after.tags) } }
Restricting Resource Locations:
import "tfplan" allowed_regions = ["us-east-1", "us-west-2"] deny if length(tfplan.resource_changes) > 0 and not all tfplan.resource_changes as rc { rc.type == "aws_instance" and rc.change.after.region in allowed_regions }
4. Modifying Configuration to Create Multiple Instances of the Same Resource
To create multiple instances of the same resource, you can use the count or for_each meta-arguments in the resource block (see the note after the two examples for how each form is referenced).
Using count:
resource "aws_instance" "server" {
count = 3
ami = "ami-08c40ec9ead489470"
instance_type = "t2.micro"
tags = {
Name = "Server ${count.index}"
}
}
Using for_each:
locals {
ami_ids = toset([
"ami-0b0dcb5067f052a63",
"ami-08c40ec9ead489470",
])
}
resource "aws_instance" "server" {
for_each = local.ami_ids
ami = each.key
instance_type = "t2.micro"
tags = {
Name = "Server ${each.key}"
}
}
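Once created, the two forms are referenced differently: count produces a list indexed by number, while for_each produces a map keyed by the for_each value. A minimal sketch of outputs (the output names are illustrative):
# With count, an individual instance is addressed by its index.
output "first_server_id" {
  value = aws_instance.server[0].id
}

# With for_each, instances are keyed by the set/map value; a for expression
# collects an attribute across all of them (this also works with count).
output "all_server_ids" {
  value = [for s in aws_instance.server : s.id]
}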
5. Enabling Debug Messages to Find Provider Loading Paths
To see debug messages showing which paths Terraform loads providers from, set the environment variable TF_LOG=TRACE before running Terraform. TRACE is the most verbose log level; you can also set TF_LOG_PATH to write the logs to a file instead of stderr.
6. Saving a Particular Resource While Destroying Complete Infrastructure
To protect a specific resource, use the lifecycle block with the prevent_destroy attribute. Note that this causes terraform destroy to fail with an error when it reaches the protected resource rather than silently skipping it; if you need to destroy everything else while keeping that resource, you can first remove it from state with terraform state rm so Terraform stops managing it.
resource "aws_instance" "server" {
ami = "ami-08c40ec9ead489470"
instance_type = "t2.micro"
lifecycle {
prevent_destroy = true
}
}
7. Module Used to Store .tfstate File in S3
Storing state in S3 is done with a backend configuration (rather than a module). You configure the s3 backend as follows:
terraform {
backend "s3" {
bucket = "my-terraform-state-bucket"
key = "path/to/my/terraform.tfstate"
region = "us-east-1"
}
}
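For shared use, the s3 backend is commonly extended with server-side encryption and state locking. A sketch, assuming a DynamoDB table named terraform-locks already exists with a LockID partition key:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "path/to/my/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true              # encrypt the state object at rest
    dynamodb_table = "terraform-locks" # table used for state locking
  }
}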
8. Managing Sensitive Data in Terraform
To manage sensitive data such as API keys or passwords, you can use the following methods:
Environment Variables: Set sensitive values as environment variables; Terraform automatically reads variables named TF_VAR_<name>.
Terraform Variables: Pass sensitive values through terraform.tfvars files or the -var-file option, and mark the variables sensitive = true so Terraform redacts them in plan and apply output.
Secret Management Services: Use a secret management service such as AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault.
Example of a sensitive variable that can be supplied from the environment (via TF_VAR_db_password) rather than hard-coded:
variable "db_password" {
description = "The password for the database"
type = string
sensitive = true
}
resource "aws_db_instance" "example" {
identifier = "example"
password = var.db_password
}
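As an alternative to passing the password in at all, it can be read from a secret store at plan time. A sketch using the AWS Secrets Manager data source, where the secret name db/password and its JSON structure are assumptions:
# Look up an existing secret by name (assumed to already exist).
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "db/password"
}

resource "aws_db_instance" "example" {
  identifier = "example"
  # The secret is assumed to be a JSON object containing a "password" key.
  password   = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)["password"]
  # other required arguments omitted for brevity
}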
9. Provisioning an S3 Bucket and a User with Access
To provision an S3 bucket and a user with read and write access, you can use the following Terraform configuration:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
acl = "private"
versioning {
enabled = true
}
}
resource "aws_iam_user" "my_user" {
name = "my-user"
}
resource "aws_iam_policy" "s3_read_write_policy" {
name = "S3ReadWritePolicy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
]
Effect = "Allow"
Resource = "${aws_s3_bucket.my_bucket.arn}/*"
}
]
})
}
resource "aws_iam_user_policy_attachment" "my_user_policy_attachment" {
user = aws_iam_user.my_user.name
policy_arn = aws_iam_policy.s3_read_write_policy.arn
}
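Note that in AWS provider v4 and later, the acl and versioning arguments on aws_s3_bucket are deprecated in favour of separate resources (and newly created buckets are private by default). A sketch of the newer form for the bucket portion:
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"
}

# Versioning is now managed by its own resource.
resource "aws_s3_bucket_versioning" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}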
10. Who Maintains Terraform Providers?
Terraform providers are maintained by:
HashiCorp: Maintains official providers for major cloud platforms and services.
Community: Maintains various providers for niche or specialized services.
Third-party Vendors: Some providers are maintained by third-party vendors or service providers who integrate their services with Terraform.
11. Exporting Data from One Module to Another
To export data from one module to another, you use outputs and input variables. The output from one module can be used as an input variable in another module.
Module A (module_a):
output "bucket_name" {
value = aws_s3_bucket.my_bucket.bucket
}
Root module (calling module_a and passing its output to module_b):
module "module_a" {
source = "./modules/module_a"
}
module "module_b" {
source = "./modules/module_b"
bucket_name = module.module_a.bucket_name
}
In this example, module_b receives the bucket_name output from module_a and uses it as an input variable, which module_b must declare (see the sketch below).
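For the wiring above to work, module_b needs a matching input variable, for example in modules/module_b/variables.tf; the description below is an assumption:
variable "bucket_name" {
  description = "Name of the S3 bucket created by module_a"
  type        = string
}

# Inside module_b the value is then referenced as var.bucket_name, e.g.:
# bucket = var.bucket_name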