Day 71: Terraform Interview Questions ✨✨🔥


What is Terraform, and how is it different from other IaC tools?

Terraform is an open-source infrastructure as code (IaC) tool created by HashiCorp. It enables you to define and manage your infrastructure declaratively using a simple, human-friendly configuration language. Terraform lets you provision and manage a wide variety of resources, such as virtual machines, storage accounts, and networks, across many cloud providers and infrastructure platforms.

Here are some key features and aspects that differentiate Terraform from other IaC tools:

  1. Multi-Cloud Support: Terraform provides support for multiple cloud providers, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and others. This enables you to manage your infrastructure resources consistently across different cloud environments.

  2. Declarative Configuration Language: Terraform uses a declarative language called HashiCorp Configuration Language (HCL) to define infrastructure resources and their configurations. HCL is designed to be human-readable and allows you to express complex infrastructure topologies and dependencies in a concise manner.

  3. Resource Graph and Dependency Management: Terraform builds a resource graph based on the declared configuration, which represents the dependencies between different resources. It intelligently determines the order in which resources need to be created or modified based on their dependencies, ensuring that resources are provisioned in the correct sequence.

  4. Plan and Apply Workflow: Terraform follows a two-step process: "terraform plan" and "terraform apply." The "plan" command generates an execution plan that shows the changes Terraform will make to your infrastructure before actually applying them. This allows you to review and validate the changes before applying them to your environment (see the sketch after this list).

  5. State Management: Terraform maintains a state file that keeps track of the resources it manages. The state file records the mapping between your configuration and the resources created in the infrastructure. This allows Terraform to manage and update resources incrementally without destroying and recreating them.
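To make points 4 and 5 concrete, here is a minimal sketch of the workflow, assuming a directory containing only this tiny configuration (the local_file resource and its contents are illustrative):

# main.tf
resource "local_file" "hello" {
  filename = "hello.txt"
  content  = "hello from terraform"
}

With that file in place, the basic workflow is:

terraform init    # downloads the required providers (here: hashicorp/local)
terraform plan    # previews the changes without touching anything
terraform apply   # applies the plan; the result is recorded in the state file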

How do you call a main.tf module?

In Terraform, the main.tf file typically holds the main configuration of a module. Note, however, that a module is not called directly like a function or method in traditional programming languages. Instead, a module is instantiated or used from within another Terraform configuration.

To use a module defined in a main.tf file, you need to create a separate Terraform configuration file (usually with a .tf extension) that references the module. In this separate configuration file, you can declare the module and provide input values if necessary.

Here's an example of how you would define a module in a main.tf file and then use it:

  1. Create the module's main.tf file: Inside the module's directory (e.g., ./example_module), define the module's input variables, resources, and outputs. For example:
# example_module/main.tf

variable "input_variable" {
  type = string
}

resource "aws_instance" "example_instance" {
  ami           = "ami-12345678"  # illustrative AMI ID
  instance_type = "t2.micro"
}

output "output_variable" {
  value = aws_instance.example_instance.id
}
  2. Create a separate configuration file (e.g., main_config.tf): In this file, reference the module by its directory path and provide any required input values:
# main_config.tf

provider "aws" {
  region = "us-west-2"
}

module "example" {
  source         = "./example_module"
  input_variable = "custom value"
}

Once you have both the module directory and the root configuration file, you can run Terraform commands such as terraform init, terraform plan, and terraform apply in the directory containing main_config.tf to initialize and apply the configuration that uses the module.

What exactly is a Sentinel? Can you provide a few examples of where we can use Sentinel policies?

Sentinel is a policy-as-code framework developed by HashiCorp. It enables you to define and enforce policies that govern the behavior of your infrastructure provisioning and deployment processes. Sentinel allows you to codify your organization's requirements, best practices, and compliance rules as policies, which can be integrated into various HashiCorp tools such as Terraform, Vault, and Consul.

Here are a few examples of where you can use Sentinel policies:

  1. Infrastructure Deployment Policies: You can create policies to ensure that all resources are tagged with specific metadata, that certain security configurations are applied to instances, or that only approved AMIs (Amazon Machine Images) are used.

  2. Compliance and Security Policies: For instance, you can define policies that ensure encryption is enabled for data at rest, that specific network security groups are applied, or that only authorized IAM roles are assigned to resources.

  3. Cost Optimization Policies: Sentinel can help you enforce cost optimization practices by defining policies that prevent the deployment of certain resource types or configurations that are known to be expensive.

You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?

To create multiple instances of the same resource in Terraform, you can use the count or for_each meta-argument, depending on your specific requirements and Terraform version (for_each for resources requires Terraform 0.12.6 or later).

  1. Using count:
    If you want to create a fixed number of instances, you can use the count meta-argument. Here's an example of how you can modify your Terraform configuration file:
resource "aws_instance" "example_instance" {
  count         = 3
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  # Other resource configuration properties...
}

In this example, three instances of the aws_instance resource will be created, addressed as aws_instance.example_instance[0], aws_instance.example_instance[1], and aws_instance.example_instance[2]. You can reference each instance individually by its index, e.g., aws_instance.example_instance[0].id.

  2. Using for_each:
    If you want more flexibility and need to create instances based on a map or set of key-value pairs, you can use the for_each meta-argument.

    Here's an example:

variable "instances" {
  type = map(string)
  default = {
    "instance1" = "ami-12345678"
    "instance2" = "ami-87654321"
  }
}

resource "aws_instance" "example_instance" {
  for_each = var.instances

  ami           = each.value
  instance_type = "t2.micro"
  # Other resource configuration properties...
}

In this example, two instances of the aws_instance resource will be created from the instances map, addressed as aws_instance.example_instance["instance1"] and aws_instance.example_instance["instance2"]. You can reference each instance individually by its key, e.g., aws_instance.example_instance["instance1"].id.

Remember to run terraform apply to create the instances defined by count or by the map.

By using either the count or for_each meta-arguments, you can dynamically create multiple instances of the same resource based on your requirements.
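As a small follow-up sketch, you can reference all of the created instances at once with outputs like these (the output names are assumptions, and since a resource block can use either count or for_each but not both, each output belongs with its respective variant above):

# count variant: a splat expression collects every instance ID
output "count_instance_ids" {
  value = aws_instance.example_instance[*].id
}

# for_each variant: a for expression maps each key to its instance ID
output "for_each_instance_ids" {
  value = { for key, inst in aws_instance.example_instance : key => inst.id }
}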

You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?

A. Set the environment variable TF_LOG=TRACE

B. Set verbose logging for each provider in your Terraform configuration

C. Set the environment variable TF_VAR_log=TRACE

D. Set the environment variable TF_LOG_PATH

The correct option to enable debug messages and find out from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files) is:

A. Set the environment variable TF_LOG=TRACE

By setting the TF_LOG environment variable to TRACE, Terraform will output detailed debug messages, including information about the paths from which providers are being loaded.

For Linux/macOS (bash):

export TF_LOG=TRACE

For Windows (Command Prompt):

set TF_LOG=TRACE

For Windows (PowerShell):

$env:TF_LOG="TRACE"
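As a related tip, the TF_LOG_PATH environment variable (option D above) does not enable logging by itself, but combined with TF_LOG it writes the debug output to a file instead of the terminal. A quick sketch, assuming a bash shell and an illustrative file name:

export TF_LOG=TRACE
export TF_LOG_PATH=./terraform-debug.log   # debug output is appended to this file
terraform init                             # provider resolution details now land in the log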

How would you save any particular resource while destroying the complete infrastructure?

The terraform destroy command removes all the resources created by Terraform, tearing down the infrastructure. If you want to protect a particular resource from being destroyed, you can use Terraform's resource lifecycle settings.

To protect a specific resource, modify its resource block in your Terraform configuration file by adding a lifecycle block with the prevent_destroy argument set to true. Terraform will then refuse to destroy that resource: any plan that would delete it, including one produced by terraform destroy, fails with an error instead of proceeding.

Here’s an example of how you can save a resource from being destroyed:

resource "aws_instance" "example" {
  # Resource configuration...

  lifecycle {
    prevent_destroy = true
  }
}
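Because prevent_destroy makes terraform destroy fail with an error rather than silently skip the resource, a common workaround, sketched here with the resource address from the example above, is to remove the protected resource from state first so Terraform forgets about it and destroys everything else:

terraform state rm aws_instance.example   # stop tracking the protected resource
terraform destroy                         # destroys the remaining infrastructure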

Which module is used to store .tfstate file in S3?

Strictly speaking, state storage in Amazon S3 is configured through Terraform's s3 backend rather than a module. The closely related terraform_remote_state data source is what you use to read values from a state file stored remotely, for example in S3, instead of from local disk.

Here's an example of how you might combine the s3 backend with the terraform_remote_state data source in your Terraform configuration:

terraform {
  backend "s3" {
    bucket         = "your-bucket-name"
    key            = "path/to/your-state-file.tfstate"
    region         = "your-region"
    dynamodb_table = "terraform_locks"
  }
}

data "terraform_remote_state" "example" {
  backend = "s3"
  config = {
    bucket = "your-bucket-name"
    key    = "path/to/your-state-file.tfstate"
    region = "your-region"
  }
}

module "example" {
  source = "path/to/your/module"

  # You can reference values from the remote state here
  some_value = data.terraform_remote_state.example.outputs.some_value
}

In this example:

  • The terraform block configures Terraform to use the S3 backend for storing the state file. You specify the S3 bucket, the path to the state file (key), the AWS region, and optionally, a DynamoDB table for state locking.

  • The data "terraform_remote_state" block reads the state file from S3 so that its outputs can be referenced. Within the module block, data.terraform_remote_state.example.outputs.some_value accesses an output value from that remote state.

By combining the s3 backend with terraform_remote_state, you can centralize the storage of your Terraform state files, enabling collaboration and ensuring consistency across your infrastructure deployments.

How do you manage sensitive data in Terraform, such as API keys or passwords?

Managing sensitive data, such as API keys or passwords, in Terraform involves implementing best practices for handling secrets securely.

Here are some common approaches:

  1. Use Environment Variables: Avoid hardcoding sensitive information directly into your Terraform configuration files. Instead, use environment variables to pass sensitive data to Terraform. This keeps secrets out of version control and provides an extra layer of security.

  2. Terraform Input Variables: Define input variables in your Terraform configuration for sensitive data, and supply values interactively or via variable files at runtime. This keeps sensitive information out of your configuration files and version control system (see the sketch after this list).

  3. HashiCorp Vault: Integrate Terraform with HashiCorp Vault, a tool designed for managing secrets and sensitive data. Vault provides a secure storage and management solution for secrets, allowing Terraform to retrieve them dynamically during runtime. This ensures that secrets are never stored in plain text and are accessed securely.

  4. External Secret Management Systems: Utilize external secret management systems, such as AWS Secrets Manager or Azure Key Vault, to store and manage sensitive data. Terraform can then retrieve secrets from these systems dynamically when needed during deployment.

  5. Encrypted Files: Keep sensitive values in encrypted files using tools such as sops or git-crypt, so that secrets are stored encrypted in version control and decrypted only during deployment.

  6. Access Controls: Implement strict access controls and permissions for Terraform configurations and state files to prevent unauthorized access to sensitive data.
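Here is a minimal sketch combining points 1 and 2: a variable marked sensitive = true, populated through Terraform's TF_VAR_ environment-variable convention (the variable name, resource, and values are illustrative assumptions):

variable "db_password" {
  type      = string
  sensitive = true   # Terraform redacts this value in plan and apply output
}

resource "aws_db_instance" "example" {
  identifier          = "example-db"
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_password   # the secret itself never appears in the configuration
  skip_final_snapshot = true
}

You would then supply the value at runtime, for example with export TF_VAR_db_password="..." before running terraform apply.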

You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?

To provision an S3 bucket and a user with read and write access to the bucket using Terraform, you would typically use the following AWS resources:

  1. AWS S3 Bucket: This resource represents the S3 bucket you want to provision. You would configure properties such as the bucket name, access control, and any other desired settings.

  2. AWS IAM User: This resource represents the IAM user who will have read and write access to the S3 bucket. You would configure permissions for this user using an IAM policy that grants the necessary permissions for S3 bucket access.

Here's an example of a Terraform configuration to achieve this:

# Define the provider block for AWS
provider "aws" {
  region = "us-west-2" # Change to your desired region
}

# Create an S3 bucket
resource "aws_s3_bucket" "example_bucket" {
  bucket = "example-bucket-name"  # Specify your desired bucket name
  acl    = "private"               # Set the bucket ACL (Access Control List) to private by default
}

# Create an IAM user
resource "aws_iam_user" "example_user" {
  name = "example-user"  # Specify your desired username
}

# Create an IAM policy for S3 bucket access
resource "aws_iam_policy" "s3_access_policy" {
  name        = "s3-access-policy"
  description = "Policy for granting read and write access to S3 bucket"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect    = "Allow",
        Action    = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject"
        ],
        Resource  = [aws_s3_bucket.example_bucket.arn, "${aws_s3_bucket.example_bucket.arn}/*"]
      }
    ]
  })
}

# Attach the IAM policy to the IAM user
resource "aws_iam_user_policy_attachment" "s3_access_attachment" {
  user       = aws_iam_user.example_user.name
  policy_arn = aws_iam_policy.s3_access_policy.arn
}
In this configuration:

  • The aws_s3_bucket resource provisions an S3 bucket named "example-bucket-name" with private access control.

  • The aws_iam_user resource creates an IAM user named "example-user."

  • The aws_iam_policy resource defines a policy named "s3-access-policy" that grants permissions for the GetObject, PutObject, and DeleteObject actions on the specified S3 bucket.

  • The aws_iam_user_policy_attachment resource attaches the IAM policy to the IAM user, granting them read and write access to the S3 bucket.

Who maintains Terraform providers?

Terraform providers are maintained by both HashiCorp, the company behind Terraform, and the community. HashiCorp develops and maintains many of the official Terraform providers, including those for major cloud providers like AWS, Azure, and Google Cloud Platform, as well as providers for other infrastructure components like Docker, Kubernetes, and GitHub.

In addition, there is a vibrant community of contributors who create and maintain Terraform providers for many services and platforms that HashiCorp may not officially support. These community-maintained providers are frequently hosted on sites such as GitHub and are open to community feedback and contributions.
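You can tell who maintains a provider from its source address in the required_providers block: official HashiCorp-maintained providers live under the hashicorp/ namespace on the Terraform Registry, while partner and community providers use their own namespaces. A brief sketch (the version constraint and the commented example are illustrative):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # official provider in the HashiCorp namespace
      version = "~> 5.0"
    }
    # Partner/community providers use their own namespaces, for example:
    # cloudflare = {
    #   source = "cloudflare/cloudflare"
    # }
  }
}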

How can we export data from one module to another?

In Terraform, you can export data from one module to another using output variables. Output variables allow you to expose certain values from a module so that they can be consumed by other modules or referenced in the root configuration.

Here's a basic example of how you can export data from one module and consume it in another:

Module 1 (exporting data):

 # module1/main.tf

resource "aws_instance" "example" {
  # Configuration for creating an EC2 instance...
}

output "instance_id" {
  value = aws_instance.example.id
}

In this example, the aws_instance.example.id is being exported as an output variable named instance_id.

Module 2 (importing data):

# module2/main.tf

module "module1" {
  source = "../module1"
}

resource "aws_security_group_rule" "example" {
  # Configuration for creating a security group rule...
  # You can reference the exported output from module1 here
  source_security_group_id = module.module1.instance_id
}

In this example, the instance_id output variable from module1 is referenced within module2 as module.module1.instance_id, which allows module2 to consume the value exported by module1.

When you apply these configurations, Terraform will automatically handle the dependency between the modules, ensuring that module2 waits for module1 to be created and its output variables to be available before proceeding.

By using output variables in this way, you can effectively export data from one module and import it into another, enabling modular and reusable Terraform configurations.

Summary: Terraform is an open-source IaC tool by HashiCorp, offering multi-cloud support, a declarative configuration language, resource graph and dependency management, and state management. It facilitates provisioning and managing your infrastructure on-premises and in the cloud, can be easily extended through its plugin-based architecture, and can connect to different infrastructure hosts to achieve complex management and compliance scenarios across multiple clouds.

Thank you for 📖reading my blog. 👍 Like it and share it 🔄 with your friends. Hope you find it helpful🤞

Happy learning😊😊
