Foundational Security Best Practices for Terraform

This post walks through foundational practices for keeping your Terraform configurations secure.

Verify Modules and Providers

Modules and providers in Terraform function as external dependencies, much like software libraries or artifacts. Therefore, they should be managed with the same level of scrutiny. Verifying the integrity, source, and version of these dependencies helps prevent the use of unapproved configurations or, worse, malicious code. As a best practice, always explicitly specify the source and version of approved providers and modules in your Terraform configuration.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.98.0"
    }
  }
}

provider "aws" {
  # Configuration options for the provider
  region  = "us-east-1"
}

Use a Private Registry to Manage Terraform Modules and Providers

Many companies create a private registry to control which Terraform modules and providers their teams can use. This helps ensure that everyone uses approved, trusted versions. If your team already runs an artifact repository (like the ones used for Docker images or software packages), you can connect it to Terraform by setting up a filesystem or network mirror for providers and by implementing the module registry API to create a minimal registry for modules.
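
For example, provider installation can be redirected to a mirror through the Terraform CLI configuration file (~/.terraformrc on Unix-like systems). Here is a minimal sketch, assuming a hypothetical mirror URL:

provider_installation {
  network_mirror {
    url = "https://terraform-mirror.example.com/providers/"
  }
}

With this in place, terraform init fetches providers from your mirror instead of the public registry.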

💡
Terraform can use versioned modules from any service that implements the registry API.

You can use public modules shared by the community or build your own, but before using them in production, it’s important to review them carefully. Once reviewed, add the approved version to your private registry. This way, your team can use the same reliable module versions across projects, making everything safer and more consistent.

Pin Module Versions

⚠️ Be careful when using Terraform modules from outside sources

When you import a module, Terraform doesn’t check its security signature to make sure it hasn’t been changed. To stay safe, it’s best to use a Terraform registry to store your modules and specify (or "pin") the exact version you want to use. This helps make sure you're only using approved versions. If you're using modules from other places (like Git or a local path), you’ll need to include extra details in the URL to lock the version, since you can’t use the version setting directly.

For more information about other module source types, see Module Sources.

Here are Terraform code examples showing how to safely import and pin modules from both a Terraform registry and an external source like Git:

Example 1: Using Terraform Registry (Best Practice)

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.0" # Pinning to an approved version

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

Why this is safe: You're using the official Terraform registry and locking the module version.

Example 2: Using Git Source (Needs Extra Parameters to Pin Version)

module "vpc" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=v5.1.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

⚠️ Important: When using Git, always add ?ref=version-or-tag to lock the module version. Without it, you'll always get the latest, which can change without warning.

For more information, see Generic Git Repository.

How Terraform Keeps Your Providers Safe

When you run terraform init, Terraform creates a special file called the dependency lock file (.terraform.lock.hcl). This file keeps track of which versions of providers (like AWS, Azure, etc.) you're using, and it includes checksums to verify that nothing has been tampered with.

This approach is called "trust on first use" - Terraform assumes the first version you install is safe and then remembers it. The next time you (or someone on your team) run the configuration, Terraform checks that the provider version and checksum match what's in the lock file. If anything looks suspicious, it will warn you.

Here’s a small part of what this file might look like:

# This file is maintained automatically by "terraform init".
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.98.0"
  constraints = "~> 5.98.0"
  hashes = [
    "h1:neMFK/kP1KT6cTGID+Tkkt8L7PsN9XqwrPDGXVw3WVY=",
    ...
  ]
}

Important tip: Always include this file in version control (like Git). This ensures everyone on your team uses the same trusted provider versions.
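
For example:

$ git add .terraform.lock.hcl
$ git commit -m "Pin provider versions and checksums"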

If you're installing providers from a local folder or network (instead of the Terraform registry), you can use:

terraform providers lock

This command pre-populates the lock file with verified checksums for the platforms you specify, rather than only for the operating system you happen to run terraform init on.
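
For example, to record checksums for every platform your team uses (this platform list is only illustrative):

$ terraform providers lock \
    -platform=linux_amd64 \
    -platform=darwin_arm64 \
    -platform=windows_amd64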

In short: The lock file keeps your Terraform setups safe, consistent, and secure - especially in team or production environments.

Control Access to Cloud Service Providers and APIs

Terraform connects to cloud providers and other services using providers. To keep your credentials secure, don’t hard-code them directly into your Terraform files.

Instead, use variables marked as sensitive or environment variables. This keeps your credentials out of your configuration files and prevents them from accidentally appearing in plans, logs, or outputs.
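
For example, a credential can be declared as a sensitive input variable; the variable name here is illustrative:

variable "db_password" {
  description = "Database admin password"
  type        = string
  sensitive   = true # Terraform redacts this value in plan output and logs
}

Terraform automatically reads environment variables prefixed with TF_VAR_, so you can supply the value without writing it to any file:

$ export TF_VAR_db_password='example-placeholder-value'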

Configure least-privilege access for providers

In addition to protecting sensitive variables, it's important to make sure the credentials used by Terraform have only the minimum permissions needed. This is called "least-privilege access."

For example, if Terraform only needs to manage specific S3 buckets in a single region, you should attach a policy to the AWS credentials that limits access to just those buckets in that region. Here's what such a policy could look like:

data "aws_iam_policy_document" "terraform_s3" {
  statement {
    actions = [
      "s3:*",
    ]

    resources = [
      "arn:aws:s3:::${var.s3_bucket_name}-*",
      "arn:aws:s3:::${var.s3_bucket_name}-*/*",
    ]

    condition {
      test     = "StringEquals"
      variable = "aws:RequestedRegion"
      values = [
        "us-west-2"
      ]
    }
  }
}

resource "aws_iam_policy" "terraform_s3" {
  name        = "terraform-s3"
  description = "Allow Terraform to create, read, update, and delete a specific S3 bucket"

  policy = data.aws_iam_policy_document.terraform_s3.json
}

If you want Terraform to manage more resources, you can expand the policy to include permissions for other services and regions.

By using fine-grained access control, you reduce the risk of accidental changes and ensure that only approved Terraform operations have the permissions they truly need.

Use Separate Credentials for terraform plan and terraform apply

In some organizations - especially those in highly regulated industries - it’s important to follow strict security and access control policies. One such best practice is to use different credentials for the plan and apply stages of a Terraform workflow.

The reason is simple:

  • The terraform plan command only needs read-only access to gather information about the current infrastructure.

  • The terraform apply command, on the other hand, requires read and write access to actually make changes.

By using separate credentials, organizations can reduce the risk of unintended changes and ensure tighter control over who or what can modify infrastructure. For example, a CI/CD pipeline might use a read-only account to generate a plan, while a separate approval process is required to use elevated credentials for applying those changes.
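
As a sketch, the pipeline could pass a different IAM role to the AWS provider's assume_role block at each stage; the variable and role names here are hypothetical:

provider "aws" {
  region = "us-east-1"

  assume_role {
    # CI sets this to a read-only role for plan
    # and a read-write role for apply
    role_arn = var.terraform_role_arn
  }
}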

This approach improves security, supports auditability, and helps meet compliance requirements by limiting the scope of access at each stage of the infrastructure lifecycle.

Use Dynamic Provider Credentials for Better Security

Instead of hard-coding access keys or using long-lived credentials, it's safer to use dynamic provider credentials - temporary credentials that are automatically generated each time Terraform runs. These credentials expire shortly after the operation, reducing the risk of misuse or leaks.

One popular way to do this is with HCP Terraform, which can generate a short-lived identity token using a standard called OIDC (OpenID Connect). When set up properly, this token is trusted by your cloud provider (like AWS), which responds by giving Terraform temporary access credentials. These credentials allow Terraform to perform only the actions it needs, and only for a limited time.

Here’s a simplified example of how it works with AWS:

  1. HCP Terraform creates an OIDC identity token.

  2. AWS is configured to trust this token using an OIDC provider and an IAM role.

  3. Terraform uses this temporary role to manage your AWS resources.

Here’s a code snippet showing part of this setup:

locals {
  hcp_terraform_url = "app.terraform.io"
}

data "tls_certificate" "hcp_terraform" {
  url = "https://${local.hcp_terraform_url}"
}

resource "aws_iam_openid_connect_provider" "hcp_terraform" {
  url             = data.tls_certificate.hcp_terraform.url
  client_id_list  = [var.hcp_terraform_aws_audience]
  thumbprint_list = [data.tls_certificate.hcp_terraform.certificates[0].sha1_fingerprint]
}

resource "aws_iam_role" "hcp_terraform" {
  name = "${var.name}-hcp-terraform"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity",
        Effect = "Allow",
        Principal = {
          Federated = aws_iam_openid_connect_provider.hcp_terraform.arn
        },
        Condition = {
          StringEquals = {
            "${local.hcp_terraform_url}:aud" = var.hcp_terraform_aws_audience
          },
          StringLike = {
            "${local.hcp_terraform_url}:sub" = "organization:${var.hcp_terraform_organization}:project:${var.name}:workspace:*:run_phase:*"
          }
        }
      }
    ]
  })
}

To make this work, you’ll also need to set environment variables in your Terraform workspace with the correct values for audience and organization.
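
For AWS, the workspace environment variables that enable dynamic credentials are TFC_AWS_PROVIDER_AUTH and TFC_AWS_RUN_ROLE_ARN; the role ARN below is hypothetical:

TFC_AWS_PROVIDER_AUTH = true
TFC_AWS_RUN_ROLE_ARN  = arn:aws:iam::123456789012:role/example-hcp-terraform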

But HCP Terraform isn’t your only option. You can also:

  • Use Vault, which can generate temporary cloud credentials on demand (see the sketch after this list).

  • Use CI/CD pipelines like GitHub Actions or GitLab, which can assume cloud roles dynamically using their built-in identity systems.
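
Here is a minimal sketch of the Vault option, assuming a configured Vault provider with an AWS secrets engine mounted at the path aws and a role named deploy:

data "vault_aws_access_credentials" "creds" {
  backend = "aws"
  role    = "deploy"
}

provider "aws" {
  region     = "us-east-1"
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

Each run receives fresh, short-lived AWS credentials from Vault, which expire automatically when their lease ends.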

Dynamic credentials improve security by avoiding hard-coded secrets and by giving Terraform only the access it needs, when it needs it, and nothing more.

Creating Secrets Safely with Terraform

Sometimes, you might need Terraform to generate an initial admin or root password - for example, when setting up a new database or service. If you plan to rotate (change) that password later, it’s important to handle it carefully during the first setup.

By default, Terraform stores values like passwords in the state file, which can be risky if the secret is sensitive and long-lived. To reduce this risk, check if the provider or resource supports ephemeral values - temporary values that won’t be saved in the state or shown in the plan.

Here’s an example using an ephemeral password for an AWS Secrets Manager secret:

ephemeral "random_password" "bedrock_database" {
  length  = 16
  special = false
}

resource "aws_secretsmanager_secret" "bedrock_database" {
  name_prefix             = "${var.name}-bedrock-database-"
  recovery_window_in_days = 7
}

resource "aws_secretsmanager_secret_version" "bedrock_database" {
  secret_id = aws_secretsmanager_secret.bedrock_database.id
  secret_string_wo = jsonencode({
    username = "bedrock_user",
    password = ephemeral.random_password.bedrock_database.result
  })
  secret_string_wo_version = 1
}

In this setup:

  • The password is generated at runtime using an ephemeral resource.

  • It is never saved in Terraform state.

  • It’s securely passed to AWS Secrets Manager, where it can be managed and rotated safely.

This approach keeps your secrets out of Terraform files and helps follow best practices for managing sensitive data.

Use an External Secrets Manager When Ephemeral Resources Aren’t Available

If your Terraform provider doesn’t support ephemeral (temporary) resources, it’s safer to use an external secrets manager - like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault - to securely store and manage sensitive values such as passwords or API keys.

By referencing secrets from the external manager at runtime, you avoid hard-coding them in your configuration or committing them to version control. When it’s time to rotate (change) the secret, you simply update it in the secrets manager, and the next Terraform run will automatically fetch the latest value. (Note that values read through data sources are still recorded in the Terraform state file, so the state itself must stay protected - see "Keep Your Terraform State File Secure" below.)


Example using Vault to get a database password:

data "vault_generic_secret" "db_credentials" {
  path = "secret/data/db"
}

resource "aws_db_instance" "example" {
  identifier = "my-db"
  engine     = "mysql"
  username   = data.vault_generic_secret.db_credentials.data["username"]
  password   = data.vault_generic_secret.db_credentials.data["password"]
  # other db settings...
}

In this setup:

  • The username and password are pulled directly from Vault during the Terraform run.

  • No sensitive values are hard-coded in your configuration files or committed to version control.

  • You can rotate the secret in Vault anytime without needing to modify your Terraform code.

Using an external secrets manager gives you stronger security, audit logs, and a centralized way to manage and rotate secrets across your infrastructure.

Use the sensitive Function to Protect Secrets in Terraform

Sometimes, Terraform may accidentally display secrets - like passwords or API keys - in its plan or logs, especially if they’re passed as plain strings inside other values. For example, if you include a hard-coded database password in a connection string, Terraform might not realize it contains sensitive information and could print it out in the console during terraform plan.

To avoid this kind of accidental exposure, you should:

  • Pass secrets as input or output variables and mark them as sensitive = true.

  • For any computed or derived values that contain secrets, wrap them with the sensitive() function. This tells Terraform not to show them in logs or plans.


Example: Protecting user data in an EC2 instance

resource "aws_instance" "example_instance" {
  ami = data.hcp_packer_artifact.packer.external_identifier

  # other settings...

  user_data = sensitive(base64encode(file("./setup.sh")))
}

In this case, the contents of the setup script (setup.sh) are base64 encoded and marked as sensitive, so Terraform won’t show them in the plan or logs.

Using the sensitive function helps ensure that sensitive values stay hidden, keeping your infrastructure more secure.

Keep Your Terraform State File Secure

Terraform’s state file stores details about your infrastructure - like resource IDs, metadata, and sometimes even sensitive information such as passwords or connection strings. That’s why it’s important to treat the state file like a sensitive system file.

To protect it:

  • Store it remotely using a backend like AWS S3, Terraform Cloud, or another secure storage option (see the sketch after this list).

  • Limit access so only your CI system or tools like HCP Terraform can read or update it.
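
Here is a minimal sketch of an encrypted S3 backend with state locking; the bucket and table names are hypothetical:

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                      # Encrypt state at rest
    dynamodb_table = "example-terraform-locks" # Lock state during runs
  }
}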

Giving too many people or systems access to the state file can be risky. It might expose sensitive data or cause drift, where resources change outside of Terraform’s control. Also, making manual edits to the state file can easily break things - so avoid direct edits.

Instead, if you need to bring existing resources under Terraform control, use the import block. This lets you import them into your configuration without running manual CLI commands for each resource.


Example: Importing an S3 bucket and its settings into Terraform

import {
  to = aws_s3_bucket.example
  id = "test-20250613142230914900000001"
}

import {
  to = aws_s3_bucket_ownership_controls.example
  id = "test-20250613142230914900000001"
}

import {
  to = aws_s3_bucket_acl.example
  id = "test-20250613142230914900000001"
}

Then define your resources like this:

resource "aws_s3_bucket" "example" {
  bucket_prefix = "${var.s3_bucket_name}-"
  force_destroy = true
}

resource "aws_s3_bucket_ownership_controls" "example" {
  bucket = aws_s3_bucket.example.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket_ownership_controls.example.bucket
  acl    = "private"
}

Running terraform apply will automatically import these resources:

$ terraform apply

aws_s3_bucket.example: Importing...
aws_s3_bucket_acl.example: Importing...
aws_s3_bucket_ownership_controls.example: Importing...
...
Apply complete! Resources: 4 imported, 0 added, 2 changed, 0 destroyed.

Best Practices Recap:

  • Use remote state storage with limited access.

  • Don’t edit state manually - use moved, import, or removed blocks instead (see the sketch after this list).

  • Keep all changes in version control and apply them through Terraform runs only.
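
For reference, moved and removed blocks let you rename resources in state or stop managing them without editing the state file directly; the resource names here are hypothetical:

# Rename a resource in state without destroying and recreating it
moved {
  from = aws_s3_bucket.old_name
  to   = aws_s3_bucket.example
}

# Remove a resource from state while keeping the real object
removed {
  from = aws_s3_bucket.legacy

  lifecycle {
    destroy = false
  }
}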

By keeping the state file secure and using Terraform’s built-in tools for managing existing resources, you reduce risks and keep your infrastructure under full control.

Apply Policies as Code to Prevent Misconfigurations

One of the biggest security risks when using Terraform is accidental misconfiguration - for example:

  • Creating public-facing S3 buckets

  • Opening up network ports to the internet

  • Deploying unencrypted databases or queues

To prevent mistakes like these, organizations use policy as code. This means writing rules (policies) that automatically check Terraform plans before any changes are applied to your infrastructure.

What is Sentinel?

Sentinel is HashiCorp’s policy-as-code framework, designed to work with Terraform Cloud and Terraform Enterprise. It lets you define rules in a dedicated policy language to enforce security, compliance, or operational requirements before resources are deployed.

❗ Note: Sentinel is not free. It is included with Terraform Cloud’s Team & Governance tier and Terraform Enterprise. If you’re using open-source Terraform, you can use open-source alternatives like OPA (Open Policy Agent).


How Sentinel Works

Sentinel can inspect the planned changes and enforce rules like “only allow specific instance types” or “ensure storage buckets are not public.” Here’s a simple example:

import "tfplan/v2" as tfplan

# Get all EC2 instances that are being created or updated
ec2_instances = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_instance" and
  (rc.change.actions contains "create" or rc.change.actions is ["update"])
}

# List of allowed EC2 instance types
allowed_types = [
  "t2.micro",
  "t2.small",
  "t2.medium",
]

# Rule: Only allow approved instance types
instance_type_allowed = rule {
  all ec2_instances as _, instance {
    instance.change.after.instance_type in allowed_types
  }
}

# Main rule: The policy passes only if instance_type_allowed is true
main = rule {
  instance_type_allowed else true
}

Why Use Policy as Code?

Using tools like Sentinel helps your organization:

  • Enforce security standards consistently

  • Reduce the risk of human error

  • Ensure infrastructure is compliant before it's deployed

  • Establish reusable policies that apply across multiple teams

Common use cases include:

  • Requiring encryption for databases and storage

  • Blocking public access to cloud resources

  • Enforcing naming conventions or tagging rules

  • Limiting resource types or regions


Summary

  • Sentinel is a policy-as-code tool by HashiCorp used with Terraform Cloud (Team & Governance tier) or Terraform Enterprise.

  • It helps enforce secure and compliant infrastructure by reviewing Terraform plans before deployment.

  • For open-source users, consider OPA as a free alternative.

  • Writing and applying standard policies makes infrastructure more secure, reliable, and consistent across teams.
