Terraform State Locking (Part 5)

Harshit Sahu
5 min read

State


Terraform uses state to keep track of the infrastructure it manages. To use Terraform effectively, you must keep your state accurate and secure.

State is a necessary requirement for Terraform to function. It is often asked whether Terraform can work without state, or avoid state entirely and just inspect cloud resources on every run.

Terraform requires some sort of database to map Terraform config to the real world. Alongside the mapping between resources and remote objects, Terraform must also track metadata such as resource dependencies. Terraform also stores a cache of the attribute values for all resources in the state. This is done to improve performance.

For small infrastructure, Terraform can query your providers and sync the latest attributes from all your resources. This is the default behaviour of Terraform: for every plan and apply, Terraform will sync all resources in your state.

For large infrastructure, querying every resource is too slow. Larger users of Terraform make heavy use of the -refresh=false flag as well as the -target flag to work around this. In these scenarios, the cached state is treated as the record of truth.
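As a sketch, these flags are passed on the command line (the resource address below is the EC2 instance created later in this tutorial):

```shell
# Plan against the cached state instead of refreshing every resource
terraform plan -refresh=false

# Limit the operation to one resource (and its dependencies)
terraform plan -target=aws_instance.aws_ec2_test
```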

State Locking


State locking happens automatically on all operations that could write state. You won’t see any message that it is happening. If state locking fails, Terraform will not continue. You can disable state locking in most commands with the -lock=false flag, but it is not recommended.

Terraform provides the force-unlock command to manually unlock the state if unlocking failed.

terraform force-unlock [options] LOCK_ID [DIR]
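For example, if a terraform apply died before releasing its lock, the lock ID printed in the error message can be passed back (the ID below is made up for illustration):

```shell
terraform force-unlock 6638f9ae-92c4-1d2b-ab7e-0a3d7a2c5e11
```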

Sensitive Data

Terraform state can contain sensitive data, e.g. database passwords. When using a remote state, the state is only ever held in memory while it is in use by Terraform.

The S3 backend supports encryption at rest when the encrypt option is enabled. IAM policies and logging can be used to identify any unauthorized access. Requests for the state go over a TLS connection.
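A minimal sketch of enabling that option in an S3 backend block (using the bucket and table names from this tutorial):

```hcl
terraform {
  backend "s3" {
    bucket         = "terraweek-demo-state-bucket"
    key            = "terraform.tfstate"
    region         = "ap-south-1"
    encrypt        = true # server-side encryption of the state object at rest
    dynamodb_table = "terraweek-demo-state-table"
  }
}
```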

Terraform State Management

  • Remote Backend

  • State Locking

Example

Suppose multiple people in an organization provision infrastructure on AWS through Terraform at the same time. This can create conflicts, because two people are simultaneously changing the environment. So, to keep the state file safe and locked, we use a remote backend and state locking. When multiple DevOps engineers work on a project, state management can otherwise become difficult.

  • Why we don’t commit Terraform state to Git

Because Terraform state can contain sensitive information which should not be stored in source control. Additionally, if Terraform executes against different copies of the state file (i.e. on two separate machines), it might break your Terraform setup.

  • The solution? Set up a Terraform remote backend like AWS S3, HashiCorp Consul, or Terraform Cloud.

Go to your Instance

Create a folder for remote_infra

mkdir remote_infra
cd remote_infra

Create terraform.tf

#vi terraform.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.66.1"
    }
  }
}

Assign Provider

# vi provider.tf
provider "aws" {
    region = "ap-south-1"
}

Allocate Variables

# vi variables.tf
variable "state_bucket_name" {
    default = "terraweek-demo-state-bucket"
}

variable "state_table_name" {
    default = "terraweek-demo-state-table"
}

variable "aws_region" {
    default = "ap-south-1" # aligned with the provider region in provider.tf
}

Create Resource

# vi resources.tf
resource "aws_dynamodb_table" "my_state_table" {
    name         = var.state_table_name
    billing_mode = "PAY_PER_REQUEST"
    hash_key     = "LockID" # the S3 backend writes its lock entry under this key
    attribute {
        name = "LockID"
        type = "S" # String
    }
    tags = {
        Name = var.state_table_name
    }
}

resource "aws_s3_bucket" "my_state_bucket" {
    bucket = var.state_bucket_name # this bucket will hold terraform.tfstate
    tags = {
        Name = var.state_bucket_name
    }
}
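As an optional hardening step (not part of the original walkthrough), enabling versioning on the state bucket keeps a history of state revisions you can roll back to; with AWS provider v4 this is its own resource:

```hcl
resource "aws_s3_bucket_versioning" "my_state_bucket_versioning" {
  bucket = aws_s3_bucket.my_state_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}
```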

In order to connect Terraform to your AWS account, you need to export the access key and secret access key on your machine.

export AWS_ACCESS_KEY_ID=<access key>
export AWS_SECRET_ACCESS_KEY=<secret access key>

Terraform init

terraform init

Terraform plan

terraform plan

Terraform apply

terraform apply --auto-approve

Create a folder for remote_demo

mkdir remote_demo
cd remote_demo

Create terraform.tf

#vi terraform.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.66.1"
    }
  }

  backend "s3" {
    bucket         = "terraweek-demo-state-bucket"
    key            = "terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraweek-demo-state-table"
  }
}

Assign Provider

# vi provider.tf
provider "aws" {
    region = "ap-south-1"
}

Create Resource

# vi resource.tf
resource "aws_instance" "aws_ec2_test" {
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
    tags = {
        Name = "test-instance"
    }
}

Terraform init

terraform init

Terraform Validate

terraform validate

Terraform plan

terraform plan

Terraform apply

terraform apply --auto-approve

You can check with ls and observe that there is no local terraform.tfstate file; the state now lives in the remote backend.

Check on S3 service of AWS

Check locking on DynamoDB service of AWS → Explore table items → LockID


Run the following command to print the remote state file in your terminal

terraform state pull

Terraform State commands:

  • terraform state list: List resources within terraform state

  • terraform state mv: Move items within the Terraform state. This is used to rename a resource without destroying and re-creating it.

  • terraform state pull: Manually download and output the state from the state file.

  • terraform state push: Manually upload a local state file to the remote state.

  • terraform state rm: Remove items from the state. Items removed from the state are not physically destroyed; they are simply no longer managed by Terraform.

  • terraform state show: Show attributes of a single resource in the state.
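A couple of these in action (the old address below is the instance from this tutorial; the new name is hypothetical):

```shell
# See every resource tracked in the state
terraform state list

# Rename a resource in state without destroy/apply; update the .tf file to match
terraform state mv aws_instance.aws_ec2_test aws_instance.web_server
```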
