🚀 Solving Terraform State Problems with S3 and DynamoDB — Full Guide

When managing infrastructure as code (IaC) using Terraform, everything revolves around the state file (`terraform.tfstate`).
It keeps track of every resource Terraform manages. But as you scale your projects, the way you handle this state becomes critical.
Let’s explore why local state can become your biggest nightmare and how S3 + DynamoDB can make it super-safe and team-friendly!
💣 The Real Problem: Why Local Terraform State is Dangerous
Initially, when you're working solo on a small project, keeping `.tfstate` locally seems fine.
But as soon as:

- You have multiple team members
- Your infrastructure grows complex
- You automate with CI/CD pipelines

then local state causes serious issues like:
🔥 Single Point of Failure
If your local `.tfstate` is deleted or corrupted, Terraform forgets about the resources it created.
Suddenly, it might think nothing exists, leading to accidental recreation or deletion of real-world infrastructure!
🔥 No Team Collaboration
If multiple developers work with different local copies of the `.tfstate`, the states get out of sync.
Terraform doesn't know the real current state of AWS/Azure/GCP.
🔥 Conflicts and Race Conditions
Two team members running `terraform apply` at the same time can overwrite or corrupt the state.
Example: Developer A creates a new EC2 instance while Developer B deletes an S3 bucket, each working from a different `.tfstate` version.
🔥 No Backup or Versioning
If mistakes happen, you can’t roll back unless you manually backed up the file — which no one does consistently!
🛑 In short: Local state is risky for production-grade systems.
🎯 The Solution: Remote Backend + State Locking
Terraform allows storing the state remotely using a backend.
The best combination in AWS is:
| Service | Purpose |
| --- | --- |
| S3 Bucket | Stores and versions the state file |
| DynamoDB Table | Handles locking and prevents simultaneous updates |
This setup offers:
✅ Safe Storage
✅ Easy Recovery
✅ Collaborative Workflows
✅ Automatic Conflict Prevention
📚 Setting It Up: Step-by-Step
Let’s walk through setting up an S3 backend and DynamoDB table for remote state management and locking.
📦 Step 1: Create an S3 Bucket for State Storage
resource "aws_s3_bucket" "remote_s3" {
bucket = "bucket_name"
tags = {
Name = "bucket_name"
}
}
👉 Best Practices for the S3 Bucket:

- Enable Versioning to recover accidentally deleted or modified state files (a sketch of this and the next item follows below).
- Enable Server-Side Encryption for security.
- Use Bucket Policies to restrict access.
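Since version 4 of the AWS provider, versioning and encryption are configured as separate resources rather than inline arguments on `aws_s3_bucket`. A minimal sketch, assuming the `aws_s3_bucket.remote_s3` resource from Step 1:

```hcl
# Keep old versions of the state file so they can be restored
resource "aws_s3_bucket_versioning" "remote_s3" {
  bucket = aws_s3_bucket.remote_s3.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt state objects at rest (SSE-S3 here; a KMS key can be used instead)
resource "aws_s3_bucket_server_side_encryption_configuration" "remote_s3" {
  bucket = aws_s3_bucket.remote_s3.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```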
📄 Step 2: Create a DynamoDB Table for State Locking
resource "aws_dynamodb_table" "basic-dynamodb-table" {
name = "table_name"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
tags = {
Name = "table_name"
}
}
👉 Important Points about the DynamoDB Table:

- `LockID` must be of String (`S`) type.
- DynamoDB records a lock entry whenever someone runs `terraform apply`, avoiding conflicts.
- The lock is released automatically once the run finishes or if there’s a failure.
🔧 Step 3: Configure Terraform Backend
In the `terraform.tf` file where the provider is configured, add the following backend block:
```hcl
terraform {
  backend "s3" {
    bucket         = "your-bucket-name"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "table_name"
  }
}
```
This tells Terraform to:

- Save `terraform.tfstate` in the `your-bucket-name` bucket
- Use DynamoDB locking through `table_name`
- Operate everything inside the `us-east-2` AWS region
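After adding the backend block, run `terraform init` again. If a local state file already exists, Terraform offers to copy it into the new backend; the `-migrate-state` flag makes that intent explicit:

```sh
terraform init -migrate-state
```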
🔄 Deep Dive: How Do S3 + DynamoDB Work Together?
Let’s break it down:
| Step | Action |
| --- | --- |
| 1 | User runs `terraform plan` / `terraform apply`. |
| 2 | Terraform checks the DynamoDB table for a lock. |
| 3 | If there is no lock, Terraform writes a lock record. |
| 4 | Terraform reads the latest state from S3. |
| 5 | Terraform applies the changes and uploads the updated state to S3. |
| 6 | Terraform removes the lock from DynamoDB after finishing. |
What if someone else runs `terraform apply` while the state is locked?
They get a clear error: `Error: Error acquiring the state lock`.
By default Terraform fails immediately; with the `-lock-timeout` flag it keeps retrying until the lock is released, as shown below.
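Two handy commands for working with locks; `<LOCK_ID>` is a placeholder for the ID printed in the lock error message:

```sh
# Retry acquiring the lock for up to 5 minutes instead of failing immediately
terraform apply -lock-timeout=5m

# Release a stale lock left behind by a crashed run
# (only when you are sure no other apply is in progress)
terraform force-unlock <LOCK_ID>
```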
⚠️ What Happens If You Don't Set This Up?
- You can lose your state with no recovery.
- Different team members might overwrite each other’s changes.
- Conflicts lead to broken infra (or worse, downtime!).
🧠 Common Problems with State Files (And How to Fix Them)
| Problem | Solution |
| --- | --- |
| Accidental state deletion | S3 versioning allows recovery. |
| Conflicting Terraform applies | The DynamoDB lock ensures only one apply at a time. |
| Manual edits to `.tfstate` | S3 versioning allows rollback to an earlier copy. |
| Losing local `.tfstate` | No problem; the state is stored centrally in S3. |
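If you ever need to inspect or back up the remote state by hand, the built-in state commands are safer than editing the file in S3 directly (the backup file name below is arbitrary):

```sh
# Download the current remote state to a local backup file
terraform state pull > backup.tfstate

# List the resources Terraform is tracking in that state
terraform state list
```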
🌟 Advanced Best Practices
- Use S3 Bucket Lifecycle Rules to automatically expire old state versions after X days (a sketch follows below).
- Enable CloudTrail data events on the S3 bucket to track who modified the state.
- Use IAM Roles and Policies to restrict access to the S3 bucket and DynamoDB table.
- Encrypt the S3 bucket and DynamoDB table at rest using KMS keys.
- Use Terraform Workspaces if you manage multiple environments like dev/staging/prod.
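A minimal sketch of the first item, assuming the bucket resource from Step 1; the 90-day window is illustrative, so tune it to your own retention needs:

```hcl
# Expire non-current state versions after 90 days so the bucket does not grow forever
resource "aws_s3_bucket_lifecycle_configuration" "remote_s3" {
  bucket = aws_s3_bucket.remote_s3.id

  rule {
    id     = "expire-old-state-versions"
    status = "Enabled"

    filter {} # apply the rule to every object in the bucket

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}
```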
🎯 Final Thoughts
Setting up a remote backend with S3 and state locking with DynamoDB is not optional; it’s mandatory for serious Terraform projects.
It transforms your Terraform from:
Risky solo scripts ➔ To ✨ production-ready, team-safe deployments.
Fragile local state ➔ To 🛡️ reliable, centralized state management.
It takes a little setup initially but saves hours of future debugging, conflicts, and disasters!
My LinkedIn Profile: https://www.linkedin.com/in/binereet-singh-9a7685316/