How to Set Up a Kubernetes Cluster on AWS EKS Using Terraform


Introduction
🔶In this blog, I’ll walk you through how to set up a Kubernetes cluster on AWS using Terraform. By combining the power of Infrastructure as Code (IaC) with AWS’s managed Kubernetes service (EKS), we can automate the provisioning of a scalable and production-ready cluster with ease.
The Terraform script I’ve shared takes care of creating all the necessary AWS resources—VPC, subnets, IAM roles, security groups, and the EKS cluster itself—allowing you to spin up your environment reliably and consistently with just a few commands.
Whether you're a DevOps enthusiast or a developer exploring Kubernetes deployment, this guide will help you understand the process from start to finish.
Prerequisites
Before diving in, make sure you're familiar with the following:
✅ AWS Console – Basic understanding of navigating and managing services in the AWS Management Console.
✅ Terraform Scripting – Ability to write and understand Terraform configurations (HCL).
✅ VS Code – Experience using Visual Studio Code or any preferred code editor.
✅ Kubernetes Concepts – Familiarity with core Kubernetes components like pods, nodes, deployments, and services.
Why Do We Need Kubernetes?
As applications grow in complexity and scale, managing containers manually becomes inefficient and error-prone. Kubernetes, an open-source container orchestration platform, addresses these challenges by automating deployment, scaling, and management of containerized applications.
Key reasons we need Kubernetes:
Automated Deployment & Scaling: Kubernetes handles the rollout and scaling of containerized applications automatically based on demand.
Self-healing: If a container crashes, Kubernetes replaces it instantly without human intervention.
Load Balancing: It efficiently distributes network traffic to maintain stable performance.
Service Discovery: Kubernetes automatically assigns IPs and DNS names to containers for seamless communication.
Resource Optimization: It intelligently schedules containers to make the most of your infrastructure.
In this project, we are going to use AWS EKS, which gives us a managed Kubernetes control plane so we don't have to run or patch the master nodes ourselves.
💡Extra Knowledge
✅ How to Write an IP Address
An IPv4 address (the common one) looks like this:
192.168.1.1
It’s made up of 4 numbers, separated by dots. Each number is called an octet or block.
✅ What’s in Each Block?
Each block:
Is a number from 0 to 255
Represents 8 bits (since 1 byte = 8 bits, and 8 bits can represent numbers from 0 to 255)
So the full IP has 4 blocks × 8 bits = 32 bits total
Example:
IP:   192   .   168   .    1    .    1
       ↑         ↑         ↑         ↑
     block     block     block     block
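✅ From Octets to CIDR
The subnet definitions later in this post use CIDR notation (for example, 10.0.1.0/24). The number after the slash says how many of the 32 bits are fixed as the network prefix; the remaining bits are free for host addresses. A quick worked example:
10.0.0.0/16 → first 16 bits fixed (10.0), leaving 2^16 = 65,536 addresses for the whole VPC
10.0.1.0/24 → first 24 bits fixed (10.0.1), leaving 2^8 = 256 addresses for one subnet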
Let’s get started
🔹Project Structure and File Overview
To keep the configuration modular and maintainable, I’ve structured the Terraform project using multiple .tf files, each serving a specific purpose:

terraform.tf
This file contains the basic Terraform configuration, such as the required provider version and the backend (if configured). It acts as the entry point for initializing and managing the Terraform project.

providers.tf
Here, I define the AWS provider configuration, including the region. This tells Terraform which cloud platform to use and how to authenticate.

eks.tf (Main Configuration)
This is the heart of the project. It contains all the resources needed to provision the AWS EKS (Elastic Kubernetes Service) cluster, including the cluster itself, node groups, IAM roles, and the relevant associations.

locals.tf
This file defines local values that are reused across the project. It helps avoid repetition and keeps values consistent (like region names, tags, or naming conventions).

vpc.tf
The VPC configuration file sets up a custom Virtual Private Cloud, including subnets, internet gateways, and route tables. This ensures that all EKS nodes stay connected and operate within the same secure network boundary.
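Putting it all together, the project directory looks like this (the folder name is just what I used locally):
terraform_eks/
├── terraform.tf   # Terraform and provider version constraints
├── providers.tf   # AWS provider configuration
├── locals.tf      # Shared local values
├── vpc.tf         # VPC, subnets, and gateways
└── eks.tf         # EKS cluster and node groups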
terraform.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.93.0"
    }
  }
}
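If you later want to keep the state file remotely (the backend mentioned above), a block like the following can live in this file. This is only a minimal sketch assuming you already have an S3 bucket; the bucket name here is hypothetical:
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # hypothetical bucket name
    key    = "eks/terraform.tfstate"     # path of the state object in the bucket
    region = "us-east-1"
  }
}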
providers.tf
provider "aws" {
  region = local.region
}
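Note that no credentials are hard-coded here. The AWS provider picks them up from the standard locations, such as the AWS CLI configuration or environment variables:
$ aws configure   # writes credentials to ~/.aws/credentials
# or export them directly (values are placeholders):
$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>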
locals.tf
locals {
  region          = "us-east-1"
  name            = "my-eks-cluster" # EKS cluster name
  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
  intra_subnets   = ["10.0.5.0/24", "10.0.6.0/24"]
  env             = "dev" # Environment name
}
vpc.tf
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "first-eks-cluster-vpc"
  cidr = "10.0.0.0/16"

  azs             = local.azs
  private_subnets = local.private_subnets
  public_subnets  = local.public_subnets
  intra_subnets   = local.intra_subnets

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Terraform   = "true"
    Environment = local.env
  }
}
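The VPC module exposes outputs such as vpc_id, private_subnets, and intra_subnets, and eks.tf below consumes them directly. If you also want to print them after an apply, you can surface them yourself; a small optional sketch (for example in an outputs.tf file):
output "vpc_id" {
  description = "ID of the VPC created for the cluster"
  value       = module.vpc.vpc_id
}
output "private_subnets" {
  description = "Private subnet IDs used by the worker nodes"
  value       = module.vpc.private_subnets
}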
eks.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  # Cluster info
  cluster_name                   = "${local.name}-vpc-eks"
  cluster_version                = "1.31"
  cluster_endpoint_public_access = true

  # Core add-ons; most_recent picks the latest compatible version
  cluster_addons = {
    vpc-cni = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    coredns = {
      most_recent = true
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Dedicated subnets for the control plane network interfaces
  control_plane_subnet_ids = module.vpc.intra_subnets

  # EKS Managed Node Group(s) managing worker nodes in cluster
  eks_managed_node_group_defaults = {
    instance_types = ["t2.micro"]
    # Attach the cluster's primary security group to the node group
    # to allow communication between the nodes and the control plane
    attach_cluster_primary_security_group = true
  }

  eks_managed_node_groups = {
    # Node group name
    eks_cluster_ng = {
      instance_types = ["t2.micro"]
      min_size       = 2
      max_size       = 3
      desired_size   = 2
      capacity_type  = "SPOT" # ON_DEMAND or SPOT
    }
  }

  tags = {
    Name        = "${local.name}-vpc-eks"
    Environment = local.env
  }
}
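Optionally, you can also export the cluster details you will need when pointing kubectl at the cluster later. Another small sketch, using outputs the EKS module provides:
output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_name
}
output "cluster_endpoint" {
  description = "Endpoint of the EKS control plane"
  value       = module.eks.cluster_endpoint
}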
Now initialize Terraform using the init command:
$ terraform init
→ output for terraform init
vikrant@LAPTOP-DK2OJLBK MINGW64 ~/OneDrive/Desktop/AWSprojs/demo/terraform_eks
$ terraform init
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/tls from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Reusing previous version of hashicorp/cloudinit from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/cloudinit v2.3.6
- Using previously-installed hashicorp/null v3.2.3
- Using previously-installed hashicorp/aws v5.93.0
- Using previously-installed hashicorp/tls v4.0.6
- Using previously-installed hashicorp/time v0.13.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now run the terraform plan command to preview what will be created:
$ terraform plan
→ output for terraform plan
  + resource "null_resource" "validate_cluster_service_cidr" {
      + id = (known after apply)
    }

Plan: 66 to add, 0 to change, 0 to destroy.
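A plan only previews the changes, so the final step is to apply it and confirm with yes. Provisioning an EKS cluster usually takes 10-15 minutes. Once it completes, update your kubeconfig and check that the worker nodes have joined (the cluster name below is the one built in eks.tf from local.name):
$ terraform apply
$ aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster-vpc-eks
$ kubectl get nodes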
🎉Congratulations, we have successfully created a Kubernetes cluster on AWS using EKS and Terraform!