Step by Step: How to Create Amazon EKS with Terraform


Elastic Kubernetes Service (EKS)
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Setting up and self-managing Kubernetes clusters is a tedious job, so AWS simplified the deployment, scaling, and operation of Kubernetes by offering a managed Kubernetes service called Elastic Kubernetes Service (EKS), operated by their team. All the customer needs to do is set up EKS and deploy their applications.
Terraform
Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to define, provision, and manage cloud infrastructure using a declarative configuration language called HCL (HashiCorp Configuration Language). Most cloud providers have their own IaC service (AWS has CloudFormation, for example), but Terraform is open source and can work with many different providers.
Prerequisite
AWS account
AWS cli
Terraform
Visual studio Code
kubectl
eksctl (optional)
Objective
Set up AWS credentials
Create EKS with Terraform
Create a remote backend
Modularize our Terraform code
Create state locking using DynamoDB
Clean up
We will start by creating AWS credentials. If you don't know how to create AWS credentials, refer to this article here, where I discuss the process in depth.
The next step is to configure the AWS credentials on our command line. Before you configure your credentials, make sure you have installed the AWS CLI; you can refer to this article, AWS CLI, to install it for your operating system.
To configure AWS credentials:
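Run aws configure and enter your access key ID, secret access key, default region, and output format when prompted. The region us-west-2 matches the defaults used later in this post; json is just a common choice for the output format.
aws --version    # confirm the AWS CLI is installed
aws configure
# AWS Access Key ID [None]: <your access key id>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: us-west-2
# Default output format [None]: json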
Important VS Code extensions that can improve productivity
HashiCorp Terraform
GitHub Copilot
Most companies modularize their Terraform code in order to:
Avoid Redundancy – Prevent repetitive code and improve efficiency
Enhance Reusability – Allow easy reuse of infrastructure components.
Improve Accessibility – Enable seamless collaboration and management
Maintain Clean Code – Ensure a structured and maintainable codebase
To modularize our Terraform code, we will create two folders inside the root directory:
backend
modules
Then, inside modules, create two sub-folders (lower case, so they match the module source paths used later):
eks
vpc
File structure
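The resulting layout looks roughly like this (the file names match the ones created later in this post):
.
├── main.tf
├── variables.tf
├── outputs.tf
├── backend/
│   └── main.tf
└── modules/
    ├── vpc/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── output.tf
    └── eks/
        ├── main.tf
        ├── variables.tf
        └── output.tf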
The next step is to create our remote backend. You can refer to my previous article, Remote backend with Terraform, for the full walkthrough.
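If you have not created the backend resources yet, the backend folder would contain something along these lines. This is only a rough sketch: the bucket and DynamoDB table names are assumed to match the backend block used later in this post, and the linked article covers the details.
# backend/main.tf
provider "aws" {
  region = "us-west-2"
}

# S3 bucket that stores the Terraform state file
resource "aws_s3_bucket" "terraform_state" {
  bucket = "remote-backend-s3"
}

# Keep previous state versions in case a state file gets corrupted
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB table used for state locking; the hash key must be named LockID
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "remote-backend-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}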
Before we can create EKS we need to create a VPC, because the default VPC is not a good fit for EKS: we want dedicated public and private subnets, tagged for Kubernetes, spread across availability zones.
What we need to create the VPC:
Public and private subnets
Internet gateway
NAT Gateway
Route Table
What we need to create EKS:
IAM role for the EKS cluster
IAM role for the node group
Terraform configurations are built from four important blocks:
Provider - Determines the cloud provider (AWS, Azure, GCP, etc.)
Resources - Create and manage infrastructure on the cloud platform
Variables - Make values reusable and configurable
Output - Displays values from the created resources
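As a tiny standalone illustration of those four blocks (an assumed example, not part of this project):
provider "aws" {                       # Provider: which cloud we are talking to
  region = var.region
}

variable "region" {                    # Variable: reusable, configurable input
  type    = string
  default = "us-west-2"
}

resource "aws_s3_bucket" "example" {   # Resource: infrastructure to create
  bucket = "my-example-bucket-2025"    # hypothetical bucket name
}

output "bucket_arn" {                  # Output: value displayed after creation
  value = aws_s3_bucket.example.arn
}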
Inside the vpc folder
- Create a file named main.tf
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${var.cluster_name}-vpc"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
}
resource "aws_subnet" "private" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.main.id
cidr_block = var.private_subnet_cidrs[count.index]
availability_zone = var.availability_zones[count.index]
tags = {
Name = "${var.cluster_name}-private-${count.index + 1}"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
resource "aws_subnet" "public" {
count = length(var.public_subnet_cidrs)
vpc_id = aws_vpc.main.id
cidr_block = var.public_subnet_cidrs[count.index]
availability_zone = var.availability_zones[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${var.cluster_name}-public-${count.index + 1}"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${var.cluster_name}-igw"
}
}
resource "aws_eip" "nat" {
count = length(var.public_subnet_cidrs)
domain = "vpc"
tags = {
Name = "${var.cluster_name}-nat-${count.index + 1}"
}
}
resource "aws_nat_gateway" "main" {
count = length(var.public_subnet_cidrs)
allocation_id = aws_eip.nat[count.index].id
subnet_id = aws_subnet.public[count.index].id
tags = {
Name = "${var.cluster_name}-nat-${count.index + 1}"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "${var.cluster_name}-public"
}
}
resource "aws_route_table" "private" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main[count.index].id
}
tags = {
Name = "${var.cluster_name}-private-${count.index + 1}"
}
}
resource "aws_route_table_association" "private" {
count = length(var.private_subnet_cidrs)
subnet_id = aws_subnet.private[count.index].id
route_table_id = aws_route_table.private[count.index].id
}
resource "aws_route_table_association" "public" {
count = length(var.public_subnet_cidrs)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
- Create another file named output.tf
output "vpc_id" {
description = "VPC ID"
value = aws_vpc.main.id
}
output "private_subnet_ids" {
description = "Private subnet IDs"
value = aws_subnet.private[*].id
}
output "public_subnet_ids" {
description = "Public subnet IDs"
value = aws_subnet.public[*].id
}
- Create another file named variables.tf
variable "vpc_cidr" {
description = "CIDR block for VPC"
type = string
}
variable "availability_zones" {
description = "Availability zones"
type = list(string)
}
variable "private_subnet_cidrs" {
description = "CIDR blocks for private subnets"
type = list(string)
}
variable "public_subnet_cidrs" {
description = "CIDR blocks for public subnets"
type = list(string)
}
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
}
Inside the eks folder
- main.tf
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${var.cluster_name}-vpc"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
}
resource "aws_subnet" "private" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.main.id
cidr_block = var.private_subnet_cidrs[count.index]
availability_zone = var.availability_zones[count.index]
tags = {
Name = "${var.cluster_name}-private-${count.index + 1}"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
resource "aws_subnet" "public" {
count = length(var.public_subnet_cidrs)
vpc_id = aws_vpc.main.id
cidr_block = var.public_subnet_cidrs[count.index]
availability_zone = var.availability_zones[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${var.cluster_name}-public-${count.index + 1}"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${var.cluster_name}-igw"
}
}
resource "aws_eip" "nat" {
count = length(var.public_subnet_cidrs)
domain = "vpc"
tags = {
Name = "${var.cluster_name}-nat-${count.index + 1}"
}
}
resource "aws_nat_gateway" "main" {
count = length(var.public_subnet_cidrs)
allocation_id = aws_eip.nat[count.index].id
subnet_id = aws_subnet.public[count.index].id
tags = {
Name = "${var.cluster_name}-nat-${count.index + 1}"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "${var.cluster_name}-public"
}
}
resource "aws_route_table" "private" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main[count.index].id
}
tags = {
Name = "${var.cluster_name}-private-${count.index + 1}"
}
}
resource "aws_route_table_association" "private" {
count = length(var.private_subnet_cidrs)
subnet_id = aws_subnet.private[count.index].id
route_table_id = aws_route_table.private[count.index].id
}
resource "aws_route_table_association" "public" {
count = length(var.public_subnet_cidrs)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
- output.tf
output "vpc_id" {
description = "VPC ID"
value = aws_vpc.main.id
}
output "private_subnet_ids" {
description = "Private subnet IDs"
value = aws_subnet.private[*].id
}
output "public_subnet_ids" {
description = "Public subnet IDs"
value = aws_subnet.public[*].id
}
variables.tf
variable "vpc_cidr" {
description = "CIDR block for VPC"
type = string
}
variable "availability_zones" {
description = "Availability zones"
type = list(string)
}
variable "private_subnet_cidrs" {
description = "CIDR blocks for private subnets"
type = list(string)
}
variable "public_subnet_cidrs" {
description = "CIDR blocks for public subnets"
type = list(string)
}
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
}
Terraform modules cannot run on their own; they have to be invoked. To invoke the VPC and EKS modules, we will create the following files in the root directory.
- main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "remote-backend-s3"
    key            = "terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "remote-backend-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = var.region
}

module "vpc" {
  source               = "./modules/vpc"
  vpc_cidr             = var.vpc_cidr
  availability_zones   = var.availability_zones
  private_subnet_cidrs = var.private_subnet_cidrs
  public_subnet_cidrs  = var.public_subnet_cidrs
  cluster_name         = var.cluster_name
}

module "eks" {
  source          = "./modules/eks"
  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnet_ids
  node_groups     = var.node_groups
}
Note:
The S3 bucket and DynamoDB table names used in the backend block are the ones I created in my previous post on setting up a remote backend and state locking with S3 and DynamoDB. You can check it out here: Remote backend with Terraform.
- outputs.tf
output "cluster_endpoint" {
  description = "EKS cluster endpoint"
  value       = module.eks.cluster_endpoint
}

output "cluster_name" {
  description = "EKS cluster name"
  value       = module.eks.cluster_name
}

output "vpc_id" {
  description = "VPC ID"
  value       = module.vpc.vpc_id
}
- variables.tf
variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "availability_zones" {
  description = "Availability zones"
  type        = list(string)
  default     = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnet_cidrs" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
  default     = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
}

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
  default     = "my-eks-cluster"
}

variable "cluster_version" {
  description = "Kubernetes version"
  type        = string
  default     = "1.30"
}

variable "node_groups" {
  description = "EKS node group configuration"
  type = map(object({
    instance_types = list(string)
    capacity_type  = string
    scaling_config = object({
      desired_size = number
      max_size     = number
      min_size     = number
    })
  }))
  default = {
    general = {
      instance_types = ["t3.medium"]
      capacity_type  = "ON_DEMAND"
      scaling_config = {
        desired_size = 2
        max_size     = 4
        min_size     = 1
      }
    }
  }
}
Go to the terminal and change directory into the root folder.
- Initialize Terraform
terraform init
- Preview the resources that will be created
terraform plan
- Create the resources in AWS
terraform apply
- Update your kubeconfig so kubectl can connect to the cluster (replace the placeholders with your region and cluster name; a filled-in example follows this list)
aws eks update-kubeconfig --region <region> --name <cluster-name>
- To view the kubeconfig
kubectl config view
- To show the current context
kubectl config current-context
- To switch between clusters (contexts)
kubectl config use-context <context-name>
- Clean up the resources
terraform destroy
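For reference, here is what the connection step from the list above looks like with the default values from variables.tf (run this after terraform apply and before any destroy); kubectl get nodes is a quick way to confirm the node group registered:
aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
kubectl get nodes    # the worker nodes from the "general" node group should report Ready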