How to Deploy and Manage Kubernetes on AWS using Kops with IAM Roles


Kubernetes (K8s) is the leading container orchestration platform, and Kops (Kubernetes Operations) is one of the most widely used tools for standing up production-grade clusters on AWS.

In this guide, we will deploy Kubernetes on AWS using Kops with IAM Role authentication instead of static IAM users. We'll also cover the cleanup process to remove the cluster when needed.


Prerequisites

1. AWS Account

Ensure you have an AWS account with administrative permissions.

2. Register a Domain Name

Kops requires a fully qualified domain name (FQDN) to manage your Kubernetes cluster.

  • Recommended: Use AWS Route 53 to manage your domain.

  • Or, use an external registrar and configure Route 53 manually.

Example domain: techfusion.life (Kops will manage subdomains like api.techfusion.life).
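If the domain is already registered, a quick check of its current name-server delegation can save debugging later (a minimal sketch, assuming `dig` is installed; substitute your own domain):

```bash
# Show which name servers currently answer for the domain
dig NS techfusion.life +short
```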

3. Create an S3 Bucket for Kops State Store

Kops needs an S3 bucket to store cluster configurations.

```bash
aws s3api create-bucket --bucket techfusion.life --region us-east-1
aws s3api put-bucket-versioning --bucket techfusion.life --versioning-configuration Status=Enabled
```

4. Configure Route 53 for DNS

  1. Open AWS Console → Navigate to Route 53.

  2. Click Hosted Zones → Create Hosted Zone, and create a new hosted zone for techfusion.life.

  3. If using an external registrar, update its NS (Name Server) records with the values from Route 53.
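If you prefer the CLI, these hosted-zone steps can be sketched roughly as follows (the caller reference is an arbitrary unique string; the final command prints the name servers to copy into your registrar):

```bash
# Create a public hosted zone for the domain
aws route53 create-hosted-zone \
  --name techfusion.life \
  --caller-reference "kops-$(date +%s)"

# Look up the hosted zone ID, then print its delegated name servers
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name techfusion.life \
  --query "HostedZones[0].Id" --output text)
aws route53 get-hosted-zone --id "$ZONE_ID" --query "DelegationSet.NameServers"
```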


Setting Up IAM Role for Kops

1. Create an IAM Role

  1. Open the AWS IAM Console → Click Roles → Create Role.

  2. Choose AWS Service → Select EC2 → Click Next.

  3. Attach the following policies (a CLI sketch for creating the role and attaching these policies follows this list):

```
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
AmazonSQSFullAccess
AmazonEventBridgeFullAccess
```

  4. Click Next, name the role Kops-Role, and create it.
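The same role can also be created from the command line. A minimal sketch, assuming the role name Kops-Role and a trust policy that lets EC2 assume the role:

```bash
# Trust policy allowing EC2 instances to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name Kops-Role \
  --assume-role-policy-document file://trust-policy.json

# Attach the managed policies listed above
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess \
              IAMFullAccess AmazonVPCFullAccess AmazonSQSFullAccess \
              AmazonEventBridgeFullAccess; do
  aws iam attach-role-policy --role-name Kops-Role \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# An instance profile is required to attach the role to an EC2 instance
aws iam create-instance-profile --instance-profile-name Kops-Role
aws iam add-role-to-instance-profile \
  --instance-profile-name Kops-Role --role-name Kops-Role
```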

2. Attach IAM Role to EC2 Instance

  1. Go to AWS EC2 Console → Select your EC2 instance.

  2. Click Actions → Security → Modify IAM Role.

  3. Select Kops-Role and update it.
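The console steps above have a CLI equivalent as well. A rough sketch, assuming the Kops-Role instance profile from the previous section and a placeholder instance ID:

```bash
# Replace i-0123456789abcdef0 with your instance ID (placeholder)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=Kops-Role
```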

3. Verify IAM Role Permissions

On the EC2 instance, run:

```bash
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
aws sts get-caller-identity
aws s3 ls
```

If these commands return output, the IAM role is attached correctly.


Creating a Kubernetes Cluster Using Kops

1. Install Required Tools

```bash
# Update packages
sudo yum update -y   # Amazon Linux
sudo apt update -y   # Ubuntu/Debian

# Install tools
sudo yum install -y jq net-tools unzip tree   # Amazon Linux
sudo apt install -y jq net-tools unzip tree   # Ubuntu/Debian
```

2. Install AWS CLI, Kops, and Kubectl

```bash
# AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

# Kops
curl -LO https://github.com/kubernetes/kops/releases/latest/download/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
kops version

# Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
```

3. Generate SSH Keys for Cluster Access

```bash
ssh-keygen -t rsa -b 4096 -C "kops-cluster" -f ~/.ssh/id_rsa
```

4. Set Environment Variables

```bash
export KOPS_CLUSTER_NAME=techfusion.life
export KOPS_STATE_STORE=s3://techfusion.life
```

To persist them:

```bash
echo 'export KOPS_CLUSTER_NAME=techfusion.life' >> ~/.bashrc
echo 'export KOPS_STATE_STORE=s3://techfusion.life' >> ~/.bashrc
source ~/.bashrc
```

5. Create the Kubernetes Cluster

```bash
kops create cluster --name=techfusion.life \
  --state=s3://techfusion.life \
  --zones=us-east-1a,us-east-1b \
  --node-count=2 \
  --control-plane-count=1 \
  --node-size=t3.medium \
  --control-plane-size=t3.medium \
  --control-plane-zones=us-east-1a \
  --control-plane-volume-size=20 \
  --node-volume-size=10 \
  --ssh-public-key=~/.ssh/id_rsa.pub \
  --dns-zone=techfusion.life \
  --networking=calico \
  --yes
```
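If you would rather inspect the generated configuration before any AWS resources are created, a common pattern (a sketch using the environment variables set earlier) is to run the same command without --yes, review or edit the spec, and then apply it:

```bash
# Review or tweak the generated cluster spec
kops edit cluster --name=$KOPS_CLUSTER_NAME --state=$KOPS_STATE_STORE

# Apply the configuration and create the AWS resources
kops update cluster --name=$KOPS_CLUSTER_NAME --state=$KOPS_STATE_STORE --yes
```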

6. Validate the Cluster

```bash
kops validate cluster --state=s3://techfusion.life
kubectl get nodes
```
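Provisioning can take several minutes, so validation may fail at first. One convenience (a sketch using kops' polling flag) is to let kops retry for you:

```bash
# Keep retrying validation for up to 10 minutes
kops validate cluster --state=s3://techfusion.life --wait 10m
```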


Deleting the Kubernetes Cluster

1. Delete the Cluster

```bash
kops delete cluster --name=$KOPS_CLUSTER_NAME --state=$KOPS_STATE_STORE --yes
```

2. Verify Deletion

```bash
kops get cluster --state=$KOPS_STATE_STORE
kubectl get nodes
```

3. Clean Up Resources

```bash
# Delete the S3 state store bucket
aws s3 rb s3://techfusion.life --force

# Delete the Route 53 hosted zone (replace <HOSTED_ZONE_ID> with your zone ID)
aws route53 delete-hosted-zone --id <HOSTED_ZONE_ID>

# Delete the IAM role
aws iam delete-role --role-name Kops-Role

# Delete the VPC (replace <VPC_ID> with your VPC ID)
aws ec2 delete-vpc --vpc-id <VPC_ID>
```
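Note that `aws iam delete-role` will fail while managed policies or an instance profile are still attached. A cleanup sketch, assuming the Kops-Role role and instance profile created earlier:

```bash
# Detach all managed policies from the role
for arn in $(aws iam list-attached-role-policies --role-name Kops-Role \
               --query "AttachedPolicies[].PolicyArn" --output text); do
  aws iam detach-role-policy --role-name Kops-Role --policy-arn "$arn"
done

# Remove the role from its instance profile and delete the profile
aws iam remove-role-from-instance-profile \
  --instance-profile-name Kops-Role --role-name Kops-Role
aws iam delete-instance-profile --instance-profile-name Kops-Role

# Now the role itself can be deleted
aws iam delete-role --role-name Kops-Role
```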


Conclusion

You have successfully deployed and deleted a Kubernetes cluster using Kops with IAM Role authentication on AWS! 🚀🎉
This method ensures a secure, scalable, and automated Kubernetes setup without relying on static IAM credentials.

💬 Have questions or improvements? Drop a comment below! 🚀
