AWS CodePipeline now includes native support for deploying to Amazon EKS.

Previously, when using AWS CodePipeline for Amazon EKS deployment, you had to create a CodeBuild project to install kubectl or helm and manage the deployment. Now, with this recent update, AWS CodePipeline natively supports EKS deployment. The best part is that it supports both public and private cluster endpoints. For EKS clusters with private endpoints, you don't need to make any network changes; providing the necessary permissions to the CodePipeline service role is sufficient.

Official announcement: https://aws.amazon.com/about-aws/whats-new/2025/02/aws-codepipeline-native-amazon-eks-deployment-support/

In this article, I will cover not only the Deploy stage but also discuss an end-to-end pipeline. Feel free to skip the Build section if you are already familiar with those services.

Let's begin with setting up the EKS Cluster

We need an EKS cluster to test this out. A quick note: if the API server endpoint access is private, ensure the private subnets can reach the internet via a NAT Gateway, so make sure the route tables are configured correctly.

In this demo, my EKS cluster's API server endpoint access is private, and I have created an inbound rule in the cluster security group.
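If you want to verify the endpoint access settings and the cluster security group from the CLI, something like this works (awsfanboy-dev is the cluster name used in this demo; replace it with yours):

aws eks describe-cluster --name awsfanboy-dev \
  --query 'cluster.resourcesVpcConfig.{privateAccess:endpointPrivateAccess,publicAccess:endpointPublicAccess,clusterSecurityGroup:clusterSecurityGroupId}'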

You can follow my blog to create an EKS cluster with Auto Mode within a few minutes.

Creating the ECR Repository

For this demo, we need a container registry to store the container image that we will build during the build stage. Later, we will deploy this image to the EKS cluster. You can use the following simple Terraform script to create the registry or create it directly through the console.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

resource "aws_ecr_repository" "doggo_app_ecr" {
  name                 = "doggo-app-ecr"
  image_tag_mutability = "MUTABLE"
  image_scanning_configuration {
    scan_on_push = true
  }
}

output "ecr_repository_uri" {
  value       = aws_ecr_repository.doggo_app_ecr.repository_url
  description = "The URI of the ECR repository"
}
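With this saved in a Terraform file (main.tf, for example), a typical workflow to create the repository looks like this, assuming your AWS credentials and default region are already configured:

terraform init
terraform plan
terraform apply

The ecr_repository_uri output is the value you will later drop into the Helm chart's values.yaml.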

AWS CodePipeline Creation and Configuration

Create a pipeline from scratch and follow the steps as per my screenshots.

Creation option, select Build custom pipeline.

For this demo, under Service role, create a new service role. Since we are using an EKS cluster with a private API server endpoint, we will need to update this role's policy later to allow access to the EKS cluster's private subnets. You can proceed for now; we will handle this step later.

A quick note before we start the source stage: I have created this repository https://github.com/awsfanboy/doggo-container-app with all the resources needed for this demo. Make sure to update the values.yaml file with the image URL.

This Helm chart is ready for an EKS Auto Mode cluster. If your cluster is not enabled for EKS Auto Mode, you will need to install add-ons such as the AWS Load Balancer Controller.

image:
  repository: <replace with your ECR repository URL>
  # example: repository: 111111111111.dkr.ecr.ap-southeast-2.amazonaws.com/doggo-app-ecr
  tag: latest
  pullPolicy: Always

Set up the source stage by connecting your GitHub repository or any other repository you plan to use.

In the build stage, choose Other build providers and select AWS ECR. Then, select the ECR repository name from the dropdown menu.

Skip the Test stage for this demo and proceed to the deploy stage.

So far, what we've done isn't new, and I'm sure you're familiar with the previous steps. However, I wanted to explain each step for this demo.

In the Deploy stage, let's check these fields:

  • Select the deploy provider as Amazon EKS.

  • Since we have already created an EKS cluster in the same region, you can select the cluster from the dropdown.

  • For Deployment configuration type, choose either helm or kubectl. In this demo, we are using a Helm chart to deploy.

  • Provide the release name and Helm chart location (see the example below).

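As a purely illustrative sketch (the folder names here are assumptions, not the actual layout of the demo repository), if your chart lived under helm/doggo-app in the source repo, the chart location would be helm/doggo-app and the release name could simply be doggo-app:

doggo-container-app/
├── Dockerfile
└── helm/
    └── doggo-app/
        ├── Chart.yaml
        ├── values.yaml
        └── templates/

Adjust both values to match how the chart is organised in your own repository.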
That's all for this stage; go ahead and create the pipeline.

The pipeline will fail because there are more steps to follow later, but don't worry about it. Once we complete the setup, we can rerun the pipeline.

Cluster access from CodePipeline

This is a really important part, so let's break down the changes we are going to make.

  • Since we are using an EKS cluster with a private API server endpoint, we need to update the existing CodePipeline role to grant permissions for ec2:CreateNetworkInterface, ec2:CreateNetworkInterfacePermission, ec2:DeleteNetworkInterface, and some ec2:Describe permissions.

If your cluster endpoint is public, you can skip this step.

  • The CodePipeline service role needs to be authenticated and authorized to the cluster through EKS access entries. Additionally, the CodePipeline service role needs the eks:DescribeCluster permission to access the cluster.

Update the CodePipeline service role

Find your CodePipeline service role from the IAM roles and edit the existing role.

Update the role's permissions by adding the following policy document. Make sure to replace the subnet ARNs and the EKS cluster ARN with your own.

You can find the subnets in the Networking tab of the EKS cluster console.
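You can also pull the subnet IDs with the AWS CLI and build the subnet ARNs from them (again assuming the demo cluster name awsfanboy-dev):

aws eks describe-cluster --name awsfanboy-dev \
  --query 'cluster.resourcesVpcConfig.subnetIds' --output text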


            "Sid": "EksClusterPolicy",
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:aws:eks:ap-southeast-2:111111111111:cluster/awsfanboy-dev"
        },
        {
            "Sid": "EksVpcClusterPolicy",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:CreateNetworkInterface",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                     "ec2:Subnet": [
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0c147f0e842c43726",
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-068e82a6b56cc1742",
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0b12ab5e6baf6028d"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ec2:CreateNetworkInterfacePermission",
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0c147f0e842c43726",
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-068e82a6b56cc1742",
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0b12ab5e6baf6028d"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DeleteNetworkInterface",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                     "ec2:Subnet": [
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0c147f0e842c43726",
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-068e82a6b56cc1742",
                        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0b12ab5e6baf6028d"
                    ]
                }
            }
        }

Update the CodePipeline service role to access ECR.

Add the AmazonEC2ContainerRegistryPowerUser AWS managed policy to the CodePipeline service role so that the AWS ECR build provider can push the container image to ECR.
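If you prefer the CLI, attaching the managed policy looks like this (the role name below is a placeholder; use your actual CodePipeline service role name):

aws iam attach-role-policy \
  --role-name <your-codepipeline-service-role-name> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser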

EKS Cluster Authentication and Authorization

We can configure this easily via EKS access entries in the EKS cluster console.

Create access entry

Search for your CodePipeline service role and select it, then keep the type as Standard and click Next.

Now we need to attach the policy to authorize permissions for the CodePipeline service role. For this demo, I am using AmazonEKSClusterAdminPolicy. Keep the Access scope as Cluster, then click Add policy, followed by clicking Next to create the access entry.
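The same two steps can be done from the CLI; here is a minimal sketch, assuming the demo cluster name awsfanboy-dev and a placeholder role ARN:

aws eks create-access-entry \
  --cluster-name awsfanboy-dev \
  --principal-arn arn:aws:iam::111111111111:role/<your-codepipeline-service-role-name> \
  --type STANDARD

aws eks associate-access-policy \
  --cluster-name awsfanboy-dev \
  --principal-arn arn:aws:iam::111111111111:role/<your-codepipeline-service-role-name> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster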

That's it. Now we can go ahead and rerun the pipeline.

Run the Pipeline

Navigate to the pipeline and click Release change.
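You can also trigger it from the CLI (the pipeline name below is a placeholder; use the name you gave your pipeline):

aws codepipeline start-pipeline-execution --name <your-pipeline-name>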

If all goes well, you will see your pipeline execute successfully.

You can expand the deploy action and check the logs.

Verify the deployment

Since this is a private cluster, I am using CloudShell with a VPC environment, created in the same VPC where I have deployed the EKS cluster, so I can run kubectl and helm commands. We don't have to install kubectl, but we do need to install Helm. First, update the kubeconfig:

~ $ aws eks update-kubeconfig --name awsfanboy-dev

Install helm:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
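You can quickly confirm the installation with:

helm version --short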

List the Helm release and get the ingress URL.
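A minimal sketch of those commands (the --all-namespaces flag is there in case the release was not deployed to the default namespace):

helm list --all-namespaces
kubectl get ingress --all-namespaces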

You can also list the Ingress URL from the EKS console by navigating to the Resources tab and selecting the Ingress object.

Copy the Load Balancer URL and access it from your browser. You should see that our Doggo App Version 1 is running.
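You can also check it from the terminal (the hostname below is a placeholder for your actual load balancer DNS name):

curl -I http://<your-alb-dns-name>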

Eager to learn and curious about how the deployment action works for a private cluster, I looked at the CloudTrail logs and found network interface events where the userIdentity came from CodePipeline.
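If you want to run the same lookup from the CLI, something like this will surface those events (pick your own time window and region):

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateNetworkInterface \
  --max-results 10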

Conclusion

Previously, for EKS deployment, we had to use CodeBuild with a lot of configurations to manage Kubernetes deployments. But now, with native CodePipeline support, we can easily and directly deploy to EKS clusters. My favourite part is that we can deploy to an EKS cluster with a private endpoint without any customizations, except for the IAM policy, which is really awesome.

If you have any questions or comments, please let me know.
