Observability with OpenTelemetry on an AWS EKS Cluster

Md Nur Mohammad · 5 min read

Step 1: AWS EKS setup

  1. Create a standard EC2 instance (a t2.micro is fine) to serve as the entry point for managing the EKS cluster.

  2. Install AWS CLI v2

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update
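# Optional check (not part of the original steps): confirm the CLI installed correctly
aws --version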
  3. IAM Roles

    Create a user “eks-admin” with AdministratorAccess

    Under Security credentials, create an access key (you will get an Access key ID and a Secret access key)

  4. AWS configuration

     aws configure
     #Output
     AWS Access Key ID [None]: AKIARTCYZNBOXFDCB3BQ # copy the access key ID from IAM Security Credentials and paste it here
     AWS Secret Access Key [None]: oWJp3OvBOmSJpKo6yz7i5ePIPrw/Yusi0vlAjUgQ # copy the secret access key and paste it here
     Default region name [None]: us-east-1 # choose the region where your EKS cluster will be created
     Default output format [None]: # leave blank to keep the default
     # To verify the configuration, you can run
     aws s3 ls
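     # Another optional check (not in the original steps): confirm which identity the CLI is using
     aws sts get-caller-identity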
    
  5. Kubernetes tools setup

    Install kubectl

     curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
     chmod +x ./kubectl
     sudo mv ./kubectl /usr/local/bin
     kubectl version --short --client
    

    Install eksctl

     curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
     sudo mv /tmp/eksctl /usr/local/bin
     eksctl version
    
  6. EKS cluster setup

     # You can change the cluster name, region, node type, and node count according to your needs.
     eksctl create cluster --name three-tier-cluster --region us-west-2 --node-type t2.medium --nodes-min 2 --nodes-max 2
     # To update your kubeconfig for the new cluster, run the command below
     aws eks update-kubeconfig --region us-west-2 --name three-tier-cluster
     kubectl get nodes
    

    Cluster creation takes about 10–15 minutes.

    Once the cluster is ready, you will have access to it.
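    If you want to check progress from another terminal, you can query the cluster status with the AWS CLI (an optional check, assuming the cluster name and region from the eksctl command above):

     # Shows CREATING while provisioning and ACTIVE once the cluster is ready
     aws eks describe-cluster --name three-tier-cluster --region us-west-2 --query "cluster.status"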

  7. Get and run the OTel demo

    1. Clone the Demo repository:

       git clone https://github.com/open-telemetry/opentelemetry-demo.git
      
    2. Change to the demo folder:

       cd opentelemetry-demo/
      
  8. Install Helm

     curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
     helm version
    
  9. Add the OpenTelemetry Helm repository:

     helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
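     # Optional (not in the original steps): refresh the local chart index after adding the repo
     helm repo update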
    

    To install the chart with the release name my-otel-demo, run the following command:

     helm install my-otel-demo open-telemetry/opentelemetry-demo
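     # Optional check (not in the original steps): the demo pods can take a few minutes to start.
     # This assumes the release was installed into the default namespace, as in the command above.
     kubectl get pods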
    
  10. Install using kubectl

    The following command will install the demo application to your Kubernetes cluster.

    kubectl create --namespace otel-demo -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-demo/main/kubernetes/opentelemetry-demo.yaml
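    # Note (not in the original steps): the otel-demo namespace must already exist.
    # If the command above fails with a "namespace not found" error, create it and re-run:
    kubectl create namespace otel-demo
    # Then verify that the demo pods are starting:
    kubectl get pods --namespace otel-demo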
    
  11. Expose services using kubectl port-forward

    To expose the frontend-proxy service, use the following command (replace otel-demo with your namespace if you installed the demo elsewhere):

    kubectl --namespace otel-demo port-forward svc/frontend-proxy 8080:8080
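    # Optional check from the EC2 instance (not in the original steps), while the port-forward above is running:
    curl -sI http://localhost:8080/ | head -n 1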
      
  12. Access App from Local Browser Using SSH Port Forwarding

    On your local machine, open a terminal and connect with your SSH client:

    Note: Keep the kubectl port-forward running on your EC2 instance (the entry point to EKS)

    ssh -i "otel-k8s.pem" -L 8080:localhost:8080 ubuntu@ec2-107-22-112-101.compute-1.amazonaws.com
    

    Now open the following URLs in your local browser to access the application:

    #With the frontend-proxy port-forward set up, you can access:
    
    #Web store: 
    http://localhost:8080/
    #Grafana: 
    http://localhost:8080/grafana/
    #Load Generator UI: 
    http://localhost:8080/loadgen/
    #Jaeger UI: 
    http://localhost:8080/jaeger/ui/
    #Flagd configurator UI:
    http://localhost:8080/feature
    
  13. Summary of What’s Happening:

    Local Browser (localhost:8080)
           ↓
    Local SSH Client Tunnel
           ↓
    EC2 Instance (localhost:8080)
           ↓
    Kubernetes Port-Forward
           ↓
    Service: frontend-proxy (ClusterIP)
           ↓
    Pod running frontend app
    
  14. Want to Clean Up?

    To avoid extra cost, you can:

    1. Delete the Helm release:

       helm uninstall my-otel-demo
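       # If you installed the demo with kubectl instead of Helm, remove those resources too (not in the original steps):
       kubectl delete --namespace otel-demo -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-demo/main/kubernetes/opentelemetry-demo.yaml
       kubectl delete namespace otel-demo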
      
  15. Delete EKS Cluster Using AWS CLI

    Step 1: List your EKS clusters

    aws eks list-clusters --region <your-region>
    

    The output lists the clusters in your region; in my case it showed otel-obsarvability.

    Find Your Node Group Name

    If you're unsure of your node group name, run:

    aws eks list-nodegroups \
      --cluster-name otel-obsarvability \
      --region us-east-1
    

    Step 2: Delete the Node Group(s) First

    aws eks delete-nodegroup \
      --cluster-name <your-cluster-name> \
      --nodegroup-name <your-node-group-name> \
      --region <your-region>
    

    (Repeat if you have multiple node groups.)
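    If you have several node groups, a small shell loop can remove them all. This is only a sketch, assuming the cluster name and region used in this walkthrough:

    CLUSTER=otel-obsarvability
    REGION=us-east-1
    # Delete every node group attached to the cluster
    for NG in $(aws eks list-nodegroups --cluster-name "$CLUSTER" --region "$REGION" --query "nodegroups[]" --output text); do
      aws eks delete-nodegroup --cluster-name "$CLUSTER" --nodegroup-name "$NG" --region "$REGION"
    done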

    The output of aws eks delete-nodegroup will look like this:

    ubuntu@ip-172-31-2-215:~/opentelemetry-demo/kubernetes$ aws eks delete-nodegroup \
      --cluster-name otel-obsarvability \
      --nodegroup-name ng-e748e373 \
      --region us-east-1
    {
        "nodegroup": {
            "nodegroupName": "ng-e748e373",
            "nodegroupArn": "arn:aws:eks:us-east-1:109707880541:nodegroup/otel-obsarvability/ng-e748e373/06cb9d1e-433e-14a5-22d6-40393eaf6eec",
            "clusterName": "otel-obsarvability",
            "version": "1.32",
            "releaseVersion": "1.32.3-20250519",
            "createdAt": "2025-06-04T09:18:56.085000+00:00",
            "modifiedAt": "2025-06-04T10:48:15.666000+00:00",
            "status": "DELETING",
            "capacityType": "ON_DEMAND",
            "scalingConfig": {
                "minSize": 2,
                "maxSize": 2,
                "desiredSize": 2
            },
            "instanceTypes": [
                "t2.large"
            ],
            "subnets": [
                "subnet-0067ad16a30aefd88",
                "subnet-0ae45dd458f109e06"
            ],
            "amiType": "AL2_x86_64",
            "nodeRole": "arn:aws:iam::109707880541:role/eksctl-otel-obsarvability-nodegrou-NodeInstanceRole-zo39hK3bhwij",
            "labels": {
                "alpha.eksctl.io/cluster-name": "otel-obsarvability",
                "alpha.eksctl.io/nodegroup-name": "ng-e748e373"
            },
            "resources": {
                "autoScalingGroups": [
                    {
                        "name": "eks-ng-e748e373-06cb9d1e-433e-14a5-22d6-40393eaf6eec"
                    }
                ]
            },
            "health": {
                "issues": []
            },
            "updateConfig": {
                "maxUnavailable": 1
            },
            "launchTemplate": {
                "name": "eksctl-otel-obsarvability-nodegroup-ng-e748e373",
                "version": "1",
                "id": "lt-0e6b756279938aefe"
            },
            "tags": {
                "aws:cloudformation:stack-name": "eksctl-otel-obsarvability-nodegroup-ng-e748e373",
                "alpha.eksctl.io/cluster-name": "otel-obsarvability",
                "alpha.eksctl.io/nodegroup-name": "ng-e748e373",
                "aws:cloudformation:stack-id": "arn:aws:cloudformation:us-east-1:109707880541:stack/eksctl-otel-obsarvability-nodegroup-ng-e748e373/e34148e0-4124-11f0-aaf0-12d2e14b57f3",
                "eksctl.cluster.k8s.io/v1alpha1/cluster-name": "otel-obsarvability",
                "aws:cloudformation:logical-id": "ManagedNodeGroup",
                "alpha.eksctl.io/nodegroup-type": "managed",
                "alpha.eksctl.io/eksctl-version": "0.208.0"
            }
        }
    }
    

    You’ve now triggered the deletion of the EKS managed node group:

    "status": "DELETING"
    

    This will:

    • Terminate the associated EC2 instances (2 t2.large nodes in your case)

    • Clean up the Auto Scaling Group

    • Remove the node group from your cluster

Wait for Node Group Deletion to Finish

It typically takes 2–5 minutes. You can monitor progress with:

    aws eks describe-nodegroup \
      --cluster-name otel-obsarvability \
      --nodegroup-name ng-e748e373 \
      --region us-east-1

Once it finishes, the two worker-node EC2 instances will be terminated.
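If you prefer not to poll, the AWS CLI also provides a waiter that blocks until the node group is fully deleted (same cluster, node group, and region as above):

    aws eks wait nodegroup-deleted \
      --cluster-name otel-obsarvability \
      --nodegroup-name ng-e748e373 \
      --region us-east-1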

Step 3: Delete the Cluster

    aws eks delete-cluster \
      --name <your-cluster-name> \
      --region <your-region>

The output will look like:

    ubuntu@ip-172-31-2-215:~$ aws eks delete-cluster \
      --name otel-obsarvability \
      --region us-east-1
    {
        "cluster": {
            "name": "otel-obsarvability",
            "arn": "arn:aws:eks:us-east-1:109707880541:cluster/otel-obsarvability",
            "createdAt": "2025-06-04T09:08:54.301000+00:00",
            "version": "1.32",
            "endpoint": "https://6011B26C6A5FD5284F0FC61B09C7E2EF.gr7.us-east-1.eks.amazonaws.com",
            "roleArn": "arn:aws:iam::109707880541:role/eksctl-otel-obsarvability-cluster-ServiceRole-xAovhMoAvLmE",
            "resourcesVpcConfig": {
                "subnetIds": [
                    "subnet-0067ad16a30aefd88",
                    "subnet-0ae45dd458f109e06",
                    "subnet-04388047aedd4a2fc",
                    "subnet-01414f6dbb7ac075f"
                ],
                "securityGroupIds": [
                    "sg-0be7fea6c256ba399"
                ],
                "clusterSecurityGroupId": "sg-0a3c0ab1b7b8f9dc6",
                "vpcId": "vpc-012786c5ae4c5407a",
                "endpointPublicAccess": true,
                "endpointPrivateAccess": false,
                "publicAccessCidrs": [
                    "0.0.0.0/0"
                ]
            },
            "kubernetesNetworkConfig": {
                "serviceIpv4Cidr": "10.100.0.0/16",
                "ipFamily": "ipv4",
                "elasticLoadBalancing": {
                    "enabled": false
                }
            },
            "logging": {
                "clusterLogging": [
                    {
                        "types": [
                            "api",
                            "audit",
                            "authenticator",
                            "controllerManager",
                            "scheduler"
                        ],
                        "enabled": false
                    }
                ]
            },
            "identity": {
                "oidc": {
                    "issuer": "https://oidc.eks.us-east-1.amazonaws.com/id/6011B26C6A5FD5284F0FC61B09C7E2EF"
                }
            },
            "status": "DELETING",
            "certificateAuthority": {
                "data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQm95RXV6UUFPU1F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBMk1EUX
    dPVEE0TlRWYUZ3MHpOVEEyTURJd09URXpOVFZhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNyeG9oaWNyNnlZcFlacm9GSXJ2c284ODJvT21uVHU3Nmt5a
    TF2M3lwa1VFWG4wbUlQVzFaMlNiOFcKTWRSazZHSmdtWVdlT1Vtc0Q0aElXYzZaWnFvcXBpNFZ1OGt3cENSWS8yUTgvb3E3KzFDYTZUMXdJdHhmdlMwTAplcEZHaEJqYnRMTlFvQzJMYnFIaFN6TW1hRGlSMW1wU1g3b1JHV2k5b1EyWUxNSzd1VWg5
    L3ZyVExDWFNXUkp0CmZ2bkptYnk2WCtmY0wvRWlxTDA5NE9ENVB3bjc1Si91VFhDVUsxblZsZitSNDhhRnlpVTlZTWtYNmpFY092bTkKYk16Y01WTWFsUmNoYmluemZjSjJSVEEvSTY5OEFsQ2x0QWI5TURrNjhPM0VvUjhLTnk1ZkhhbmlSK2tYWjR
    mawpobG9vZ2g3c0ROYjFzcUN3Tkd3OUh2NHEzWjVCQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRSG10c0s0QmhoOFFmOWpCakJKeDZ2cVhzZ1VqQVYKQmdOVkhSRU
    VEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQkhvNlVrQnRLNApDVktyNUZXTFdTbExrMktQOXJZeVE5SEJsUkdJdFg4YkZiUHp0eDJROXV6WlA1Zjh4dWpFRTFtalRhTG1MV1h0CkdKd1p3WWNnd3NVd1loUjQ5d
    3VNOGpMV2k1dzZaUjZpaG96NUJoQ3pkRDlUMyt0K1dwdy9MQzN1Vm52L09jZ1kKeExwRjlJWTI1dzdRY0o3TEZ6SGRWL0RFa1VRQXR1Znk2VDQvUFlwZHlKWDJxYit4OW4wYWpFTFJBcWpJYlVGUAp2VnJRTElsV0RFVzhVZFJuelE3djNkUktDZjB4
    UUtSRktCRTdyUnZqUDZBUHdFWGtGalVFUUFCRzd3RW5pQkh6ClhYa3VwRi9temdKUVM2OHdlSENvSUFxcnJvNFYwd3F0amdvYUNac3ZVcGhHbUhHM0xEd2NlQnRsTzRNWXdKcmgKTnV2S3Q1SEdtQnloCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0
    K"
            },
            "platformVersion": "eks.11",
            "tags": {
                "aws:cloudformation:stack-name": "eksctl-otel-obsarvability-cluster",
                "alpha.eksctl.io/cluster-name": "otel-obsarvability",
                "aws:cloudformation:stack-id": "arn:aws:cloudformation:us-east-1:109707880541:stack/eksctl-otel-obsarvability-cluster/7b4537c0-4123-11f0-8b64-123c9d6d2225",
                "eksctl.cluster.k8s.io/v1alpha1/cluster-name": "otel-obsarvability",
                "alpha.eksctl.io/cluster-oidc-enabled": "false",
                "aws:cloudformation:logical-id": "ControlPlane",
                "alpha.eksctl.io/eksctl-version": "0.208.0",
                "Name": "eksctl-otel-obsarvability-cluster/ControlPlane"
            },
            "accessConfig": {
                "authenticationMode": "API_AND_CONFIG_MAP"
            },
            "upgradePolicy": {
                "supportType": "EXTENDED"
            }
        }
    }

If you go to the EKS console, you will see that the cluster is being deleted.

Wait Until Deletion Completes

  • Cluster deletion usually takes 3–5 minutes

  • You can verify it's deleted with:

    aws eks describe-cluster \
      --name otel-obsarvability \
      --region us-east-1

While deletion is still in progress, the output shows "status": "DELETING"; once the cluster is fully gone, the command returns a ResourceNotFoundException.
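If you would rather block until the deletion completes, the AWS CLI also has a waiter for this (same cluster name and region as above):

    aws eks wait cluster-deleted \
      --name otel-obsarvability \
      --region us-east-1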

  16. Delete Using AWS Console

    1. Go to EKS Dashboard → Select your cluster

    2. First delete the Managed Node Groups

    3. Then delete the EKS Cluster

    4. Go to EC2 Dashboard → terminate any remaining instances

    5. Optional: Delete the VPC used for the cluster (from VPC Console)
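
Since the cluster was created with eksctl, another option (not shown above) is to let eksctl tear down everything it created, including the CloudFormation stacks for the node group, control plane, and VPC:

    eksctl delete cluster --name otel-obsarvability --region us-east-1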
