EKS RBAC with AWS IAM: A Step-by-Step Guide to Secure Kubernetes Access

Oshaba Samson

Role-Based Access Control (RBAC) is one of the core principles of security. Keeping a Kubernetes cluster secure involves several layers of defense, and RBAC is one of them; you can check out my previous post on the topic, 7 Layers of K8s Security. RBAC reinforces the principle of least privilege.

Objectives

  • Secure Access to EKS cluster

Prerequisites

  • A GitLab account

  • An AWS account

  • Knowledge of Terraform

In this tutorial we will configure AWS IAM roles that grant access to the EKS cluster through role assumption. The first thing we will do is create the users who will be able to access the cluster:

  • Log in to AWS

  • Type IAM in the search bar

  • Select Users and create a new user

  • Type the username (we will create two users: k8s-admin and k8s-developer)

  • Select "Attach policies directly"

  • Click Next

  • Preview and create the user

  • Clone this repo

This repo contains a Terraform script that creates an AWS VPC and an EKS cluster.

git clone https://gitlab.com/devops-tutorials2695221/terraform-eks-aws.git
cd terraform-eks-aws
touch .gitlab-ci.yml
  • Inside the EKS module block in main.tf, after the creation of the security group, add this code

  manage_aws_auth_configmap = true
  aws_auth_roles = local.aws_k8s_role_mapping

manage_aws_auth_configmap is a configuration flag used by the Terraform EKS module to control whether Terraform manages the aws-auth ConfigMap inside the EKS cluster. The aws-auth ConfigMap is critical for allowing IAM users and roles to authenticate to the Kubernetes cluster: it maps AWS IAM identities (users, roles) to Kubernetes RBAC users and groups.
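After terraform apply, you can confirm the mappings landed in the cluster. A quick check, assuming your kubeconfig already points at the cluster:

# mapRoles should list the external-admin and external-developer role ARNs
kubectl -n kube-system get configmap aws-auth -o yaml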

  • Create a file iam-roles.tf and add the following code, which maps the AWS IAM roles to Kubernetes users:

locals {
  aws_k8s_role_mapping = [{
      rolearn = aws_iam_role.external-admin.arn
      username = "admin"
      groups = ["none"]
    },
    {
      rolearn = aws_iam_role.external-developer.arn
      username = "developer"
      groups = ["none"]
    }
  ]
}

This creates the local variable aws_k8s_role_mapping (the one we referenced in main.tf), which contains a list of IAM role-to-Kubernetes user mappings.

resource "aws_iam_role" "external-admin" {
  name = "external-admin"

  assume_role_policy = jsonencode({
    ...
    Principal = {
      AWS = var.user_for_admin_role
    }
  })

  inline_policy {
    ...
    Action   = ["eks:DescribeCluster"]
  }
}

This defines an IAM role named external-admin:

  • Can be assumed by whatever IAM identity is passed in via the variable var.user_for_admin_role (here, the ARN of the k8s-admin user).

  • Policy allows the role to run eks:DescribeCluster — a basic EKS permission often needed for authentication.

The developer role is defined the same way:


resource "aws_iam_role" "external-developer" {
  name = "external-developer"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          AWS = var.user_for_dev_role
        }
      }
    ]
  })

  inline_policy {
    name = "external-developer-policy"

    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Action   = ["eks:DescribeCluster"]
          Effect   = "Allow"
          Resource = "*"
        }
      ]
    })
  }
}
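After terraform apply you can sanity-check the trust policy with the AWS CLI. A quick sketch for the developer role (the admin role works the same way):

# The trust policy should name the IAM user passed in via var.user_for_dev_role
aws iam get-role --role-name external-developer --query 'Role.AssumeRolePolicyDocument'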

Putting everything together

locals {
  aws_k8s_role_mapping = [{
      rolearn = aws_iam_role.external-admin.arn
      username = "admin"
      groups = ["none"]
    },
    {
      rolearn = aws_iam_role.external-developer.arn
      username = "developer"
      groups = ["none"]
    }
  ]
}

resource "aws_iam_role" "external-admin" {
  name = "external-admin"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          AWS = var.user_for_admin_role
        }
      }
    ]
  })

  inline_policy {
    name = "external-admin-policy"

    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {       
          Action   = ["eks:DescribeCluster"]
          Effect   = "Allow"
          Resource = "*"
        }
      ]
    })
  }
}

resource "aws_iam_role" "external-developer" {
  name = "external-developer"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          AWS = var.user_for_dev_role
        }
      }
    ]
  })

  inline_policy {
    name = "external-developer-policy"

    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Action   = ["eks:DescribeCluster"]
          Effect   = "Allow"
          Resource = "*"
        }
      ]
    })
  }
}
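Before handing this to the pipeline, it's worth a quick local sanity check (terraform validate needs an initialized working directory):

terraform fmt -recursive   # normalize formatting
terraform validate         # catch syntax and reference errors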
  • Create a file kube-resource.tf

This configures the Kubernetes provider to authenticate to the EKS cluster using AWS IAM.

  • host: EKS API server endpoint.

  • cluster_ca_certificate: Decoded CA to trust the cluster.

  • exec: Uses AWS CLI to get a token (this is the standard way of authenticating to EKS).

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
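Under the hood, the exec block runs the same command you can try by hand, assuming the AWS CLI is configured and the cluster is up:

# Prints a short-lived ExecCredential token, exactly what the provider consumes
aws eks get-token --cluster-name myapp-eks --region us-east-1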

Creates a Kubernetes namespace called online-boutique.

resource "kubernetes_namespace" "online-boutique" {
  metadata {
    name = "online-boutique"
  }
}

Then create the developer's Role with read-only verbs such as get, list, and watch. Note that describe is not an RBAC verb; kubectl describe is served by get and list.


resource "kubernetes_role" "namespace-viewer" {
  metadata {
    name = "namespace-viewer"
    namespace = "online-boutique"
  }

  rule {
    api_groups = [""]
    # Resource names must be plural ("configmaps"), and persistentvolumes are
    # cluster-scoped, so a namespaced Role can only grant persistentvolumeclaims
    resources  = ["pods", "services", "secrets", "configmaps", "persistentvolumeclaims"]
    verbs      = ["get", "list", "watch"]
  }

  rule {
    api_groups = ["apps"]
    resources  = ["deployments", "daemonsets", "statefulsets"]
    verbs      = ["get", "list", "watch"]
  }
}

This binds those permissions to the developer user we created. This means developer can view resources in the online-boutique namespace, but can't modify them.

resource "kubernetes_role_binding" "namespace-viewer" {
  metadata {
    name      = "namespace-viewer"
    namespace = "online-boutique"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = "namespace-viewer"
  }
  subject {
    kind      = "User"
    name      = "developer"
    api_group = "rbac.authorization.k8s.io"
  }
}
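You can verify the binding with kubectl impersonation; run these with credentials that already have admin rights on the cluster:

# Reads inside the namespace are allowed...
kubectl auth can-i list pods -n online-boutique --as developer    # yes
# ...but writes are not
kubectl auth can-i delete pods -n online-boutique --as developer  # no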

This is a ClusterRole, so it is not bound to a specific namespace. It grants read-only access to all core resources in all namespaces.

resource "kubernetes_cluster_role" "cluster_viewer" {
  metadata {
    name = "cluster-viewer"
  }

  rule {
    api_groups = [""]
    resources  = ["*"]
    verbs      = ["get", "list", "watch"] # "describe" is not an RBAC verb
  }
}

This binds the admin user to the cluster role:

resource "kubernetes_cluster_role_binding" "cluster_viewer" {
  metadata {
    name = "cluster-viewer"
  }

  role_ref {
    kind     = "ClusterRole"
    name     = "cluster-viewer"
    api_group = "rbac.authorization.k8s.io"
  }

  subject {
    kind      = "User"
    name      = "admin"
    api_group = "rbac.authorization.k8s.io"
  }
}
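The same impersonation trick confirms the admin's cluster-wide, read-only view:

kubectl auth can-i list pods --all-namespaces --as admin  # yes
kubectl auth can-i delete pods --as admin                 # no, the role is read-only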

Putting everything together

provider "kubernetes" {
  host = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command = "aws"
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

resource "kubernetes_namespace" "online-boutique" {
  metadata {
    name = "online-boutique"
  }
}

resource "kubernetes_role" "namespace-viewer" {
  metadata {
    name = "namespace-viewer"
    namespace = "online-boutique"
  }

  rule {
    api_groups = [""]
    # Same fixes as above: plural resource names, and persistentvolumeclaims
    # instead of the cluster-scoped persistentvolumes
    resources  = ["pods", "services", "secrets", "configmaps", "persistentvolumeclaims"]
    verbs      = ["get", "list", "watch"]
  }

  rule {
    api_groups = ["apps"]
    resources  = ["deployments", "daemonsets", "statefulsets"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_role_binding" "namespace-viewer" {
  metadata {
    name      = "namespace-viewer"
    namespace = "online-boutique"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = "namespace-viewer"
  }
  subject {
    kind      = "User"
    name      = "developer"
    api_group = "rbac.authorization.k8s.io"
  }
}

resource "kubernetes_cluster_role" "cluster_viewer" {
  metadata {
    name = "cluster-viewer"
  }

  rule {
    api_groups = [""]
    resources  = ["*"]
    verbs      = ["get", "list", "watch"] # "describe" is not an RBAC verb
  }
}

resource "kubernetes_cluster_role_binding" "cluster_viewer" {
  metadata {
    name = "cluster-viewer"
  }

  role_ref {
    kind     = "ClusterRole"
    name     = "cluster-viewer"
    api_group = "rbac.authorization.k8s.io"
  }

  subject {
    kind      = "User"
    name      = "admin"
    api_group = "rbac.authorization.k8s.io"
  }
}
  • Go to AWS IAM and open Users

  • Click on k8s-admin

  • Copy the ARN

  • Go to the GitLab settings and create variables for k8s-admin and k8s-developer. If you don't know how to do that, you can check out my previous post, How to Securely Store and Use Variables in GitLab Pipelines.

  • For the admin user, use user_for_admin_role as the key and the k8s-admin ARN as the value

  • For the developer user, use user_for_dev_role (matching the variable name in iam-roles.tf) as the key and the k8s-developer ARN as the value
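One caveat: Terraform only reads variables from the environment when they carry the TF_VAR_ prefix, so if the pipeline relies on environment variables rather than -var flags, the GitLab keys may need the prefix. A hypothetical sketch of the equivalent shell exports (<account-id> is your AWS account ID):

# Hypothetical: only needed if the pipeline passes variables via the environment
export TF_VAR_user_for_admin_role="arn:aws:iam::<account-id>:user/k8s-admin"
export TF_VAR_user_for_dev_role="arn:aws:iam::<account-id>:user/k8s-developer"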

Setup OIDC

Check out my previous article Step-by-Step Guide to Setting Up AWS OIDC for Secure CI/CD Integration

GitLab CI

  • Create a file .gitlab-ci.yml

  • Define the stages


stages:
- init
- build
- deploy
- cleanup

  • We need an image that has Terraform so we can run the Terraform scripts:

image:
  name: hashicorp/terraform:1.7
  entrypoint: [""]

  • Initialize Terraform:

init:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  stage: init
  before_script:
  # install aws cli
  - apk --no-cache add curl python3 py3-pip
  - pip3 install --no-cache-dir awscli --break-system-packages

  # establish connection with AWS to get access credentials
  - >
    export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
    $(aws sts assume-role-with-web-identity
    --role-arn ${ROLE_ARN}
    --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
    --web-identity-token ${GITLAB_OIDC_TOKEN}
    --duration-seconds 3600
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
    --output text))
  - aws sts get-caller-identity
  script:
    - terraform init
  artifacts:
    paths:
      - .terraform/

  • Run terraform plan:

build:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  stage: build
  before_script:
  # install aws cli
  - apk --no-cache add curl python3 py3-pip
  - pip3 install --no-cache-dir awscli --break-system-packages

  # establish connection with AWS to get access credentials
  - >
    export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
    $(aws sts assume-role-with-web-identity
    --role-arn ${ROLE_ARN}
    --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
    --web-identity-token ${GITLAB_OIDC_TOKEN}
    --duration-seconds 3600
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
    --output text))
  - aws sts get-caller-identity
  script:
  - terraform plan -out "planfile"
  artifacts:
    paths:
      - planfile

  • Apply the Terraform plan:

deploy:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  stage: deploy
  before_script:
  # install aws cli
  - apk --no-cache add curl python3 py3-pip
  - pip3 install --no-cache-dir awscli --break-system-packages

  # establish connection with AWS to get access credentials
  - >
    export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
    $(aws sts assume-role-with-web-identity
    --role-arn ${ROLE_ARN}
    --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
    --web-identity-token ${GITLAB_OIDC_TOKEN}
    --duration-seconds 3600
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
    --output text))
  - aws sts get-caller-identity
  script:
  - terraform apply -input=false "planfile"

  • Destroy the cluster when you are done (this job is manual):

cleanup:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  stage: cleanup
  before_script:
  # install aws cli
  - apk --no-cache add curl python3 py3-pip
  - pip3 install --no-cache-dir awscli --break-system-packages

  # establish connection with AWS to get access credentials
  - >
    export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
    $(aws sts assume-role-with-web-identity
    --role-arn ${ROLE_ARN}
    --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
    --web-identity-token ${GITLAB_OIDC_TOKEN}
    --duration-seconds 3600
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
    --output text))
  - aws sts get-caller-identity
  script:
  - terraform destroy -auto-approve
  when: manual

Accessing EKS

Configure AWS access and secret keys

For guidance on securely creating AWS IAM credentials, refer to my previous post: Best Practices for Creating AWS IAM Credentials Using the AWS Console. When generating access and secret keys, ensure you create them for both the k8s-admin and k8s-developer users.

Once the access and secret keys are configured, run the following command on your local machine to assume the external-admin role as the k8s-admin user. Repeat the same steps, with the external-developer role, for the k8s-developer user.

Install jq, which the command below uses to parse the credentials JSON:

eval $(aws sts assume-role \
  --role-arn "arn:aws:iam::00xxxxxxxxx:role/external-admin" \
  --role-session-name "k8SSession" \
  | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId) AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) AWS_SESSION_TOKEN=\(.SessionToken)"')
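Confirm the shell now holds the assumed role's credentials:

# The Arn field should show assumed-role/external-admin/k8SSession
aws sts get-caller-identity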

Then log in to the Kubernetes cluster by updating your kubeconfig:

aws eks --region us-east-1 update-kubeconfig --name myapp-eks
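With the kubeconfig updated, you can watch the RBAC rules from earlier take effect, for example under the developer role:

# Reads inside online-boutique succeed...
kubectl get pods -n online-boutique
# ...while other namespaces are forbidden (the Role is namespace-scoped)
kubectl get pods -n kube-system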