EKS Auto Mode: A Comprehensive Study Guide


This guide will help you understand the core concepts, features, and differences of Amazon EKS Auto Mode, also known as EKS Auto.
I. Core Concepts & Overview
• What is EKS Auto Mode? EKS Auto is a feature of Amazon Elastic Kubernetes Service (EKS) designed to significantly reduce the operational overhead of managing Kubernetes clusters.
• Why was EKS Auto created? The primary motivation behind EKS Auto is to allow users to focus on running their applications on EKS without worrying about the underlying management of worker nodes, AMIs, and core add-ons, which traditionally required significant manual effort.
• Traditional EKS Cluster Management (Pre-EKS Auto):
◦ Control Plane: Managed by AWS.
◦ Worker Nodes: Users manage worker nodes (e.g., EC2 instances), including AMIs, patching, and upgrading.
◦ Add-ons: Users install and manage core add-ons like Karpenter (autoscaler), an ingress controller, the EBS CSI driver, and CoreDNS on dedicated node groups. This involves manual upgrades whenever the Kubernetes version or an add-on version changes.
◦ Upgrades: Upgrading a Kubernetes cluster involved upgrading the control plane, then manually testing and upgrading add-ons, and AMIs in the data plane.
• EKS Auto Cluster Architecture:
◦ Control Plane: Managed by AWS.
◦ Core Add-ons: AWS runs and manages core add-ons (Karpenter, ingress, storage) on the AWS side, integrated with the control plane. This ensures scalability, high availability, and security.
◦ Managed Instances (Worker Nodes): AWS fully manages the worker nodes, referred to as "managed instances." Users do not need to manage AMIs, patching, or upgrades. These are essentially EC2 instances under AWS's management.
◦ Embedded Components: Components like CoreDNS, Kube-proxy, and VPC CNI are not run as DaemonSets but are baked into the AMI as processes, optimizing resource usage.
II. Key Features of EKS Auto
• Automated Core Add-on Management: AWS manages and upgrades core add-ons in lockstep with the control plane. If a new add-on version ships with a new EKS version, EKS Auto upgrades it automatically; otherwise the add-on remains as is. AWS ensures compatibility between add-on and EKS versions.
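The add-on upgrade rule can be sketched as a simple lookup. The compatibility map and all version numbers below are invented for illustration; the real mapping is maintained by AWS.

```python
# Illustrative compatibility map: for each EKS version, the add-on version
# AWS would ship with it. Versions here are invented for the example.
COMPATIBLE = {
    ("1.30", "karpenter"): "0.37",
    ("1.31", "karpenter"): "1.0",
    ("1.31", "coredns"):   "1.11",
}

def addon_version_after_upgrade(addon, current_version, new_eks_version):
    """If a newer compatible add-on version exists for the new EKS version,
    upgrade to it; otherwise the add-on stays as it is."""
    target = COMPATIBLE.get((new_eks_version, addon))
    if target is not None and target != current_version:
        return target
    return current_version

print(addon_version_after_upgrade("karpenter", "0.37", "1.31"))  # upgraded to 1.0
print(addon_version_after_upgrade("coredns", "1.11", "1.31"))    # already current, unchanged
```

The key point the sketch captures: the upgrade decision is driven by the control-plane version, not by the user.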
• Automated Worker Node Management: AWS manages the underlying EC2 instances (managed instances), including AMI updates, patching, and recycling.
◦ AMI Management: AWS manages the AMI (based on Bottlerocket OS). Users don't need to worry about AMI version selection, patching, or upgrades.
◦ Automated Node Recycling: Nodes are automatically recycled in a rolling fashion every 15 days by default (configurable up to a maximum of 21 days). This ensures the latest security patches are applied with zero touch.
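The recycling flow can be sketched as a small simulation. Everything here is illustrative: `Node`, `MAX_LIFETIME_DAYS`, and `recycle_expired` are not AWS APIs, just a model of the cordon/drain/replace cycle described above.

```python
from dataclasses import dataclass, field

MAX_LIFETIME_DAYS = 21  # assumed maximum node lifetime; AWS controls the real schedule

@dataclass
class Node:
    name: str
    age_days: int
    cordoned: bool = False
    pods: list = field(default_factory=list)

def recycle_expired(nodes):
    """Rolling replacement: for each expired node, launch a fresh one,
    cordon the old node, drain its pods onto the new node, then remove it."""
    result = []
    for node in nodes:
        if node.age_days >= MAX_LIFETIME_DAYS:
            fresh = Node(name=node.name + "-new", age_days=0)
            node.cordoned = True          # no new pods land on the old node
            fresh.pods.extend(node.pods)  # drain: evict and reschedule pods
            node.pods.clear()
            result.append(fresh)          # old node is terminated
        else:
            result.append(node)
    return result

fleet = [Node("n1", 22, pods=["web", "api"]), Node("n2", 3, pods=["worker"])]
fleet = recycle_expired(fleet)
print([(n.name, n.age_days, n.pods) for n in fleet])
# [('n1-new', 0, ['web', 'api']), ('n2', 3, ['worker'])]
```

In the real service the controller waits for replacement pods to become healthy before terminating the old node; the sketch compresses that into a single step.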
• Automatic EC2 Right Sizing and Bin Packing:
◦ EKS Auto automatically selects the appropriate-sized EC2 instance based on pod specifications (CPU and memory requests).
◦ It constantly monitors the cluster and bin-packs pods into the most cost-efficient configuration, terminating underutilized EC2 instances.
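Right sizing and bin packing can be illustrated with a toy first-fit-decreasing sketch. The instance catalog, pod specs, and algorithm are invented for the example; the real service uses Karpenter's provisioning logic, which is far more sophisticated.

```python
# Toy bin-packing sketch: pack pods (by CPU/memory requests) onto as few
# instances as possible, then pick the smallest instance type that fits each
# bin. Catalog and pod specs are illustrative, not real AWS data; pods are
# assumed to fit on the largest instance type.
INSTANCE_TYPES = [  # (name, vCPU, memory GiB), smallest first
    ("m5.large", 2, 8), ("m5.xlarge", 4, 16), ("m5.2xlarge", 8, 32),
]

def pack(pods):
    """First-fit decreasing: place the largest pods first, reusing an open
    bin whenever the pod still fits within the largest instance's capacity."""
    bins = []  # each bin: {"cpu": used vCPU, "mem": used GiB, "pods": [...]}
    max_cpu, max_mem = INSTANCE_TYPES[-1][1], INSTANCE_TYPES[-1][2]
    for name, cpu, mem in sorted(pods, key=lambda p: (p[1], p[2]), reverse=True):
        for b in bins:
            if b["cpu"] + cpu <= max_cpu and b["mem"] + mem <= max_mem:
                b["cpu"] += cpu; b["mem"] += mem; b["pods"].append(name)
                break
        else:
            bins.append({"cpu": cpu, "mem": mem, "pods": [name]})
    # Right-size: the smallest instance type that covers each bin's totals.
    return [(next(t for t, c, m in INSTANCE_TYPES if c >= b["cpu"] and m >= b["mem"]),
             b["pods"]) for b in bins]

print(pack([("api", 1, 2), ("web", 1, 2), ("batch", 6, 24)]))
# → [('m5.2xlarge', ['batch', 'api', 'web'])]
```

All three pods fit on one right-sized instance instead of three separate ones, which is where the cost saving comes from.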
• Support for Various Instance Types: Unlike Fargate, EKS Auto runs on managed EC2 instances and therefore supports a wide range of EC2 instance types (e.g., compute-optimized, memory-optimized, GPU instances).
• Support for DaemonSets, Service Mesh, and Sidecars: Since the underlying infrastructure is managed EC2, EKS Auto supports DaemonSets, service mesh, and any tools that run on pods or sidecars.
• Cost Optimization:
◦ Automatic right-sizing and bin packing contribute to cost savings.
◦ Users can utilize Reserved Instances, Savings Plans, and Spot Instances with EKS Auto.
• Seamless Upgrades:
◦ When the control plane is upgraded, EKS Auto automatically upgrades managed add-ons.
◦ For the data plane, AWS automatically updates worker nodes in a rolling deployment fashion: new managed instances are created, existing nodes are cordoned and drained, and pods are moved to the new instances.
• Kubernetes Scheduling Constraint Respect: EKS Auto respects Kubernetes scheduling constraints such as Pod Disruption Budgets, node selectors, and node affinity.
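A minimal sketch of how a drain honors a Pod Disruption Budget during the rolling updates described above. The logic mirrors what the Kubernetes eviction API enforces, but the functions here are illustrative, not the actual implementation.

```python
def can_evict(healthy_pods: int, min_available: int) -> bool:
    """Eviction is allowed only if the PDB still holds after removing one pod."""
    return healthy_pods - 1 >= min_available

def drain(node_pods: int, total_healthy: int, min_available: int) -> int:
    """Evict pods from a node one at a time, stopping when the PDB would be
    violated. Returns how many pods were evicted immediately. A real
    controller would wait for replacements to become healthy and retry."""
    evicted = 0
    for _ in range(node_pods):
        if not can_evict(total_healthy, min_available):
            break
        total_healthy -= 1
        evicted += 1
    return evicted

# 5 healthy replicas cluster-wide, PDB requires 4 available, node runs 3 of them:
print(drain(node_pods=3, total_healthy=5, min_available=4))  # only 1 can go immediately
```

This is why node recycling is "zero touch" but still safe: the PDB throttles how fast pods leave a node being replaced.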
III. EKS Auto vs. EKS Fargate
| Feature | EKS Auto Mode | EKS Fargate |
| --- | --- | --- |
| Core add-on mgmt. | AWS manages core add-ons (Karpenter, ingress, storage) alongside the control plane. | AWS does not manage core add-ons; they run outside the control plane and are the user's responsibility. |
| Underlying infra. | Runs on managed EC2 instances. | Runs on "Fargate instances" (one micro-VM per pod). |
| Pods per instance | Multiple pods can run on a single managed EC2 instance. | Only one pod runs per Fargate instance. |
| AMI management | AWS manages the AMI (based on Bottlerocket OS), including updates. | Not applicable; Fargate abstracts the underlying compute entirely. |
| DaemonSet support | Yes (nodes are EC2 instances). | No (one pod per instance). |
| Service mesh support | Yes. | No. |
| Instance type choice | Yes; any supported EC2 instance type (e.g., GPU, compute-optimized). | No; limited to Fargate's vCPU/memory combinations (no GPUs). |
| Automatic right sizing | Yes; automatically right-sizes EC2 instances and bin-packs pods. | No; a pod keeps the vCPU/memory it started with. |
| Cost saving options | Supports Reserved Instances, Savings Plans, and Spot Instances. | Consumption-based; Compute Savings Plans apply, but Reserved Instances and Spot are not available. |
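The comparison above can be condensed into a toy decision helper. The function and its rules are illustrative and simply restate the table; real compute-platform choices involve more factors.

```python
def pick_compute(needs_gpu=False, needs_daemonsets=False,
                 needs_service_mesh=False, one_pod_per_unit_ok=True):
    """Restates the comparison table: Fargate cannot run GPUs, DaemonSets,
    or service meshes, and places exactly one pod per Fargate instance."""
    if needs_gpu or needs_daemonsets or needs_service_mesh or not one_pod_per_unit_ok:
        return "EKS Auto Mode"
    return "either (EKS Auto Mode or EKS Fargate)"

print(pick_compute(needs_gpu=True))   # EKS Auto Mode
print(pick_compute())                 # either (EKS Auto Mode or EKS Fargate)
```

The takeaway: any requirement tied to node-level features pushes the decision toward EKS Auto Mode.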
IV. Getting Started with EKS Auto
• EKS Auto appears as a new option in the EKS console.
• Users can toggle EKS Auto Mode on when creating a new cluster via the console or CLI.
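Programmatically, enabling Auto Mode at cluster creation looks roughly like the request below. The `computeConfig`, `storageConfig`, and `kubernetesNetworkConfig.elasticLoadBalancing` fields and the built-in `general-purpose`/`system` node pool names reflect the EKS CreateCluster API at the time of writing; the role ARNs and subnet IDs are placeholders, and the payload is built but not sent.

```python
# Sketch of an EKS Auto Mode CreateCluster request (boto3-style). Field names
# follow the EKS API at the time of writing; all ARNs/IDs are placeholders.
create_cluster_request = {
    "name": "demo-auto-cluster",
    "roleArn": "arn:aws:iam::123456789012:role/eks-cluster-role",       # placeholder
    "resourcesVpcConfig": {
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],            # placeholders
    },
    "computeConfig": {  # this block is what turns Auto Mode on
        "enabled": True,
        "nodePools": ["general-purpose", "system"],  # built-in Auto Mode pools
        "nodeRoleArn": "arn:aws:iam::123456789012:role/eks-node-role",  # placeholder
    },
    "storageConfig": {"blockStorage": {"enabled": True}},        # managed EBS support
    "kubernetesNetworkConfig": {
        "elasticLoadBalancing": {"enabled": True},               # managed load balancing
    },
}

# To actually create the cluster (requires AWS credentials and permissions):
#   import boto3
#   boto3.client("eks").create_cluster(**create_cluster_request)
print(create_cluster_request["computeConfig"]["enabled"])  # True
```

Note that compute, block storage, and load balancing are enabled together here; Auto Mode manages all three as a unit.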
--------------------------------------------------------------------------------
Glossary of Key Terms
• EKS (Elastic Kubernetes Service): Amazon's managed Kubernetes service that simplifies running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
• EKS Auto Mode (EKS Auto): A feature of EKS that automates the management of worker nodes and core add-ons, significantly reducing operational overhead.
• Control Plane: The brain of a Kubernetes cluster, managed by AWS in EKS, responsible for managing the state of the cluster, scheduling pods, and handling API requests.
• Worker Nodes: The machines (e.g., EC2 instances) that run your application pods and workloads in a Kubernetes cluster.
• Pod: The smallest deployable unit in Kubernetes, which can contain one or more containers.
• AMI (Amazon Machine Image): A template that contains the software configuration (operating system, application server, and applications) required to launch an EC2 instance.
• Add-ons: Software components that extend the functionality of a Kubernetes cluster, such as autoscalers, ingress controllers, and storage drivers.
• Karpenter: An open-source, flexible, high-performance Kubernetes cluster autoscaler that can scale clusters faster and more efficiently than the traditional Cluster Autoscaler.
• Ingress: A Kubernetes API object that manages external access to the services in a cluster, typically HTTP/S.
• EBS Driver: A component that allows Kubernetes to provision and manage Amazon Elastic Block Store (EBS) volumes for persistent storage.
• CoreDNS: A flexible, extensible DNS server that can serve as the cluster DNS for Kubernetes.
• DaemonSet: A Kubernetes workload object that ensures a copy of a Pod is running on all (or some) nodes in a cluster. Often used for cluster-level functionalities like monitoring agents or logging collectors.
• Kube-proxy: A network proxy that runs on each node and maintains network rules on nodes, enabling network communication to your Pods from inside or outside of your cluster.
• VPC CNI: The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes, which assigns pods IP addresses from the VPC so that pods are first-class members of the VPC network.
• Managed Instances: The worker nodes in EKS Auto that are fully managed by AWS, including AMI and patching.
• Bottlerocket OS: A Linux-based operating system purpose-built by AWS for running containers.
• Bin Packing: An optimization technique where pods are efficiently packed onto the fewest possible EC2 instances to maximize resource utilization and reduce costs.
• Right Sizing: The process of automatically selecting the most appropriate size (CPU, memory) of an EC2 instance based on the resource requirements of the workloads.
• HPA (Horizontal Pod Autoscaler): A Kubernetes feature that automatically scales the number of pod replicas based on observed CPU utilization or other custom metrics.
• Reserved Instances (RIs): An AWS billing discount applied to the use of On-Demand Instances in your account.
• Savings Plans: A flexible pricing model that provides significant savings on AWS compute usage (EC2, Fargate, Lambda) in exchange for a commitment to a consistent amount of compute usage (measured in $/hour) for a 1- or 3-year term.
• Spot Instances: AWS EC2 instances that let you take advantage of unused EC2 capacity at a significantly reduced price.
• Rolling Deployment: A deployment strategy where new versions of an application are gradually rolled out to a subset of instances at a time, allowing for zero downtime updates.
• Cordon and Drain: Kubernetes commands used during node maintenance or upgrades. "Cordon" marks a node as unschedulable, and "drain" safely evicts all pods from a node, rescheduling them on other available nodes.
• Pod Disruption Budget (PDB): A Kubernetes API object that allows you to specify the minimum number or percentage of pods that must be available during voluntary disruptions (like node upgrades).
• Node Selector/Node Affinity: Kubernetes features used to constrain which nodes your pod is eligible to be scheduled on.
• EKS Fargate: An EKS capability that allows you to run Kubernetes pods without needing to provision or manage EC2 instances. Fargate abstracts the underlying compute layer entirely.
• Service Mesh: A dedicated infrastructure layer for handling service-to-service communication, often providing features like traffic management, security, and observability (e.g., Istio, Linkerd).
• Sidecar: A pattern in container design where a secondary container runs alongside the main application container in the same pod, providing supporting functionalities (e.g., logging, monitoring, proxying).
Written by

Balaji