Scaling Kubernetes on AWS: Real-World Approaches for Cloud-Native Success


Understanding Declarative Management in Kubernetes
In Kubernetes, the declarative approach means describing the desired state of your infrastructure using configuration files (typically YAML), rather than issuing specific commands to modify resources. This is in contrast to the imperative approach, where you define explicit steps and commands.
In a declarative setup:
- You define what the final state should look like
- Kubernetes continuously reconciles the actual state to match the desired state
- If something changes unexpectedly (like a crashed pod), Kubernetes will automatically correct it
Benefits of the Declarative Approach
- Self-healing: Kubernetes automatically restores the desired state
- Version control: YAML files can be tracked and reviewed via Git
- Reproducibility: Consistent results across environments
- Auditability: System changes are documented in code
A simple example: instead of manually starting three containers, you declare:
replicas: 3
If one container crashes, Kubernetes automatically brings up another to maintain the count.
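To make the contrast concrete, here is a minimal sketch of the two styles. The deployment name web and the file web-deployment.yaml are placeholders for illustration, not part of this tutorial:

# Imperative: you tell Kubernetes exactly what to do, step by step
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Declarative: you describe the desired state in a file and let Kubernetes reconcile it
kubectl apply -f web-deployment.yaml

With the declarative form, re-running the same apply is safe: Kubernetes only acts if the cluster has drifted from what the file declares.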
Kubernetes Controllers and the Control Loop Pattern
Kubernetes uses controllers to maintain this declared state. Examples include:
Deployment Controller
- Ensures a specific number of replicas are running
- Automatically replaces unhealthy pods
ReplicaSet Controller
- Maintains the set of identical pods defined by a Deployment's pod template
Node Controller
- Monitors node availability and manages pod eviction on failure
StatefulSet Controller
- Handles deployment and scaling of stateful applications
- Ensures a unique identity and persistent storage for each pod
All controllers follow a reconciliation loop:
- Observe the actual state
- Compare it with the desired state
- Act to reconcile differences
- Repeat constantly
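You can watch this loop in action on any running Deployment. This is just an illustrative sequence; the pod name is a placeholder you would copy from kubectl get pods:

# Delete one pod managed by a Deployment/ReplicaSet
kubectl delete pod <pod-name>

# Watch the controller immediately create a replacement to restore the replica count
kubectl get pods --watch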
Declarative Deployment Example
Here's a sample deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
When you apply this using:
kubectl apply -f deployment.yaml
Kubernetes automatically manages the deployment lifecycle for you.
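If you want to see the reconciliation yourself, these standard kubectl commands (assuming the manifest above was saved as deployment.yaml and applied) show the rollout progress and the objects it created:

# Wait for the Deployment to reach its desired state
kubectl rollout status deployment/nginx-deployment

# Inspect the Deployment, its ReplicaSet, and the three pods it manages
kubectl get deployment nginx-deployment
kubectl get rs,pods -l app=nginx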
Getting Started with EKS: Declarative Cluster Creation
Prerequisites
- An AWS account with admin access
- Install the following:
  - AWS CLI (v2)
  - kubectl
  - eksctl
Need help creating an EC2 instance? Follow this guide: Deploying EC2 Instances with Shared EFS Storage
Step 1: Configure IAM
- Go to IAM > Users
- Attach the AdministratorAccess policy
- Create an access key for CLI use
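If you prefer the terminal over the console, a rough CLI equivalent looks like this. The user name eks-admin is only a placeholder; substitute your own IAM user:

# Attach the AdministratorAccess managed policy to an existing IAM user
aws iam attach-user-policy \
  --user-name eks-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create an access key pair for CLI use (store the secret securely)
aws iam create-access-key --user-name eks-admin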
Step 2: Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update
Then configure it:
aws configure
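A quick sanity check that the credentials were picked up correctly:

# Should print the account ID and ARN of the IAM user you just configured
aws sts get-caller-identity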
Step 3: Install kubectl
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
kubectl version --short --client
Step 4: Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/bin
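Verify the installation before moving on:

# Print the installed eksctl version
eksctl version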
Step 5: Provision an EKS Cluster
Let's now create a cluster using a proper instance type like t3.medium. Using t2.micro or similar very small instance types often leads to issues like insufficient CPU/memory, unstable networking, or failed pod scheduling.
To avoid those limitations, we recommend t3.medium, which offers more resources for workloads and better networking performance.
eksctl create cluster \
--name dev \
--region eu-north-1 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--managed
This takes about 10–15 minutes. It sets up the control plane, VPC, security groups, and auto-scaling worker nodes.
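Once eksctl finishes, you can confirm the cluster and its node group exist:

# List the new cluster and its managed node group in the region
eksctl get cluster --region eu-north-1
eksctl get nodegroup --cluster dev --region eu-north-1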
Step 6: Update kubeconfig
aws eks --region eu-north-1 update-kubeconfig --name dev
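To confirm kubectl is now pointed at the new cluster:

# The current context should reference the dev cluster, and both worker nodes should be Ready
kubectl config current-context
kubectl get nodes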
Deploy a Sample Application
Clone a sample GitHub repository
sudo yum install -y git
git clone https://github.com/ACloudGuru-Resources/Course_EKS-Basics
cd Course_EKS-Basics
Review the manifests
cat nginx-deployment.yaml
cat nginx-svc.yaml
Apply the service and deployment
kubectl apply -f nginx-svc.yaml
kubectl apply -f nginx-deployment.yaml
View status
kubectl get svc
kubectl get deployment
kubectl get pod
kubectl get rs
kubectl get node
Access via Load Balancer
curl "<LOAD_BALANCER_DNS_HOSTNAME>"
Replace <LOAD_BALANCER_DNS_HOSTNAME> with the actual DNS name from kubectl get svc, then paste it in your browser to view the Nginx welcome page.
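If you'd rather grab the hostname from the CLI, a jsonpath query like the one below works. The service name nginx-svc is an assumption based on the manifest file name; check kubectl get svc for the real name:

# Print only the load balancer's DNS hostname for the service
kubectl get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'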
Test High Availability
Stop one worker node in the EC2 console. Kubernetes will:
- Mark the node as NotReady
- Reschedule pods to another node
- Possibly launch a new node if within the scaling range
Check node and pod status:
kubectl get node
kubectl get pod
Wait a few minutes for the new node and pods to stabilize.
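To follow the failover as it happens rather than polling, you can stream changes and recent events:

# Watch the stopped node go NotReady and pods get rescheduled
kubectl get nodes --watch

# Recent cluster events (evictions, scheduling) are also useful here
kubectl get events --sort-by=.lastTimestamp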
Cleanup
When you're done:
eksctl delete cluster --name dev --region eu-north-1
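After deletion finishes, it's worth confirming nothing is left behind (and still billing you):

# Should no longer list the dev cluster
aws eks list-clusters --region eu-north-1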
Conclusion
By using a declarative approach with Kubernetes on EKS, you unlock the benefits of infrastructure as code, automated reconciliation, and easy scaling. The declarative model ensures your desired state is always maintained, offering strong foundations for modern, cloud-native application development.
Stay tuned for more hands-on Kubernetes content. Until next time!
