Kubernetes Deep Dive: Managing Resources, Scaling, and Best Practices

Piyush Kabra

Deleting & Recreating Resources

Managing Kubernetes resources efficiently requires a solid understanding of deleting and recreating them. Resources such as Pods, Services, and Deployments must often be deleted and reconfigured during troubleshooting or updates.

Deleting Pods, Services, and Deployments

To delete a pod:

kubectl delete pod <pod-name>

This command removes a specific pod from the cluster. However, if a deployment manages the pod, it will be recreated automatically to maintain the desired state.
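To observe this self-healing in action, you can watch the pod list while deleting a deployment-managed pod; a replacement with a new name should appear within seconds. (A quick sketch; press Ctrl+C to stop watching.)

# Stream pod changes in real time; a replacement pod appears after the deletion
kubectl get pods --watch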

To delete a service:

kubectl delete service <service-name>

This removes the service, stopping traffic routing to the associated pods.

To delete a deployment:

kubectl delete deployment <deployment-name>

Deleting a deployment removes all pods controlled by that deployment.
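If the resources were created from a manifest, you can also delete everything defined in that file in one step (a sketch; deployment.yaml is a placeholder for your own manifest):

# Delete every object defined in the manifest (Deployment, Service, etc.)
kubectl delete -f deployment.yaml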

Recreating Resources Using YAML Files

Instead of manually managing resources, Kubernetes allows defining resources using YAML files. YAML files provide a declarative way to manage resources, making them easier to replicate and version control.

Example YAML for a pod:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80

To apply the configuration:

kubectl apply -f example-pod.yaml

This will create or update the pod definition as per the YAML file.
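A quick way to confirm the result, using the example-pod name from the manifest above:

# Check that the pod exists and inspect its configuration and events
kubectl get pod example-pod
kubectl describe pod example-pod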

Kubernetes vs. Traditional Container Engines

Kubernetes provides full container orchestration, whereas a traditional container engine such as Docker on its own only manages the lifecycle of individual containers on a single host.

Differences between Kubernetes and Docker Swarm

Feature | Kubernetes | Docker Swarm
Scalability | High | Moderate
Load Balancing | Built-in | External tools required
Networking | Advanced CNI plugins | Simpler built-in networking
Auto-healing | Yes | Limited (basic task restarts)
Rolling Updates | Yes | Limited
Storage Options | Extensive | Basic

Advantages of Kubernetes over Traditional Engines

  • Auto-scaling: Kubernetes can automatically scale pods up or down based on demand.

  • Self-healing: It automatically restarts failed pods, ensuring service reliability.

  • Service discovery & load balancing: Kubernetes offers built-in DNS-based service discovery and load balancing.

  • Automated rollouts & rollbacks: Kubernetes ensures zero-downtime updates with controlled rollouts and rollbacks.

  • Declarative Configuration: YAML manifests allow version control and easy replication of configurations.

Replication Controllers & Desired State Maintenance

Ensuring Desired State with Replication Controllers

Replication controllers ensure that a specified number of pod replicas are running at all times. If a pod fails or is deleted, the replication controller automatically replaces it.

Example YAML:

apiVersion: v1
kind: ReplicationController
metadata:
  name: example-controller
spec:
  replicas: 3
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:latest

To apply:

kubectl apply -f replication-controller.yaml
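Once the controller is running, you can change the desired replica count and let it converge (a sketch reusing the example-controller name from the manifest above):

# Scale the replication controller to five replicas and verify
kubectl scale rc example-controller --replicas=5
kubectl get rc example-controller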

Replica Sets and Their Importance

ReplicaSets are the modern replacement for replication controllers and add more expressive, set-based label selectors (matchLabels and matchExpressions) for pod management.

Example YAML:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:latest

To create:

kubectl apply -f replicaset.yaml
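To see the desired-state behavior for yourself, delete one of the managed pods and watch the ReplicaSet replace it (a sketch; substitute a real pod name from the first command):

# List the pods managed by the ReplicaSet
kubectl get pods -l app=myapp

# Delete one of them; a replacement is created almost immediately
kubectl delete pod <one-of-the-pod-names>
kubectl get pods -l app=myapp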

Dynamic Scaling with Multiple Containers

Scaling Strategies with Multiple Containers

Kubernetes supports multiple scaling strategies:

  1. Horizontal Pod Autoscaler (HPA) - Adds or removes pods based on CPU or memory usage.

  2. Vertical Scaling - Adjusts resource limits on existing pods.

  3. Cluster Autoscaler - Adjusts the number of nodes in the cluster.

Example of HPA:

kubectl autoscale deployment myapp --cpu-percent=50 --min=2 --max=10
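The same autoscaler can also be defined declaratively. The sketch below mirrors the command above and assumes a Deployment named myapp and a running metrics-server; the HPA name myapp-hpa is illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp              # assumes an existing Deployment called myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50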

Managing Resource Allocation Efficiently

Specify CPU and memory requests and limits in pod definitions so the scheduler can place pods appropriately and no single workload can starve a node.

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

Labeling & Pod Identification for Better Management

Using Labels and Annotations

Labels are key/value pairs attached to objects; Kubernetes uses them with selectors to group, filter, and manage related objects.

metadata:
  labels:
    app: myapp

Annotations store additional, non-identifying metadata such as descriptions, build information, or tool configuration; unlike labels, they are not used for selection.

metadata:
  annotations:
    description: "This is a sample annotation"

Label Selectors for Resource Filtering

Use label selectors to filter resources:

kubectl get pods --selector app=myapp
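The shorthand -l flag also accepts multiple labels and set-based expressions (the environment label and otherapp value below are illustrative):

# All listed labels must match
kubectl get pods -l app=myapp,environment=dev

# Set-based selection: match any of the listed values
kubectl get pods -l 'app in (myapp, otherapp)'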

Using create vs. apply for Kubernetes Configuration Updates

Difference between kubectl create and kubectl apply

Command | Use Case
kubectl create | Imperatively creates a resource; fails if it already exists
kubectl apply | Declaratively creates or updates a resource, merging changes into the existing object
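A small sketch of the difference in practice (deployment.yaml is a placeholder for your own manifest):

kubectl create -f deployment.yaml   # first run: resource created
kubectl create -f deployment.yaml   # second run: fails with "AlreadyExists"
kubectl apply -f deployment.yaml    # safe to repeat: creates or patches as needed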

Best Practices for Updating Configurations

  • Use kubectl apply for existing resources to prevent unintended deletions.

  • Store YAML manifests in version control for easy rollbacks.

  • Always validate YAML before applying changes (see the dry-run example below).
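One way to validate a manifest before applying it is a dry run (a sketch using the earlier example-pod.yaml file):

# Client-side dry run: render the object without creating it
kubectl apply --dry-run=client -f example-pod.yaml

# Server-side dry run: the API server validates the request without persisting it
kubectl apply --dry-run=server -f example-pod.yaml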

Minikube Cluster & Status Checking

Setting Up and Running Minikube Locally

Install Minikube and start a cluster:

minikube start

To enable the dashboard:

minikube dashboard
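To check whether the local cluster components are up:

# Show the state of the Minikube host, kubelet, and API server
minikube status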

Checking Cluster Status and Verifying Deployments

Check cluster status:

kubectl cluster-info

Check running pods:

kubectl get pods -A
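To verify deployments specifically, list them and check their rollout state (the deployment name myapp below is illustrative):

# List deployments in all namespaces
kubectl get deployments -A

# Wait for a specific deployment to finish rolling out
kubectl rollout status deployment/myapp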

By mastering these concepts, you will be well-equipped to manage Kubernetes environments effectively and in line with best practices.


Here are two of the most frequently asked interview questions:

1️⃣ What is the difference between kubectl create and kubectl apply? When should you use each?

Answer:

  • kubectl create:

    • Used to create a new resource in Kubernetes.

    • It will fail if the resource already exists.

    • Example:

        kubectl create -f deployment.yaml
      
  • kubectl apply:

    • Used to create or update a resource declaratively.

    • It merges changes into an existing resource rather than replacing it.

    • Example:

        kubectl apply -f deployment.yaml
      
  • When to use each?

    • Use kubectl create when defining a new resource from scratch.

    • Use kubectl apply when making updates or applying changes to an existing resource.


2️⃣ How does Kubernetes ensure the high availability and self-healing of applications?

Answer:

Kubernetes ensures high availability and self-healing using the following mechanisms:

Replication Controllers & ReplicaSets:

  • Maintain a specified number of running pod replicas.

  • If a pod crashes, a new one is automatically created.

Auto-scaling (HPA & Cluster Autoscaler):

  • Horizontal Pod Autoscaler (HPA) adds or removes pods based on CPU/memory usage.

  • Cluster Autoscaler adjusts the number of worker nodes based on demand.

Self-healing Mechanism:

  • If a pod fails, Kubernetes automatically replaces it.

  • If a node goes down, Kubernetes reschedules the pods to other nodes.

Load Balancing & Service Discovery:

  • Kubernetes services distribute traffic across healthy pods to ensure availability.

