K8s Important Interview Questions and Answers | Day 37 of 90 Days of DevOps
Table of contents
- 1. What is Kubernetes?
- Why is Kubernetes Important?
- 2. What is the difference between Docker Swarm and Kubernetes?
- 3. How does Kubernetes handle network communication between containers?
- 4. How does Kubernetes handle scaling of applications?
- 5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
- 1. What is a Kubernetes Deployment?
- 2. How does it differ from a ReplicaSet?
- 6. Exploring Rolling Updates in Kubernetes
- 7. How does Kubernetes handle network security and access control?
- 8. Deploying Highly Available Applications in Kubernetes
- 9. What is a namespace in Kubernetes? Which namespace does a pod use if we don’t specify one?
- 10. How does ingress help in Kubernetes?
- 11. Explain different types of services in Kubernetes
- 12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
- 13. Storage Management for Containers in Kubernetes
- 14. How does the NodePort service work?
- 15. Multi-Node vs. Single-Node Cluster in Kubernetes
- 16. Create vs. Apply in Kubernetes
- Conclusion
1. What is Kubernetes?
Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Originally developed by Google, Kubernetes has gained widespread adoption in the industry due to its robust and flexible nature.
Key Properties of Kubernetes:
Container Orchestration:
Kubernetes excels at orchestrating containers: encapsulated, lightweight, and portable units that package applications and their dependencies. It provides a unified platform for automating the deployment, scaling, and operation of application containers.
Cluster Management:
K8s operates on a cluster-based architecture, where a cluster comprises a set of nodes, each running containerized applications. The master node manages the overall state and configuration of the cluster, ensuring seamless communication and resource allocation among the worker nodes.
Declarative Configuration:
Kubernetes allows users to define the desired state of their applications through declarative configuration files. This eliminates the need for manual intervention, as Kubernetes continuously works to align the current state of the application with the declared state.
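For example, a minimal manifest (names here are illustrative) declares the desired state, and kubectl apply asks Kubernetes to reconcile the cluster toward it:
# pod.yaml -- a minimal declarative manifest with illustrative names
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.25
      ports:
        - containerPort: 80
Running kubectl apply -f pod.yaml creates or updates the pod so that the live state matches this declaration.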
Service Discovery and Load Balancing:
K8s includes built-in mechanisms for service discovery and load balancing. Services can be exposed within the cluster or externally, and load balancing ensures even distribution of traffic among the available instances.
Scaling and Self-healing:
One of the standout features of Kubernetes is its ability to scale applications dynamically based on demand. It can automatically adjust the number of running containers to handle increased traffic. Additionally, Kubernetes supports self-healing by replacing failed containers or rescheduling them to healthy nodes.
Why is Kubernetes Important?
Portability and Consistency:
Kubernetes abstracts away the underlying infrastructure, providing a consistent environment for applications across different environments. This portability allows developers to build once and deploy anywhere, whether on-premises, in the cloud, or in hybrid setups.
Efficient Resource Utilization:
By automating the distribution of workloads across clusters, Kubernetes optimizes resource utilization. This results in cost savings and improved performance as applications dynamically scale based on demand.
Enhanced Developer Productivity:
Kubernetes simplifies the deployment and management of applications, enabling developers to focus more on writing code and less on the intricacies of infrastructure. This accelerates the development lifecycle and promotes a more efficient and collaborative workflow.
Scalability and Flexibility:
Whether you're a startup with a handful of containers or an enterprise managing a vast microservices architecture, Kubernetes scales effortlessly. Its flexible design accommodates diverse workloads, making it an ideal solution for organizations of all sizes.
Community and Ecosystem:
Kubernetes boasts a vibrant and thriving community, contributing to its continual improvement and evolution. The extensive ecosystem of tools and extensions built around Kubernetes further enhances its capabilities, making it a powerful and future-proof choice for container orchestration.
2. What is the difference between Docker Swarm and Kubernetes?
1. Architecture and Design:
Docker Swarm:
Docker Swarm is the native clustering and orchestration solution provided by Docker. It follows a simpler architecture compared to Kubernetes, making it easy to set up and manage. Docker Swarm operates using a manager-worker model, where managers control the overall state of the swarm and workers execute tasks.
Example:
# Initialize a Docker Swarm
docker swarm init
# Add a worker node
docker swarm join --token <worker-token> <manager-ip>
# Deploy a service
docker service create --name my-web-app -p 8080:80 my-web-app-image
Kubernetes:
Kubernetes, often abbreviated as K8s, is a more robust and feature-rich container orchestration platform. It follows a master-node architecture where the master node manages the cluster and nodes execute tasks. Kubernetes uses a declarative approach, allowing users to define the desired state, and it automatically works towards achieving and maintaining that state.
Example:
# Define a simple deployment in a Kubernetes manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web-app-container
          image: my-web-app-image
          ports:
            - containerPort: 80
2. Scaling and Load Balancing:
Docker Swarm:
Docker Swarm provides straightforward scaling options with built-in load balancing. It supports both manual and automatic scaling based on the desired number of replicas. Load balancing is achieved through the swarm manager, which distributes incoming requests across the available nodes.
Example:
# Scale a service in Docker Swarm
docker service scale my-web-app=5
Kubernetes:
Kubernetes excels in automatic scaling and advanced load balancing. It allows users to define Horizontal Pod Autoscalers (HPA) that automatically adjust the number of pod replicas based on resource utilization or custom metrics. Kubernetes also integrates with various load balancing solutions, offering flexibility and customization.
Example:
# Define an HPA in a Kubernetes manifest
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
3. Ecosystem and Community Support:
Docker Swarm:
Docker Swarm benefits from seamless integration with the Docker ecosystem. It is well-suited for users already familiar with Docker tools. However, its community support is not as extensive as that of Kubernetes.
Kubernetes:
Kubernetes boasts a large and active community, making it a go-to choice for enterprises and organizations seeking comprehensive support. It has a rich ecosystem with a vast array of tools and plugins, facilitating seamless integration with various services.
3. How does Kubernetes handle network communication between containers?
Understanding Kubernetes Networking Basics
Kubernetes orchestrates containers within clusters, and efficient communication is fundamental for the proper functioning of applications. Kubernetes employs a flat, virtual network where each pod (the smallest deployable unit in Kubernetes) has a unique IP address. This flat networking model facilitates easy communication between containers across different nodes within the cluster.
Pod-to-Pod Communication
1. IP Address Assignment:
When a pod is scheduled on a node, it is assigned a unique IP address within the cluster. This IP is used for intra-cluster communication.
2. Pod-to-Pod Communication within Nodes:
Containers within the same pod communicate using the localhost interface. This local communication is fast and efficient, as it avoids the overhead of network routing (a minimal example follows this list).
3. Pod-to-Pod Communication across Nodes:
To enable communication between pods on different nodes, Kubernetes utilizes the Container Network Interface (CNI). CNI plugins configure the network interfaces of containers, allowing them to communicate seamlessly, even if they reside on different physical machines.
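As a quick sketch of the localhost communication described in point 2 (container names and images are illustrative), two containers in one pod share the pod's network namespace:
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
    - name: web
      image: nginx:1.25        # listens on port 80
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      # from this container, the web server is reachable at http://localhost:80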
Services for Service Discovery and Load Balancing
4. Kubernetes Services:
Kubernetes abstracts pod IP addresses behind a Service. Services act as stable endpoints, providing a consistent way to access pods. They facilitate service discovery and load balancing by distributing incoming traffic across multiple pods.
5. Service Types:
Kubernetes supports various service types, including ClusterIP, NodePort, and LoadBalancer, each serving specific use cases. ClusterIP allows internal communication, NodePort exposes services on a specific port on each node, and LoadBalancer provides external access through cloud provider load balancers.
Ingress for External Access
6. Ingress Controllers:
For external access to services, Kubernetes employs Ingress controllers. These controllers manage external access rules, such as routing based on URL paths or domain names, and provide SSL termination for secure communication.
7. Ingress Resources:
DevOps engineers define Ingress resources to configure external access rules. These resources act as a powerful tool to control how external traffic is directed to services within the cluster.
Network Policies for Security
8. Network Policies:
Security is a top priority in Kubernetes networking. Network Policies allow DevOps teams to define rules governing pod-to-pod communication, restricting or allowing traffic based on specific criteria such as labels, namespaces, and ports (a minimal example appears at the end of this section).
9. Role of Network Plugins:
Kubernetes supports a variety of network plugins, including Calico, Flannel, and Weave. These plugins implement network policies and handle the routing of traffic between pods, contributing to the overall security and reliability of the Kubernetes network.
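As a sketch of the Network Policies described in point 8 (labels and the port are illustrative), the following policy admits ingress traffic to backend pods only from pods labeled app: frontend:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend            # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080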
4. How does Kubernetes handle scaling of applications?
Understanding Kubernetes Scaling
Scaling in the context of Kubernetes refers to the ability to dynamically adjust the number of running instances (pods) of a particular application based on demand. Kubernetes provides two main types of scaling: horizontal scaling (replica scaling) and vertical scaling (resource scaling).
1. Horizontal Scaling
Horizontal scaling involves adding or removing identical instances of an application to distribute the load and enhance performance. Kubernetes achieves horizontal scaling through the use of ReplicaSets and Deployments.
ReplicaSets
ReplicaSets are controllers in Kubernetes responsible for ensuring that a specified number of identical pods are running at all times. By defining the desired number of replicas in a ReplicaSet, Kubernetes automatically adjusts the number of running pods to match the desired state.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: myapp-image:latest
In the above example, the ReplicaSet ensures that there are always three instances of the "myapp" pod running.
Deployments
Deployments are higher-level abstractions that manage ReplicaSets, providing additional features such as rolling updates and rollback capabilities. They make it easier to declaratively manage the desired state of your application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: myapp-image:latest
2. Vertical Scaling and Autoscaling
While horizontal scaling adds more instances of an application, vertical scaling adjusts the resources (CPU and memory) allocated to individual instances, which Kubernetes handles through the Vertical Pod Autoscaler (VPA). Kubernetes also automates scaling decisions with the Horizontal Pod Autoscaler and the Cluster Autoscaler, covered below.
Horizontal Pod Autoscaler
The Horizontal Pod Autoscaler (HPA) dynamically adjusts the number of pods in a Deployment or ReplicaSet based on observed CPU or memory utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
In this example, the HorizontalPodAutoscaler ensures that the number of pods in the "myapp-deployment" scales between 2 and 5 based on CPU utilization, targeting an average utilization of 80%.
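A roughly equivalent autoscaler can also be created imperatively:
# Creates an HPA targeting the deployment's CPU utilization
kubectl autoscale deployment myapp-deployment --min=2 --max=5 --cpu-percent=80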
Cluster Autoscaling
Cluster Autoscaling adjusts the number of nodes in a cluster based on resource demands, ensuring there are enough resources available to schedule the running pods. Unlike the HPA, the Cluster Autoscaler is not a built-in Kubernetes API object; it runs as a separate component (typically deployed per cloud provider) and is configured through command-line flags. For example, the following flag (a sketch of one real option; exact deployment details vary by provider) allows a node to be considered for removal when its utilization drops below 50%:
# Flag passed to the cluster-autoscaler binary
--scale-down-utilization-threshold=0.5
5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
1. What is a Kubernetes Deployment?
At its core, a Kubernetes Deployment is a resource object in Kubernetes that provides declarative updates to applications. It allows you to describe the desired state for your application, and Kubernetes takes care of implementing and maintaining that state. Deployments are a higher-level abstraction built on top of ReplicaSets, providing additional features for managing application updates and rollbacks.
2. How does it differ from a ReplicaSet?
a. Purpose and Abstraction:
A Deployment is a higher-level abstraction whose purpose is managing the full lifecycle of an application: updates, rollbacks, and scaling. A ReplicaSet, on the other hand, is a lower-level controller that ensures a specified number of replicas of a pod are running at all times. While it's effective at maintaining a desired number of pod instances, it lacks the advanced deployment strategies that Deployments offer.
b. Updates and Rollbacks:
One significant distinction lies in how Deployments handle updates and rollbacks. Deployments enable you to perform rolling updates to your application, ensuring zero downtime by gradually replacing old pods with new ones. If an update goes awry, Deployments allow easy rollbacks to a previous, stable version (see the rollout commands after this list).
c. Versioning and Replication:
Deployments introduce the concept of versioning, allowing you to easily manage and track different versions of your application. This versioning is crucial when rolling back to a previous state. ReplicaSets, while handling replication well, lack the versioning and update strategies inherent in Deployments.
d. Self-Healing:
Both Deployments and ReplicaSets contribute to the self-healing nature of Kubernetes by ensuring the desired number of pod replicas are always running. However, Deployments take this a step further by offering more sophisticated health checks and strategies for managing unhealthy pods during updates.
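To make the update and rollback workflow concrete, these are the standard rollout commands a Deployment enables (shown against the hypothetical deployment name my-web-app):
# Watch a rolling update progress
kubectl rollout status deployment/my-web-app
# Inspect the revision history
kubectl rollout history deployment/my-web-app
# Roll back to the previous revision
kubectl rollout undo deployment/my-web-app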
6. Exploring Rolling Updates in Kubernetes
Rolling updates are a key feature in Kubernetes that allows seamless updates without downtime. During a rolling update, new pods are gradually rolled out while the old ones are phased out. This ensures continuous availability and avoids service interruptions. Kubernetes achieves this by gradually replacing instances, monitoring health, and automatically adjusting based on specified criteria.
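A rolling update is typically triggered by changing the pod template, most commonly its image (deployment and container names here are illustrative):
# Changing the image triggers a rolling update managed by the Deployment controller
kubectl set image deployment/my-web-app web-app-container=my-web-app-image:v2
kubectl rollout status deployment/my-web-app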
7. How does Kubernetes handle network security and access control?
Kubernetes implements network security through various mechanisms, including Network Policies. These policies define how pods communicate with each other and external entities. Additionally, Kubernetes employs Role-Based Access Control (RBAC) to regulate user access, ensuring only authorized actions are performed.
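On the RBAC side, a minimal sketch (namespace, role, and user names are illustrative) granting read-only access to pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]           # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io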
8. Deploying Highly Available Applications in Kubernetes
To deploy a highly available application in Kubernetes, one can utilize features like ReplicaSets, Deployments, and StatefulSets. These components ensure that a specified number of pod replicas are always running, providing resilience against failures and enabling continuous availability.
Deploying a Highly Available Web Application
Step 1: Create a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3 # The desired number of replicas of the application.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2 # Up to 2 pods can be unavailable during the update process.
      maxSurge: 3 # Up to 3 pods above the desired count can be created during the update process.
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      serviceAccountName: my-app-sa # The service account the pod uses to access the Kubernetes API server.
      containers:
        - name: webapp-container
          image: your-webapp-image:latest # The image you want to deploy.
          ports:
            - containerPort: 80
          livenessProbe: # A health check for the container using an HTTP request.
            httpGet:
              path: /healthz
              port: 80 # Matches the containerPort above.
            initialDelaySeconds: 10 # How long to wait before the first probe.
            periodSeconds: 10 # How often to perform the probe.
            failureThreshold: 3 # How many failures to tolerate before restarting the container.
          readinessProbe: # A readiness check for the container using an HTTP request.
            httpGet:
              path: /readyz
              port: 80 # Matches the containerPort above.
            initialDelaySeconds: 10 # How long to wait before the first probe.
            periodSeconds: 10 # How often to perform the probe.
            successThreshold: 2 # How many successes required before marking the container ready.
Explanation:
This YAML defines a Deployment named webapp-deployment with three replicas. The selector ensures that pods with the label app: webapp are managed by this deployment. The pod template specifies a container with your web application image running on port 80, and the strategy block controls the rolling-update behavior.
Step 2: Expose the Deployment with a Service
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp # This matches the label of the pods that are part of the service.
  ports:
    - protocol: TCP
      port: 80 # This is the port that the service will expose externally.
      targetPort: 80 # This is the port that the pods will listen on internally.
  type: LoadBalancer # The service will be exposed externally using a cloud provider's load balancer.
Explanation:
This YAML defines a Service named webapp-service that selects pods with the label app: webapp. It exposes port 80 on the service, forwarding traffic to the pods' port 80. The service type is set to LoadBalancer to distribute external traffic among the pod replicas.
Step 3: Apply the Configurations
kubectl apply -f webapp-deployment.yaml
kubectl apply -f webapp-service.yaml
Apply the configurations using the kubectl apply command.
Step 4: Verify Deployment
kubectl get pods
kubectl get services
Check that the pods are running and the service has an external IP assigned.
Now, your highly available web application is deployed in Kubernetes. The deployment ensures that there are always three replicas of the web application running, and the service provides external access, distributing traffic among the replicas for high availability. If one pod fails, the Deployment controller automatically replaces it, maintaining the desired replica count.
9. What is a namespace in Kubernetes? Which namespace does a pod use if we don’t specify one?
In Kubernetes, a namespace is a virtual cluster that provides a scope for resources. If a pod is not assigned to a specific namespace, it is placed in the default namespace. Namespaces help in organizing and isolating resources within a cluster.
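A few commands illustrate the default-namespace behavior (the team-a name is illustrative):
# Pods created without -n land in the "default" namespace
kubectl run web --image=nginx
kubectl get pods -n default
# Creating and using an explicit namespace
kubectl create namespace team-a
kubectl run web --image=nginx -n team-a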
10. How does ingress help in Kubernetes?
Ingress in Kubernetes acts as an API object that manages external access to services. It allows the definition of rules for routing external traffic to services, facilitating the implementation of complex routing scenarios, SSL termination, and load balancing.
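A minimal Ingress sketch (the hostname and service name are illustrative) routing a host to a backend service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
    - host: webapp.example.com   # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
Note that an Ingress controller (such as ingress-nginx) must be running in the cluster for this resource to take effect.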
11. Explain different types of services in Kubernetes
In Kubernetes, services play a crucial role in enabling communication and networking between different parts of your application, providing a stable endpoint for accessing your application components. There are several types of services in Kubernetes, each serving specific purposes. Let's explore the different types:
ClusterIP:
Purpose: Exposes the service on a cluster-internal IP.
Use Case: Typically used for communication between different microservices within the cluster.
NodePort:
Purpose: Exposes the service on each node's IP at a static port.
Use Case: Allows external traffic to reach the service directly through the specified port on each node. Suitable for development and testing.
LoadBalancer:
Purpose: Exposes the service externally using a cloud provider's load balancer.
Use Case: Ideal for scenarios where you need to distribute external traffic among multiple nodes running the service.
ExternalName:
Purpose: Maps the service to the contents of the externalName field.
Use Case: Used for giving a service a DNS name, redirecting requests to the specified external name (see the sketch at the end of this section).
Now, let's explore how these services function:
ClusterIP:
Internal Cluster Communication: Services of type ClusterIP are accessible only within the cluster.
Service Discovery: Enables easy discovery of services by other pods within the same namespace.
NodePort:
External Access: Makes the service accessible externally by binding the service to each node's IP at a specific port.
Node IP + Port: External clients can reach the service using any node's IP and the specified NodePort.
LoadBalancer:
Automatic Load Balancing: Utilizes the cloud provider's load balancer to distribute external traffic among the service instances.
External IP: Assigns a public IP to the service, making it accessible from outside the cluster.
ExternalName:
Mapping to External Service: Redirects requests for the service to the specified external name.
Use with DNS: Enables using an external service by giving it a DNS name in the cluster.
Choosing the right service type depends on your application's architecture and requirements. For internal communication within the cluster, ClusterIP is often sufficient. NodePort and LoadBalancer are suitable for scenarios where external access is necessary, with LoadBalancer being more feature-rich but potentially involving additional costs. ExternalName is useful when you want to reference an external service by a DNS name within the Kubernetes cluster.
Understanding these different service types is essential for designing and managing the networking aspects of your applications in a Kubernetes environment.
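As one concrete illustration, an ExternalName service (names are illustrative) simply returns a DNS CNAME pointing at an outside host:
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # in-cluster lookups of external-db resolve here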
12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
Key Components of Self-Healing in Kubernetes:
Liveness Probes:
Liveness probes are used to determine if a container is running as expected.
If a container fails a liveness probe, Kubernetes considers it unhealthy and attempts to restart the container.
Example:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
Readiness Probes:
Readiness probes indicate whether a container is ready to serve traffic.
If a container fails a readiness probe, it is temporarily removed from service until it passes the probe.
Example:
readinessProbe:
  httpGet:
    path: /readiness
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
Replication Controllers and ReplicaSets:
These components ensure that a specified number of pod replicas are maintained.
If a pod fails or is terminated, the replication controller or replica set automatically creates new pods to replace them.
Example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app-container
          image: example-image
Deployments:
Deployments build on replica sets and provide declarative updates to applications.
If a new version of an application is deployed, the deployment controller automatically manages the rollout process, ensuring zero-downtime updates.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app-container
          image: example-image:v2
Example Scenario:
Consider an application running in a Kubernetes cluster with deployment and liveness probes. If a pod becomes unresponsive due to a software issue or resource constraints, the liveness probe detects the problem. Kubernetes then automatically terminates the unhealthy pod and starts a new one. The deployment controller ensures that the desired number of replicas is maintained, providing continuous availability.
In this way, Kubernetes mitigates potential issues, improves application reliability, and reduces the need for manual intervention, embodying the concept of self-healing in containerized environments.
13. Storage Management for Containers in Kubernetes
Kubernetes provides a robust framework for managing storage for containers through the concepts of Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). This storage management system ensures that data persists across pod restarts and rescheduling.
Here's a breakdown of how Kubernetes handles storage management for containers (a minimal PVC-and-pod sketch follows at the end of this section):
1. Persistent Volumes (PVs):
Abstraction of Physical Storage: Persistent Volumes abstract the underlying physical storage, providing a uniform interface for the storage backend. This abstraction allows administrators to manage storage resources independently of the application.
Storage Classes: Storage Classes define different classes of storage with varying performance characteristics and provisions. Users can request storage with specific classes based on their application requirements.
Dynamic Provisioning: Kubernetes supports dynamic provisioning, allowing storage volumes to be automatically created when a PVC is created. This dynamic provisioning simplifies storage management, as administrators don't need to pre-provision volumes for every PVC.
2. Persistent Volume Claims (PVCs):
Requesting Storage: Developers request storage resources for their applications by creating Persistent Volume Claims. PVCs act as a request for a specific amount and class of storage.
Binding to Persistent Volumes: When a PVC is created, it is dynamically bound to an available PV that satisfies the criteria specified in the PVC (e.g., storage class, access mode, capacity). This binding ensures that the application gets the required storage.
3. Volume Mounts in Pods:
Mounting Volumes: Containers within pods can use volumes by mounting them. The volumes are sourced from the PVCs, which, in turn, are bound to the PVs. This allows data to persist across pod restarts and ensures that the application can access the required storage.
Read/Write Modes: Kubernetes supports different access modes for volumes, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany. These modes determine whether the volume can be mounted by a single or multiple pods simultaneously.
4. Dynamic Provisioning and Storage Classes:
Automatic Volume Creation: When a PVC is created with a specified storage class and the cluster has dynamic provisioning configured for that class, Kubernetes automatically creates a new PV to satisfy the PVC.
Storage Class Parameters: Storage classes can have parameters, such as disk type, I/O priority, or any other storage-specific configuration. These parameters help define the characteristics of the storage provisioned dynamically.
5. Storage Plugins:
- Extensibility: Kubernetes supports various storage plugins that allow integration with different storage providers. These plugins enable seamless integration with cloud storage solutions, network-attached storage (NAS), or other storage systems.
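Tying these pieces together, a minimal sketch (storage class, sizes, and names are illustrative) of a PVC and a pod that mounts it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard     # illustrative storage class
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc      # binds the pod to the claim above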
14. How does the NodePort service work?
NodePort is a service type in Kubernetes that exposes a service on a static port on every node in the cluster. External traffic sent to any node's IP at that port is forwarded to the service's pods, providing simple external access.
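A minimal NodePort sketch (names and the port are illustrative); the nodePort must fall within the cluster's NodePort range, 30000-32767 by default:
apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 80         # cluster-internal service port
      targetPort: 80   # the container's port
      nodePort: 30080  # static port opened on every node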
15. Multi-Node vs. Single-Node Cluster in Kubernetes
A multi-node cluster involves multiple nodes, distributing workloads for scalability and reliability. In contrast, a single-node cluster runs on a single machine, suitable for development and testing but lacking the scalability and fault tolerance of a multi-node setup.
16. Create vs. Apply in Kubernetes
The kubectl create command creates a new resource and fails if it already exists, while kubectl apply creates the resource or updates it to match the supplied manifest. 'Apply' is preferable for managing resources declaratively, since repeated runs converge the live state toward the configuration file without risking unintended modifications.
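In practice (the file name is illustrative):
# Fails if the resource already exists
kubectl create -f deployment.yaml
# Creates the resource, or patches the live object to match the file if it exists
kubectl apply -f deployment.yaml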
Conclusion
Mastering Kubernetes as a DevOps engineer involves understanding these fundamental concepts. Whether it's managing updates, ensuring security, deploying highly available applications, or handling storage, a solid grasp of Kubernetes principles is essential for orchestrating containerized workloads effectively.