The Ultimate Guide to Kubernetes Workloads: Streamlining Your Application Deployment
A Step-by-Step Approach to Deploying and Managing Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs in Kubernetes
Kubernetes workloads describe how a containerized application should run. Workload resources manage the deployment, scaling, and updating of containerized applications in a distributed environment. Kubernetes supports several types of workloads, including Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs.
Deployments:
Deployments are the most common workload type in Kubernetes. They are used to manage the deployment and scaling of a set of identical pods. A pod is the smallest deployable unit in Kubernetes and can contain one or more containers. Deployments allow you to specify the number of replicas you want to run and the desired state of your application. Kubernetes will then manage the deployment and scaling of your pods to ensure that the desired state is met.
To deploy a Deployment in Kubernetes, you need to create a Deployment manifest file in YAML format that defines the desired state of the application. This manifest file should include the following information:
The container image to use
The number of replicas to deploy
The container ports to expose
The deployment strategy (rolling update or recreate)
Any environment variables or configuration files needed by the application
Here is a sample manifest file, deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: my-app-image:latest
          ports:
            - containerPort: 80
          env:
            - name: MY_APP_CONFIG
              value: my-config-value
In this manifest, we define a Deployment for the my-app application with a RollingUpdate strategy. The RollingUpdate strategy keeps the update process smooth and controlled by gradually replacing the replicas of the Deployment. The maxUnavailable parameter specifies the maximum number of replicas that can be unavailable during the update, while the maxSurge parameter defines the maximum number of additional replicas that can be created during the update.
The replicas parameter is set to 3, which means we want three replicas of the application running at all times.
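If you later need a different number of replicas, you can either edit the manifest and re-apply it, or scale the Deployment directly, for example:

kubectl scale deployment my-app --replicas=5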
The selector parameter is used to select the pods that belong to this Deployment. In this case, we match the app label with the value my-app.
In the template section, we define the pod template for the Deployment. It includes a container named my-app-container that runs the my-app-image:latest image and exposes port 80. We also set the environment variable MY_APP_CONFIG to my-config-value in this container.
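Hard-coding the value works for simple cases; configuration is often pulled from a ConfigMap instead. A minimal sketch of the env entry, assuming a ConfigMap named my-app-config with a key config-value exists (both names are hypothetical):

      env:
        - name: MY_APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: my-app-config   # hypothetical ConfigMap name
              key: config-value     # hypothetical key inside that ConfigMap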
Once you have created the manifest file, you can deploy the application by running the following command:
kubectl apply -f deployment.yaml
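You can then watch the rollout and confirm that all three replicas are running:

kubectl rollout status deployment/my-app
kubectl get pods -l app=my-app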
StatefulSets:
StatefulSets are used to manage the deployment and scaling of stateful applications. Stateful applications are those that require persistent storage and have a specific order in which they need to be deployed or scaled. StatefulSets ensure that pods are created in a specific order and that each pod has a unique hostname and persistent storage. StatefulSets are commonly used for databases and other stateful applications.
To deploy a StatefulSet in Kubernetes, you need to create a StatefulSet manifest file in YAML format that defines the desired state of the application. This manifest file should include the following information:
The container image to use
The number of replicas to deploy
The container ports to expose
The update strategy (rolling update or on-delete)
Any environment variables or configuration files needed by the application
Any persistent volume claims needed by the application
Here is a sample manifest file, statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
  labels:
    app: my-app
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image:latest
          ports:
            - containerPort: 80
          env:
            - name: MY_CONFIG
              value: my-config-value
      terminationGracePeriodSeconds: 30
  volumeClaimTemplates:
    - metadata:
        name: my-pvc
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
In this manifest, we define a StatefulSet for the my-app application. The main difference between a Deployment and a StatefulSet is that a StatefulSet is used for stateful applications, such as databases or other applications that require unique network identifiers and stable persistent storage.
In the spec section, we set the serviceName parameter to my-service. This parameter points the StatefulSet at a headless Service (which you create separately) so that each pod in the StatefulSet gets a stable hostname and network identity.
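That Service is not shown in the manifests above, so here is a minimal sketch of what a matching headless Service (clusterIP set to None) could look like; the port simply mirrors the container port used above:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None      # headless: no virtual IP, DNS resolves to the individual pods
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80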
The replicas parameter is set to 3, which means we want three replicas of the application running at all times.
In the template section, we define the pod template for the StatefulSet. It includes a container named my-container that runs the my-image:latest image and exposes port 80. We also set the environment variable MY_CONFIG to my-config-value in this container. The terminationGracePeriodSeconds parameter specifies how many seconds Kubernetes waits for a pod to shut down gracefully before forcibly terminating it.
The volumeClaimTemplates section is used to define a PersistentVolumeClaim (PVC) for each pod in the StatefulSet. In this example, we create a PVC named my-pvc with a storage request of 1Gi and the access mode set to ReadWriteOnce.
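Note that the sample above only declares the claim; for the container to actually use the storage, it also needs a volumeMounts entry in the pod template that references the claim by name. A minimal sketch (the mount path /data is a hypothetical choice):

      containers:
        - name: my-container
          image: my-image:latest
          volumeMounts:
            - name: my-pvc       # matches the volumeClaimTemplate name
              mountPath: /data   # hypothetical path where the container stores its data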
Once you have created the manifest file, you can deploy the application by running the following command:
kubectl apply -f statefulset.yaml
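Because this is a StatefulSet, the pods come up in order with predictable names (my-statefulset-0, my-statefulset-1, my-statefulset-2), and each one gets its own PVC. You can verify both with:

kubectl get pods -l app=my-app
kubectl get pvc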
DaemonSets:
DaemonSets are used to manage the deployment of pods that need to run on every node in a Kubernetes cluster. DaemonSets are commonly used for monitoring agents, logging agents, and other infrastructure-related tasks. With DaemonSets, you can ensure that a specific pod is running on every node in your cluster.
To deploy a DaemonSet in Kubernetes, you need to create a DaemonSet manifest file in YAML format that defines the desired state of the application. This manifest file should include the following information:
The container image to use
The container ports to expose
The resources needed by the application
Here is a sample manifest file, daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image:latest
          ports:
            - containerPort: 80
          env:
            - name: MY_CONFIG
              value: my-config-value
In this manifest, we define a DaemonSet for the my-app application. A DaemonSet ensures that a copy of a pod is running on each node in the cluster.
The selector parameter tells the DaemonSet which pods it manages; in this case, we match the app label with the value my-app. It does not choose nodes: to restrict the DaemonSet to a subset of nodes, you would add a nodeSelector or node affinity to the pod template.
In the template section, we define the pod template for the DaemonSet. It includes a container named my-container that runs the my-image:latest image and exposes port 80. We also set the environment variable MY_CONFIG to my-config-value in this container.
The main difference between a DaemonSet and a Deployment or StatefulSet is that a DaemonSet ensures that a copy of a pod is running on each node in the cluster. This is useful when you need to run a daemon process, such as a monitoring agent or logging agent, on every node in the cluster.
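One practical caveat: control-plane nodes are usually tainted, so by default DaemonSet pods are not scheduled there. If the agent also needs to run on those nodes, a toleration along these lines can be added to the pod template (a sketch; the key shown is the standard control-plane taint):

    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule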
Once you have created the manifest file, you can deploy the application by running the following command:
kubectl apply -f daemonset.yaml
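You can then confirm that one pod is running per node:

kubectl get daemonset my-daemonset
kubectl get pods -l app=my-app -o wide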
Jobs:
Jobs are used to manage batch processes in Kubernetes. Jobs allow you to run a containerized application to completion and then terminate the pod. Jobs are commonly used for data processing, backups, and other batch tasks.
To deploy a Job in Kubernetes, you need to create a Job manifest file in YAML format that defines the desired state of the application. This manifest file should include the following information:
The container image to use
The command or script to run
The resources needed by the application
Here is a sample manifest file, job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  labels:
    app: my-app
spec:
  template:
    metadata:
      name: my-job
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image:latest
          command: ["echo", "Hello, World!"]
      restartPolicy: Never
In this manifest, we define a Job for the my-app application. A Job creates one or more pods and ensures that the specified task is completed successfully.
The template section defines the pod template for the Job. It includes a container named my-container that runs the my-image:latest image and executes the echo "Hello, World!" command. When the command completes successfully, the Job completes successfully as well.
The main difference between a Job and a Deployment, StatefulSet, or DaemonSet is that a Job runs one or more pods to completion, whereas the other workload types ensure that a certain number of pods are always running.
The restartPolicy parameter is set to Never, which means the container is not restarted if it fails or exits. This is because Jobs are designed to run to completion rather than run continuously like the other workload types.
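For real batch work you often want a Job to run more than one pod; the completions, parallelism, and backoffLimit fields control that. A sketch building on the sample above (the name and values are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-parallel-job        # hypothetical name
spec:
  completions: 5               # the Job needs 5 successful pod completions
  parallelism: 2               # run at most 2 pods at the same time
  backoffLimit: 4              # retry failed pods up to 4 times before marking the Job failed
  template:
    spec:
      containers:
        - name: my-container
          image: my-image:latest
          command: ["echo", "Hello, World!"]
      restartPolicy: Never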
Once you have created the manifest file, you can deploy the application by running the following command:
kubectl apply -f job.yaml
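You can watch the Job finish and read its output with:

kubectl get jobs
kubectl wait --for=condition=complete job/my-job
kubectl logs job/my-job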
CronJobs:
CronJobs are used to manage scheduled tasks in Kubernetes. CronJobs allow you to specify a schedule for running a containerized application. CronJobs are commonly used for backups, data processing, and other tasks that need to run on a regular schedule.
To deploy a CronJob in Kubernetes, you need to create a CronJob manifest file in YAML format that defines the desired state of the application. This manifest file should include the following information:
The container image to use
The command or script to run
The schedule for running the application
The resources needed by the application
Here is a sample manifest file, cronjob.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
  labels:
    app: my-app
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-container
              image: my-image:latest
              command: ["echo", "Hello, World!"]
          restartPolicy: OnFailure
In this manifest, we define a CronJob for the my-app application. A CronJob creates Jobs on a specified schedule.
The schedule parameter is set to "*/5 * * * *", which means the job runs every five minutes. The schedule uses standard cron syntax: five fields for minute, hour, day of month, month, and day of week.
The jobTemplate section defines the Job template for the CronJob. It includes a pod template with a container named my-container that runs the my-image:latest image and executes the echo "Hello, World!" command.
The main difference between a CronJob and a Job is that a CronJob runs one or more jobs on a specified schedule, whereas a Job runs one or more pods to completion.
The restartPolicy parameter is set to OnFailure, which means the container is restarted if it fails, i.e. exits with a non-zero exit code.
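A few other spec fields are worth knowing about, even though they are not in the sample above: concurrencyPolicy controls whether a new run may start while the previous one is still running, and the history limits control how many finished Jobs are kept. A sketch with illustrative values:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid          # skip a run if the previous Job is still running
  successfulJobsHistoryLimit: 3      # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1          # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-container
              image: my-image:latest
              command: ["echo", "Hello, World!"]
          restartPolicy: OnFailure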
Once you have created the manifest file, you can deploy the application by running the following command:
kubectl apply -f cronjob.yaml
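You can check the schedule and watch the Jobs it creates with:

kubectl get cronjob my-cronjob
kubectl get jobs --watch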
Conclusion:
Kubernetes workloads provide a powerful platform for managing and deploying containerized applications in a distributed environment. By understanding the different types of workloads available in Kubernetes, you can choose the right tool for your application's specific needs. Whether you are deploying stateful applications, running batch jobs, or scheduling tasks, Kubernetes workloads offer the flexibility and scalability required to manage and deploy containerized applications with ease.
Written by
Amol Ovhal
I'm Amol, a DevOps Engineer who enjoys automation, continuous integration, and deployment. With extensive hands-on experience in DevOps and cloud computing, I am proficient in various tools and technologies related to infrastructure automation, containerization, cloud platforms, monitoring and logging, and CI/CD. My ultimate objective is to assist organisations in achieving quicker, more effective software delivery while maintaining high levels of quality and dependability.