Topic 2: Deployment strategies for Kubernetes
In this post, let's delve into common Kubernetes deployment strategies and look at the advantages and disadvantages of each. A suitable deployment strategy can minimize downtime and increase reliability when releasing your application.
At the manifest file level, there are two strategies you can specify:
Recreate
Rolling
1. Recreate Deployment
A Recreate deployment terminates all the existing pods and replaces them with the new version. This is useful in situations where the old and new versions of the application cannot run at the same time. The amount of downtime incurred with this strategy depends on how long the application takes to shut down and start back up. The application state is entirely renewed, since the pods are completely replaced.
Sample section in the YAML file:
spec:
  replicas: 3
  strategy:
    type: Recreate
2. Rolling Deployment
Rolling deployments are the Kubernetes default. A rolling deployment replaces pods running the old version of the application with the new version, one by one, without downtime.
This is achieved using readiness probes. A readiness probe monitors when the application becomes available. If the probe fails, no traffic is sent to the pod. An application may also become overloaded with traffic and cause the probe to fail, which prevents more traffic from being sent to the pod and allows it to recover.
Once the readiness probe detects that the new version is available, the old version is removed. If there is a problem, the rollout can be stopped and rolled back to the previous version. Because pods are replaced one by one, deployments can take time on larger clusters. If a new deployment is triggered before another has finished, the target version is updated to the one specified in the new deployment, and the previous deployment's version is disregarded wherever it has not yet been applied.
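As a rough sketch, a readiness probe sits on the container in the pod template. The nginx image, /healthz path, and port 80 below are illustrative assumptions, not taken from the manifests in this post:

spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.18
        readinessProbe:
          httpGet:
            path: /healthz         # assumed health-check endpoint
            port: 80
          initialDelaySeconds: 5   # wait before the first probe
          periodSeconds: 10        # probe every 10 seconds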
In the manifest file, under spec: -> strategy:, you can make use of two optional parameters: maxSurge and maxUnavailable. Both can be specified as a percentage or an absolute number. A percentage figure should be used when Horizontal Pod Autoscaling is in use.
maxSurge specifies the maximum number of pods the Deployment is allowed to create above the desired replica count at one time.
maxUnavailable specifies the maximum number of pods that are allowed to be unavailable during the rollout.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 # Maximum number of pods that can be unavailable during the update
      maxSurge: 1       # Maximum number of extra pods that can be created above the desired count during the update
A rolling deployment is triggered when something in the pod spec changes, such as the image, environment, or labels of a pod. A pod image can be updated using the kubectl set image command.
kubectl set image deployment/newapp nginx=nginx:1.18
Monitor the update process:
kubectl rollout status deployment/newapp
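If something looks wrong during or after the rollout, the Deployment can be rolled back to its previous revision. Using the newapp Deployment from the example above:

kubectl rollout history deployment/newapp   # list recorded revisions
kubectl rollout undo deployment/newapp      # roll back to the previous revision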
What about other strategies, like Blue-Green Deployment?
A blue-green deployment, in concept, means having two sets of the application: the new application version (green) is deployed alongside the previous one (blue).
A load balancer, in the form of the Service selector, is used to direct traffic to the new application (green) instead of the old one once it has been tested and verified. Do understand that costs roughly double during the deployment period, because both sets of application resources need to be running at the same time.
kind: Service
metadata:
  name: webapp01
  labels:
    app: webapp
spec:
  selector:
    app: webapp
    version: v1.0.0
In the blue Deployment:
kind: Deployment
metadata:
  name: webapp01
spec:
  template:
    metadata:
      labels:
        app: webapp
        version: "v1.0.0"
When we want to direct traffic to the new (green) version of the app, we update the Service's selector to point to the new version, v1.1.0.
kind: Service
metadata:
  name: webapp01
  labels:
    app: webapp
spec:
  selector:
    app: webapp
    version: v1.1.0
The green Deployment:
kind: Deployment
metadata:
  name: webapp02
spec:
  template:
    metadata:
      labels:
        app: webapp
        version: "v1.1.0"
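One way to apply this switch, assuming the Service above has already been created as webapp01, is to patch its selector in place (applying the updated Service manifest with kubectl apply works just as well). This is only a sketch:

kubectl patch service webapp01 -p '{"spec":{"selector":{"app":"webapp","version":"v1.1.0"}}}'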
Canary Deployment
A canary deployment strategy can be used in scenarios where a subset of users is selected to test a new version of the application, or when the new version's functionality is still in doubt. It involves deploying the new version of the application alongside the old one, with the old version serving most users and the newer version serving a small pool of test users. The new deployment is rolled out to more users if it is successful.
Take, for example, an application with about 50 running pods: 10% of them can run v2 while the other 90% keep running v1, as sketched below.
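A rough sketch of that split, assuming a Service whose selector matches only app: webapp so that traffic spreads across both versions roughly in proportion to replica counts. The names webapp-v1 and webapp-v2 and the labels here are illustrative, not taken from the manifests above:

kind: Deployment
metadata:
  name: webapp-v1
spec:
  replicas: 45               # ~90% of the pods stay on v1
  template:
    metadata:
      labels:
        app: webapp
        version: "v1"
---
kind: Deployment
metadata:
  name: webapp-v2
spec:
  replicas: 5                # ~10% of the pods run v2 (the canary)
  template:
    metadata:
      labels:
        app: webapp
        version: "v2"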
However, this sort of strategy is better achieved through a load balancer such as NGINX, HAProxy, or Traefik, a service mesh like Istio, HashiCorp Consul, or Linkerd, or a cloud provider's offering such as MSE or App Mesh.
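For instance, with Istio (and assuming a DestinationRule already defines subsets v1 and v2 for the webapp service, which is not shown here), a VirtualService can split traffic by weight instead of by replica count. A minimal sketch:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: webapp
spec:
  hosts:
  - webapp
  http:
  - route:
    - destination:
        host: webapp
        subset: v1       # stable version (assumed DestinationRule subset)
      weight: 90
    - destination:
        host: webapp
        subset: v2       # canary version (assumed DestinationRule subset)
      weight: 10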