Comparing Sidecar and Init Containers in Production: Understanding Trivy and Kyverno
Image Security: Trivy
A container image has several layers, and sometimes one of these layers contains a CVE (Common Vulnerabilities and Exposures), in simple words a kind of backdoor, so the chances of the image being hacked are greater. So we want to make sure that the image we are using is secure.
To prevent this, there is a tool called Trivy.
Trivy is an open-source vulnerability scanner that detects misconfigurations, secrets, and vulnerabilities in containers and other artifacts:
Vulnerabilities: Trivy scans for vulnerabilities in OS packages and language-specific packages.
Misconfigurations: Trivy scans Infrastructure as Code (IaC) files like Kubernetes and Terraform to detect potential configuration issues.
Secrets: Trivy scans for hardcoded secrets like passwords, API keys, and tokens.
Targets: Trivy can scan container images, file systems, Git repositories, Kubernetes clusters, and resources.
Support: Trivy supports multiple formats, including container images, tar archives, and image directories.
Trivy is simple to use, and all you need to do is install the binary and specify a target. You can generate scan reports and store them long term in S3 buckets or other long-term solutions.
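A minimal sketch of that report-shipping flow (the bucket name is an illustrative assumption, not from the original):
# write the scan report as JSON, then push it to long-term storage
trivy image --format json --output nginx-report.json nginx
aws s3 cp nginx-report.json s3://my-scan-reports/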
There are two ways to use Trivy:
Manual scanning
Adding a stage to a CI/CD pipeline
Here are some steps for using Trivy to scan an image:
1. Install Trivy: Trivy can be installed as a binary, container image, or snap package.
# command to install trivy
sudo snap install trivy
2. Scan an image: Use the trivy command followed by the image name.
trivy image nginx
# To check only critical entries:
trivy image nginx --severity CRITICAL
# To check only high entries:
trivy image nginx --severity HIGH
# To check only low entries:
trivy image nginx --severity LOW
3. Configure Trivy: Customize the scan by specifying the output format, severity level, and more.
4. Interpret the results: The results are displayed in the terminal and show the vulnerabilities found in the image.
5. Remediate: Take action to remediate the vulnerabilities.
Trivy can also be integrated into a development workflow for continuous scanning. It can be deployed using Dockerfiles, Kubernetes workloads, and cloud provider integrations.
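As a sketch of such a pipeline stage (the image name is an illustrative assumption), Trivy's --exit-code flag makes the scan fail the build when findings at the listed severities exist:
# fail the CI stage if CRITICAL or HIGH vulnerabilities are found
trivy image --exit-code 1 --severity CRITICAL,HIGH myregistry/myapp:latest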
Trivy has a set of built-in rules for secret scanning, including AWS access key, GCP service account, GitHub personal access token, GitLab personal access token, and Slack access token. These rules can be extended or modified with a configuration file.
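For example, assuming a recent Trivy version that supports the --scanners flag, secret scanning can be pointed at a local project directory:
# scan the current directory for hardcoded secrets only
trivy fs --scanners secret .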
NOTE:
Our task is to minimize all the CVEs in production.
CVEs here are grouped into three severity types:
Critical: Vulnerabilities allowing remote code execution or full system compromise with minimal user interaction; must not be present in production or staging and require immediate remediation before deployment.
High: Serious vulnerabilities that could lead to system compromise or data leakage but require more complex conditions to exploit; should be patched in staging and avoided in production.
Low: Minor vulnerabilities with limited security impact or requiring unlikely attack scenarios; can be present in production and staging but should be addressed during regular maintenance.
A handy tool: bat, a drop-in replacement for cat with syntax highlighting
# command to install in Ubuntu 22.04 LTS
sudo apt install wget
wget https://github.com/sharkdp/bat/releases/download/v0.23.0/bat_0.23.0_amd64.deb
sudo dpkg -i bat_0.23.0_amd64.deb
To secure an image (see the Dockerfile sketch below):
Multi-stage builds
Distroless images
The Trivy tool
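A minimal sketch combining the first two points (the Go app, file names, and image tags are illustrative assumptions, not from the original): a multi-stage build compiles in a full-featured image, then copies only the binary into a distroless runtime image.
# build stage: full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# final stage: distroless image with no shell or package manager
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]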
Admission Control: Kyverno
What is Kyverno?
Kyverno is an open-source policy engine that helps improve the security and compliance of Kubernetes environments. It allows users to manage, validate, and enforce configurations using Kubernetes native resources.
For this you just need to know YAML.
Official documentation of Kyverno: https://release-1-8-0.kyverno.io/docs/introduction/
What is OPA?
OPA stands for Open Policy Agent, an open-source policy engine that helps enforce policies for cloud infrastructure.
Open Policy Agent (OPA) uses the Rego language to write policies.
Official documentation of OPA (Open Policy Agent): https://www.openpolicyagent.org/docs/latest/
What is Helm?
Helm is a package manager for Kubernetes that uses charts to deploy applications. Git is a version control system (VCS) that is essential for modern software development.
Official Docs: https://helm.sh/docs/intro/quickstart/
Some Helm commands:
# to add a repo
helm repo add <repo name> <repo link>
# to update repo
helm repo update
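For example, Kyverno itself is typically installed with Helm, following exactly this pattern (repo URL and chart name per the Kyverno docs):
# add the Kyverno chart repo and install it into its own namespace
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace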
Kyverno's main components:
- CRDs (Custom Resource Definitions): Define custom Kubernetes resources that Kyverno uses to enforce policies.
A CRD (Custom Resource Definition) in Kubernetes allows developers to extend the Kubernetes API by defining custom resources specific to their application needs. CRDs enable the creation of custom objects and controllers that can behave like native Kubernetes resources (e.g., Pods, Services). These custom resources can be used to manage application-specific logic and automation.
In the case of Kyverno, CRDs are used to define custom policies for security, compliance, and resource management within Kubernetes clusters. By using CRDs, Kyverno extends Kubernetes' default functionality, allowing users to write policy definitions as custom resources. This extension integrates seamlessly with the Kubernetes API, enabling Kyverno to apply policies to resources throughout the cluster in real-time.
So, if someone wanted to create a solution like Kyverno to extend the Kubernetes API, they would:
- Define CRDs that represent custom resources specific to their application logic or policy enforcement.
- Build controllers that handle the creation, modification, and enforcement of these custom resources.
- Integrate these custom resources into the Kubernetes control plane to behave like native Kubernetes objects, making the system more extensible and customizable.
In Kyverno's case, the custom resources are policy-related objects that enforce security and operational rules across the cluster.
- Admission Controller: Intercepts and validates requests to the Kubernetes API, ensuring policy compliance before resources are created or modified.
- Reports Controller: Generates policy violation and compliance reports for resources in the cluster, which can be sent to an audit team.
- Cleanup Controller: Automatically removes outdated resources based on defined policies and cleanup rules.
- Background Controller: Continuously scans and enforces policies on existing cluster resources, not just new ones.
Kyverno Use-Cases:
Enforce specific images: If you want to ensure a particular image is used, Kyverno will enforce it.
Add specific labels: If you need to automatically add a specific label to your pod, Kyverno will handle it.
Enforce resource limits: If you want to set specific CPU and memory limits, Kyverno will ensure they are applied.
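As a sketch of how such a policy looks (policy and label names are illustrative assumptions; this is a validate rule that requires a label, whereas a mutate rule would add one automatically), following the ClusterPolicy format from the Kyverno docs:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: enforce   # reject non-compliant resources ("Enforce" on newer Kyverno versions)
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Every Pod must carry a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"               # any non-empty value is accepted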
NOTE:
To watch continuous updates to pods, use this command:
kubectl get pods --watch
Command to apply policy:
# the -f flag stands for 'file'
kubectl apply -f <name of the policy>
To check policy reports (NOTE: policy reports are generated for validation rules):
kubectl get policyreports
# short name: kubectl get polr
Kube Linter:
Kubelinter is a static analysis tool that checks Kubernetes YAML files and Helm charts to ensure the applications represented in them adhere to best practices and are free from misconfigurations. It's an open-source command-line interface that identifies potential issues in Kubernetes deployments before they are applied to a cluster.
Install Kubelinter: Download and install the latest release of Kubelinter.
Run Kubelinter: Use the kube-linter lint command to analyze your YAML files or Helm charts.
Review Results: Examine the output to identify any issues and take corrective action.
kube-linter lint path/to/your/yaml/file.yaml
For more: https://github.com/stackrox/kube-linter
Kube-Bench (this topic is important for the CKS certification)
Kube-bench is an open-source tool that assesses the security of a Kubernetes cluster by comparing it to the Center for Internet Security (CIS) Kubernetes benchmark:
In simple words, it will help secure your cluster. By security, what I mean is that it will scan the configuration files and give you the issues along with solutions (listed in the output as remediations, e.g. "Remediations master").
What it does: Kube-bench runs automated checks against the Kubernetes API server, etcd service, and worker nodes to verify that the cluster is configured securely.
How it works: Kube-bench tests the cluster against best practices and configuration standards that are important for production environments. It provides detailed reports on the checks performed and recommendations for fixing any failed tests.
How to use it: You can run kube-bench by typing kube-bench run, or schedule Kubernetes Jobs to run kube-bench at set intervals.
Features: Kube-bench is a Go application that uses YAML files to configure tests, making it easy to update the tool when test specifications change. You can also raise issues if kube-bench is not implementing a test correctly.
Limitations: Kube-bench cannot inspect the master nodes of managed clusters, such as GKE, EKS, AKS.
Kube Bench can be integrated into CI/CD pipelines to ensure that every new configuration or update to the Kubernetes cluster adheres to the CIS benchmarks. This can be automated using scripts or by incorporating Kube Bench into pipeline tools like Jenkins, CircleCI, or GitLab CI.
official repo: https://github.com/aquasecurity/kube-bench
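One common way to run it in-cluster, following the project's README, is as a one-shot Kubernetes Job (the manifest path below is the one the repo ships; verify it against the current repo):
# run kube-bench as a one-shot Job and read its report from the logs
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench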
What is a Pod?
Definition: A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers that share the same network namespace, meaning they can communicate with each other using localhost.
Purpose: Pods are used to run applications and services in a Kubernetes cluster. They provide an environment for containers to operate together.
Static Pods
Definition: Static Pods are managed directly by the kubelet on a specific node, rather than by the Kubernetes API server.
- Automatically started by the kubelet when the node boots.
- Not managed by the Kubernetes control plane, meaning they won't be scheduled or rescheduled by Kubernetes.
Use Cases: Useful for running critical system components or when you need to ensure that a Pod runs on a specific node.
This is the path of the static pod manifests:
cd /etc/kubernetes/manifests
pwd
# this command will return
/etc/kubernetes/manifests
# running ls shows 4 control-plane components of k8s
ls
etcd.yaml kube-controller-manager.yaml
kube-apiserver.yaml kube-scheduler.yaml
All these components run as static pods via the kubelet.
Where is the kubelet configuration?
cd /var/lib
ls
# or
ls | grep kubelet
Here in the kubelet directory there is a file called config.yaml, where staticPodPath is defined as /etc/kubernetes/manifests.
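You can confirm this directly on a kubeadm-provisioned node (paths may differ on other setups):
# show where the kubelet looks for static pod manifests
grep staticPodPath /var/lib/kubelet/config.yaml
# expected output: staticPodPath: /etc/kubernetes/manifests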
NOTE:
Get your hands dirty: practice and play with it to learn.
DaemonSet:
A DaemonSet is a better, managed alternative to a static pod: it runs a copy of a pod across multiple nodes.
A DaemonSet is a Kubernetes resource that ensures that a copy of a specific Pod runs on all (or a subset of) nodes in a cluster.
- DaemonSets ensure that a specific Pod runs on all or selected nodes, managed by the Kubernetes control plane.
- They are perfect for running background tasks such as log collection, monitoring, and other node-specific services.
Documentation: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
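A minimal DaemonSet sketch for the log-collection case (name and image are illustrative assumptions):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: fluentd:v1.16    # illustrative log-collection image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog            # read each node's logs from the host
        hostPath:
          path: /var/log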
Deployments vs DaemonSets
DaemonSet: Ensures that a specific pod runs on all or some nodes in a cluster. DaemonSets are often used to deploy background programs that perform tasks like logging and monitoring. They distribute the workload evenly across the cluster, ensuring availability.
Deployment: Manages the number of pods and where they should be on nodes. Deployments are used to manage the rollout and rollback of application updates, providing fault tolerance and minimal downtime. Deployments can also use labels and other functions to select nodes to place replicas.
Here are some other differences between DaemonSets and Deployments:
Scaling: A DaemonSet scales with the cluster, one pod per eligible node, while a Deployment runs a fixed, user-specified number of replicas.
Cluster size: DaemonSets don't need to know the number of nodes in the cluster; Deployments need an explicit replica count regardless of cluster size.
Placement: DaemonSet pods are pinned to their nodes, while Deployment pods can land on any node the scheduler picks and can be rescheduled elsewhere.
What is the Backoff Algorithm in Kubernetes?
The backoff algorithm in Kubernetes is a delay between restarts, used to prevent a node from being overwhelmed by repeated failed container starts.
Initial Delay: Retries start with a base delay (10 seconds).
Exponential Backoff: The wait time doubles after each failure (10s, 20s, 40s, ...).
Maximum Delay: The delay keeps growing until it is capped at five minutes (300 seconds).
What Happens When the Cap Is Reached?
Capped retries: Kubernetes does not stop retrying; it keeps restarting the container at the maximum interval, and the backoff resets after the container runs cleanly for 10 minutes.
Resulting State:
The pod is marked as
**CrashLoopBackOff**.
For Jobs, retries do stop once the Job's backoffLimit (default 6) is exhausted, and the Job is marked failed (see the sketch below).
Notifications are sent out if alerting is configured for such failures.
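Here is the Job sketch referenced above (names are illustrative; the failing command exists only to trigger the backoff):
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-job
spec:
  backoffLimit: 6              # default; the Job fails after this many retries
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.28
        command: ['sh', '-c', 'exit 1']   # always fails, to demonstrate the backoff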
Blog on CrashLoopBackOff
https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/
Init Containers
Generally, in production, multiple containers run together in a pod; this is called the multi-container pattern.
A pod can have Init Containers in addition to application containers. Init containers allow you to reorganize setup scripts and binding code.
An init container is the one that starts and executes before other containers in the same Pod. It’s meant to perform initialization logic for the main application hosted on the Pod.
Init containers are specialized containers that run before the main application containers in a pod to perform initialization tasks. They are used (use cases) for a variety of tasks, including:
Setting up configuration files
Initializing databases
Containing utilities or setup scripts
Reorganizing setup scripts and binding code
Limiting the attack surface
Delaying the start of the main containers
Generating configuration files
Fetching passwords, secrets, etc.
Distroless images are very important in terms of security, so a common pattern is to let init containers do the preparatory work and then run the application on top of a distroless base image.
Init containers are executed sequentially, and each one must complete successfully before the next one starts. They can be specified in the Pod specification alongside the containers array.
Zero exit code:
Each init container must complete successfully with exit code zero before the next init container (if any is defined) runs; only after all of them succeed does the application container run.
Non-zero exit code:
If any init container returns a non-zero exit code, the kubelet restarts it. The application container will not run until it succeeds.
NOTE: We can run multiple init containers.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
kubectl apply -f my-app.yaml
kubectl get pods
# to see log of the particular container
kubectl logs <pod-name> -c <container-name>
NOTE: After an init container finishes its work, it terminates.
Sidecar Containers:
A sidecar container is a secondary container that runs alongside the main application container in a Kubernetes pod to enhance its functionality.
A sidecar is just a container that runs in the same Pod as the application container. It shares the same volumes and network as the main container, so it can "help" or enhance how the application operates.
Why is a sidecar container added to the pod alongside the main container? Because it needs to use the same resources that the main container uses.
The main use cases for sidecar containers are logging and monitoring.
When did sidecar containers come into existence? Native sidecar support reached beta in Kubernetes 1.29.
Can we run the application code and monitoring in one container?
Ans: It's not recommended to run application code and monitoring in the same container, because containers are designed to follow the single-responsibility principle: one container should run one process. Combining them violates this architecture, making management, scaling, resource allocation, and updates more difficult.
Best Practice:
Instead, use sidecar containers within the same Pod. Sidecar containers allow you to run auxiliary services, like monitoring, logging, or network proxies, alongside your main application container, but still keep them decoupled and maintainable.
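A minimal sketch of the native sidecar pattern mentioned above (Kubernetes 1.29+; names and images are illustrative assumptions): the sidecar is declared as an init container with restartPolicy: Always, which keeps it running alongside the main container.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
  - name: log-shipper              # native sidecar: init container + restartPolicy Always
    image: busybox:1.28
    restartPolicy: Always
    command: ['sh', '-c', 'tail -F /var/log/app/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'while true; do echo hello >> /var/log/app/app.log; sleep 5; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs                     # shared volume lets the sidecar read the app's logs
    emptyDir: {}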
1. Actual cycle of a Pod, how the container runs:
kubectl → API server → etcd → API server → Scheduler → API server → kubelet
2. Init Containers [Exit code 0]
What is an exit code?
An exit code indicates the status of a container after it has finished running:
Exit code 0: This means the container completed its task successfully without any errors. It indicates a normal, successful termination.
Non-zero exit codes: These indicate errors or failures. The specific number can provide more details about the type of error.
3. The main container is created.
4. Probes: Probes make sure that the application is healthy.
There are 3 types of probes (see the YAML sketch after this sequence):
1) Liveness Probe: Checks if the container is alive and running. If it fails, Kubernetes restarts the container.
2) Readiness Probe: Checks if the container is ready to serve requests. If it fails, the container is removed from the service's endpoints.
3) Startup Probe: Ensures the container has started correctly. It is useful for containers that take a long time to initialize before accepting traffic.
5. Then traffic starts flowing.
And all of this happens in just a few seconds, around five or so.
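Here is the probe sketch referenced above (image, paths, and timings are illustrative assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    startupProbe:                # gives a slow-starting app time before other probes kick in
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 2
    livenessProbe:               # container is restarted if this check fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:              # pod is pulled from Service endpoints if this check fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5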
Comparison of Sidecar Containers and Init Containers:

| | Sidecar Containers | Init Containers |
| --- | --- | --- |
| Purpose | Provide supporting functionality to the main application | Perform initialization tasks before the main application containers start |
| Lifecycle | Independent lifecycle; can be started, stopped, and restarted independently | Transient lifecycle; exists solely to complete its assigned task during pod initialization |
| Restart Policy | Supports restart policies (Always, OnFailure, Never) | Does not support a restart policy |
| Use Cases | Logging and monitoring, security operations, data synchronization | Configuration setup, database initialization, resource preparation |
Lifecycle of Pod and Container

| Pod Lifecycle (phases) | Container Lifecycle (states) |
| --- | --- |
| Pending | Running |
| Succeeded | Waiting |
| Failed | Terminated |
| Running | |
| Unknown | |
Termination of Pods
kubectl → API server → Controller Manager → Scheduler → Kubelet → SIGTERM + Grace Period → SIGKILL
kubectl issues a pod deletion request.
The API server receives the request and updates the pod's status to "Terminating."
Controller Manager adjusts the desired state and updates any related controllers (e.g., ReplicaSet).
The Scheduler takes no further action for the terminating pod; if a controller creates a replacement, it is scheduled as a new pod.
The Kubelet on the node where the pod is running receives the termination instruction.
Kubelet sends a SIGTERM signal to the pod’s containers, allowing graceful shutdown.
After the grace period (default 30s), if the containers are still running, SIGKILL is sent to forcefully terminate them.
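The grace period is tunable per deletion or per pod; a quick sketch (values are illustrative):
# override the grace period for a single deletion
kubectl delete pod my-pod --grace-period=60

# or set it in the pod spec:
# spec:
#   terminationGracePeriodSeconds: 60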
What is a Runtime Class?
A Runtime Class is a resource that defines a set of properties for a container runtime.
It allows you to select the container runtime configuration for a Pod, ensuring that the containers run with the desired runtime environment.
Example of RuntimeClass
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  runtimeClassName: my-runtime-class
  containers:
  - name: my-container
    image: my-image
The Pod my-pod uses the my-runtime-class RuntimeClass, which defines the runtime configuration for the containers in the Pod.
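For completeness, the referenced RuntimeClass object itself looks like this (the handler must match a runtime configured in your node's CRI, e.g. runc or a sandboxed runtime; the name matches the Pod above):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: my-runtime-class
handler: runc                # must correspond to a handler configured in the CRI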
Official docs : https://kubernetes.io/docs/concepts/containers/runtime-class/