Build Docker Images with Kaniko Inside Jenkins Deployed On Kubernetes
Building Docker images on a local system with a simple Dockerfile is an easy task for anyone who works with Docker and Kubernetes.
The tricky part is building Docker images inside Docker containers. That's nothing but Docker-in-Docker (DinD).
Most of us mount /var/run/docker.sock into the base image and build the Docker image that way, but this approach runs into errors when building Docker images inside Kubernetes pods:
Kaniko is a tool that enables building Docker images in a container without needing to run a Docker daemon. This makes it ideal for building images in environments where Docker is not installed or for building images inside a container.
Kaniko runs in a Docker container and has the single purpose of building and pushing a Docker image. This design means it’s easy for us to spin one up from within a Jenkins pipeline, running as many as we need.
Kaniko runs as a container and takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image.
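To make those three arguments concrete, here is a rough sketch of a standalone Kaniko run. It uses a local Docker daemon purely for illustration (inside Kubernetes, the same container runs as a pod instead); the registry name and image tag are placeholders.

```shell
docker run --rm \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --destination=registry.example.com/myapp:latest
```

The three flags map directly onto the three inputs: `--dockerfile` for the build steps, `--context` for the build context, and `--destination` for the registry to push to.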
What is the Docker Daemon:
The Docker daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. It constantly listens for Docker API requests and processes them.
One thing about Docker is that we need root access to interact with Docker commands, and running any application with root access obviously carries risk.
Kaniko: The Best Alternative to DinD in Kubernetes 1.24:
Kubernetes makes it easy to deploy and scale containerized applications. However, Kubernetes does not include a built-in way to build Docker images.
Previously, the common approach was to use a Docker-in-Docker (DinD) setup or to mount the Docker socket inside the container.
However, these approaches have some drawbacks, such as requiring privileged containers, potential security concerns, and difficulty in managing the Docker daemon inside the container.
In Kubernetes 1.24, the Kubernetes project removed the dockershim, so cluster nodes no longer run a Docker daemon by default and mounting the Docker socket is no longer an option on many clusters.
Instead, the recommended approach is to use a tool like Kaniko for building Docker images. Kaniko allows us to build Docker images without requiring a Docker daemon or mounting the Docker socket inside the container.
This eliminates the need for privileged containers, reduces security concerns, and makes it easier to manage the container.
Let’s start by creating a Kaniko pod to build Docker images.
Prerequisites Before Getting Started:
1. A Kubernetes environment with Docker registry secrets.
2. A Docker configuration file to mount as a volume at the /kaniko/.docker/ path of the Kaniko container.
3. A Docker build context with a Dockerfile.
Understanding Kaniko Arguments:
Before getting your hands dirty with Kaniko, let's take an overview of the arguments that matter when working with it.
Dockerfile: The Dockerfile is the file containing all the steps to run while building the image.
Destination: The destination is the Docker registry the built image should be pushed to. This means Kaniko builds and pushes the image in a single command.
If you just want to build the image without pushing it to a registry, you can use the --no-push flag, which builds the image and nothing more.
Build Context: The build context is nothing more than a normal Docker context, the directory from which Docker builds the image.
Kaniko supports a number of storage backends for the Docker context via the --context argument. The supported context sources for building Docker images are:
Git Repository
Local Directory
S3 Bucket
GCS Bucket
Azure Blob Storage
Standard Input
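For illustration, here is what the --context flag looks like for a few of these backends (the repository, bucket, and path names are placeholders):

```shell
# Git repository (a branch can be selected with a fragment)
--context=git://github.com/example-org/example-repo.git#refs/heads/main

# Local directory
--context=dir:///workspace

# S3 bucket holding a tarred build context
--context=s3://example-bucket/path/context.tar.gz

# GCS bucket
--context=gs://example-bucket/path/context.tar.gz
```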
Creating Secrets for AWS Credentials in the jenkins Namespace.
Since we are going to push the image built by Kaniko to a private AWS ECR repository, we first need an AWS access key and secret key with ECR permissions.
Kaniko reads the AWS access and secret keys from a volume that we mount during pod creation. So first, create an AWS credentials file.
Sample AWS Credentials File:
[default]
aws_access_key_id = AKXXXXXXXXXMQ
aws_secret_access_key = HdXXXXXXXXXXXXXXXXX458
Create Secrets From that file with this command:
kubectl create secret generic aws-secret --from-file=<path to Credentials file> -n jenkins
Docker manages its registry credentials with a config.json file inside the ~/.docker/ directory. So let's create a ConfigMap for the Docker configuration, which will manage the credential store for the AWS ECR registry.
{
"auths": {
"1234567890.dkr.ecr.ap-south-1.amazonaws.com": {},
"https://index.docker.io/v1/": {}
},
"credsStore": "ecr-login"
}
Use this command to create the ConfigMap in the jenkins namespace:
kubectl create configmap docker-config --from-file=<path to docker config.json file> -n jenkins
Now we are good to go with the Docker registry credentials for AWS. For GCR or the Docker registry, you may need to make some changes to Docker's config.json.
On Mac and Linux, config.json is normally stored at ~/.docker/config.json.
Going forward, create a Jenkinsfile with a pod template for Kaniko.
Set up the Jenkinsfile with the Kaniko executor image and mount volumes for the AWS credentials Secret and the Docker registry ConfigMap.
Here is the Pod.yaml that you should use to create the pod inside the Jenkinsfile.
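A minimal sketch of such a pod template, wired up with the aws-secret Secret and docker-config ConfigMap created above (the pod and container names are illustrative; the debug executor image is used because Jenkins needs a shell to keep the container alive):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
  namespace: jenkins
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: aws-secret           # AWS credentials for pushing to ECR
      mountPath: /root/.aws/
    - name: docker-config        # Docker config.json with the ECR credential store
      mountPath: /kaniko/.docker/
  volumes:
  - name: aws-secret
    secret:
      secretName: aws-secret
  - name: docker-config
    configMap:
      name: docker-config
```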
Create the Jenkins job stage with the environment variable PATH = "/busybox:/kaniko:$PATH". This variable helps Kaniko pick up its context from the current directory inside the pod container.
Now here is the final Jenkinsfile:
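As a sketch of how the pieces fit together in a declarative pipeline: the ECR registry URL matches the config.json example above, while the repository name myapp, the image tag, and the pod.yaml file path in the repo are assumptions.

```groovy
pipeline {
  agent {
    kubernetes {
      yamlFile 'pod.yaml'          // the Kaniko pod template checked into the repo
      defaultContainer 'kaniko'
    }
  }
  environment {
    // Lets Kaniko and busybox tools resolve from the current directory in the pod
    PATH = "/busybox:/kaniko:$PATH"
  }
  stages {
    stage('Build and Push with Kaniko') {
      steps {
        sh '''
          /kaniko/executor \
            --dockerfile=Dockerfile \
            --context=dir://$WORKSPACE \
            --destination=1234567890.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest
        '''
      }
    }
  }
}
```

Because the credential store in config.json is set to ecr-login, the push to ECR authenticates using the AWS credentials mounted from the aws-secret volume, with no docker login step in the pipeline.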
Thank you so much for reading the article till the end! 🙌🏻 Your time and interest truly mean a lot 😁📃.
If you have any questions or thoughts about this blog, feel free to connect with me:
LinkedIn: https://www.linkedin.com/in/ravikyada
Twitter: https://twitter.com/ravijkyada
Until next time, Cheers to more learning and discovery✌🏻!
Subscribe to my newsletter
Read articles from Ravi Kyada directly inside your inbox. Subscribe to the newsletter, and don't miss out.
Written by
Ravi Kyada
DevOps Engineer working on cloud automation, CI/CD, and monitoring. Improving my cloud skill set day by day and storing hard-coded memories in the bucket. Passionate about working with the cloud: AWS, GCP, Jenkins, Docker, Kubernetes, and Ansible.