Untwist the world of Docker - P4
What are Dockerfiles?
A Dockerfile is a text file that contains the instructions for building a Docker image. Docker is a platform that lets developers package their applications and dependencies into containers: lightweight, portable environments that run on any system with Docker installed. By defining the application's dependencies and settings in a standardized way, a Dockerfile makes it easy to reproduce the same environment anywhere. Its instructions specify the base image to use, along with additional layers that install dependencies, copy files into the image, set environment variables, and configure the runtime environment. Once a Dockerfile is written, it is used with the docker build command to produce a Docker image, which makes Dockerfiles an essential tool for building and deploying applications in Docker containers.
Format of a Dockerfile
FROM -> Pulls the base image.
RUN -> Runs a command while building the image.
EXPOSE -> Documents the port the container listens on.
COPY -> Copies files & directories from the host into the image.
ENV -> Sets an environment variable.
CMD -> Specifies the default command to run when a container starts from the image.
ENTRYPOINT -> Specifies the command to run when a container starts from the image, while still allowing additional arguments to be passed in.
ADD -> Copies files from the host into the image; it can also download files from a URL and automatically extracts local tar archives.
ARG -> Defines build-time variables that are passed while building the image.
VOLUME -> Creates a mount point inside the container so data can be persisted or shared between containers.
WORKDIR -> Sets the working directory.
MAINTAINER -> Sets the name/email of the author (deprecated in favor of LABEL).
LABEL -> Adds metadata.
USER -> Sets the user the container runs as.
HEALTHCHECK -> Specifies a command Docker runs to check the container's health.
SHELL -> Specifies the shell used to run commands.
STOPSIGNAL -> Specifies the signal sent to the container when you want to stop it gracefully.
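To show how several of these instructions fit together before the concrete examples below, here is a minimal sketch of a hypothetical Python web service; the app name, port, non-root user, and /health endpoint are all assumptions, not a real project:

# Hypothetical example combining several instructions
FROM python:3.9-slim
LABEL maintainer="you@example.com"

# Build-time variable; can be overridden with: docker build --build-arg APP_ENV=staging .
ARG APP_ENV=production
# Also make the value available at runtime
ENV APP_ENV=${APP_ENV}

WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt

# Run as a non-root user for better security
RUN useradd -m appuser
USER appuser

EXPOSE 8000

# Assumes the app serves a /health endpoint
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

# ENTRYPOINT fixes the executable; CMD supplies default arguments that can be overridden at docker run
ENTRYPOINT ["python"]
CMD ["app.py"]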
Dockerfile Examples
Dockerfile for Simple Python Application:
# Use an existing Python image as a base
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Run app.py when the container launches
CMD ["python", "app.py"]
Explanation:
FROM python:3.9-slim: Specifies the base image to use, an existing Python 3.9 slim image.
WORKDIR /app: Sets the working directory inside the container to /app.
COPY . /app: Copies the current directory's contents into the /app directory inside the image.
RUN pip install --no-cache-dir -r requirements.txt: Installs the dependencies listed in the requirements.txt file.
CMD ["python", "app.py"]: Specifies the command to run when the container starts, which in this case executes the app.py file with Python.
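As a quick sketch of how this image might be built and run (the tag python-app is just an example name, and the run command assumes app.py is a long-running program such as a web server):

$ docker build -t python-app .
$ docker run --rm python-app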
Dockerfile for Node.js Application with Build Stage:
# Stage 1: Build the application
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Create a lightweight image
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Explanation: The above Dockerfile is divided into 2 stages, and each stage contains a few instructions:
a) Stage 1: Builds the Node.js application.
FROM node:14 AS build: Uses the Node.js 14 image as the base and gives the stage the alias build.
WORKDIR /app: Sets the working directory.
COPY package*.json ./: Copies the package manifests into the image.
RUN npm install: Installs the dependencies.
COPY . .: Copies the application source code.
RUN npm run build: Builds the application.
b) Stage 2: Creates a lightweight Docker image for serving the built application.
FROM nginx:alpine: Uses the Nginx Alpine image.
COPY --from=build /app/build /usr/share/nginx/html: Copies the built application from the previous stage into Nginx's web root.
EXPOSE 80: Exposes port 80 for the application.
CMD ["nginx", "-g", "daemon off;"]: Runs Nginx in the foreground so Docker can manage the process.
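A possible way to build and run this image (the tag node-web is a hypothetical name), mapping host port 8080 to container port 80:

$ docker build -t node-web .
$ docker run -d -p 8080:80 node-web
# The site should then be reachable at http://localhost:8080/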
Dockerfile for Static Website:
# Use a lightweight base image
FROM nginx:alpine

# Copy static HTML files to the default Nginx web server directory
COPY . /usr/share/nginx/html

# Expose port 80 to allow outside access
EXPOSE 80

# Command to start Nginx server in the foreground
CMD ["nginx", "-g", "daemon off;"]
Explanation:
This Dockerfile uses an Nginx image based on Alpine Linux.
COPY . /usr/share/nginx/html: Copies the static files (e.g., index.html, CSS, JavaScript) into Nginx's default web root.
EXPOSE 80: Exposes port 80 to allow outside access to the static website.
CMD ["nginx", "-g", "daemon off;"]: Starts the Nginx server in the foreground so that Docker can manage the process.
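For context, a sketch of what the build context for this image might look like (the file names are assumptions), followed by the build and run commands with the hypothetical tag static-site:

# Assumed layout of the build context
# .
# ├── Dockerfile
# ├── index.html
# ├── styles.css
# └── script.js

$ docker build -t static-site .
$ docker run -d -p 8080:80 static-site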
Dockerfile for Spring Boot Application:
FROM adoptopenjdk/openjdk11:alpine-jre
WORKDIR /app
COPY build/libs/*.jar app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
Explanation:
Uses an OpenJDK 11 JRE image based on Alpine as the base for the Java application.
Copies the Spring Boot executable JAR from build/libs into the container as app.jar.
Exposes port 8080 for the Spring Boot application, and the CMD runs the JAR with java -jar when the container starts.
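Note that this Dockerfile expects the JAR to already exist under build/libs, which matches a Gradle project layout, so the application has to be built before the image. A rough sketch of the full flow (the tag spring-app is an assumption):

# Build the executable JAR first (Gradle project assumed)
$ ./gradlew build

# Then build and run the image
$ docker build -t spring-app .
$ docker run -d -p 8080:8080 spring-app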
Dockerfile for PHP Application:
FROM php:8.0-apache
COPY src/ /var/www/html/
EXPOSE 80
Explanation:
This Dockerfile uses the official PHP 8.0 image with Apache.
Copies the PHP application source files from the local src/ directory into the container's web root directory.
Exposes port 80 for serving HTTP traffic.
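A rough sketch of building and running it, assuming the PHP code lives in a local src/ directory and using the hypothetical tag php-app:

$ docker build -t php-app .
$ docker run -d -p 8080:80 php-app
# Apache then serves the copied files at http://localhost:8080/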
Dockerfile for Static Website with React:
FROM node:14 AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build

FROM nginx:latest
COPY --from=builder /app/build/ /usr/share/nginx/html
EXPOSE 80
Explanation:
This Dockerfile uses a multi-stage build for a React application.
The first stage builds the React application.
The second stage uses the Nginx image and copies the built static files into Nginx's HTML directory.
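One practical addition for a Dockerfile like this (a suggestion, not part of the original example) is a .dockerignore file next to the Dockerfile, so that node_modules, local build output, and the Git history are not sent to the build context:

# .dockerignore
node_modules
build
.git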
Dockerfile for MySQL Database:
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD=Vij@y@006
ENV MYSQL_DATABASE=mysqldb
ENV MYSQL_USER=vijaysingh
ENV MYSQL_PASSWORD=vijaysingh006
EXPOSE 3306
Explanation:
The above Dockerfile uses the official MySQL image.
It then sets environment variables to configure the MySQL root password, database name, user, and password.
Exposes port 3306 for MySQL connections.
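A sketch of running this image and connecting to it (the tag mydb-image and the container name mydb are assumptions; the credentials come from the ENV values above):

$ docker build -t mydb-image .
$ docker run -d --name mydb -p 3306:3306 mydb-image

# Connect using the mysql client bundled inside the container
$ docker exec -it mydb mysql -u vijaysingh -p mysqldb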
The above are a few examples of Dockerfiles for carrying out different tasks; they demonstrate how Dockerfiles define the environment and dependencies for various types of applications.
Command for creating a Docker Image using a Dockerfile:
$ docker build -t <image name> .
Command for creating a Docker Image with build arguments using the Dockerfile:
Suppose you have built an OTT platform website like Netflix or Amazon Prime, and you want it to temporarily show some movies when the site is opened. You can fetch that movie data from an external API and pass the API key as a build argument when building the Docker image:
$ docker build --build-arg OMDB_V3_API_KEY=<your-API-key> -t <image name> .
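For the --build-arg flag to have any effect, the Dockerfile has to declare a matching ARG. A minimal sketch of how that might look (the ENV line is just one common way to make the value available at runtime; note that anything baked in with ENV stays visible in the image, so real secrets are usually injected at runtime instead):

# In the Dockerfile
ARG OMDB_V3_API_KEY
ENV OMDB_V3_API_KEY=${OMDB_V3_API_KEY}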
Command for running a Docker Container from a Docker Image:
$ docker run -d -p <port-on-host-machine>:<port-inside-container> <image name>
# example
$ docker run -d -p 3000:3000 <image name>
Command for Tagging a Docker Image:
$ docker tag <image name>:latest <dockerhub-username>/<image name>:latest
Command for pushing the Docker Image to DockerHub:
$ docker push <dockerhub-username>/<image name>:latest
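Before pushing, you also need to be logged in to Docker Hub; a quick sketch:

$ docker login
$ docker push <dockerhub-username>/<image name>:latest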
Command to view the logs of a running Docker Container:
$ docker logs 1b804a8a6e946f67a71e6dbd9c8f2f32f78594f11c37b33e71f8f4d50f55f18e
Server running at http://localhost:3000/
Sample Output of building a Docker Image:
$ docker build -t myapp .
Sending build context to Docker daemon 4.82 kB
Step 1/5 : FROM node:14
---> 05310d03a1e8
Step 2/5 : WORKDIR /app
---> Using cache
---> 23d26d931e24
Step 3/5 : COPY package*.json ./
---> Using cache
---> 02a015052e5a
Step 4/5 : RUN npm install
---> Using cache
---> 7a5b46d1312c
Step 5/5 : COPY . .
---> Using cache
---> 5dbd36f1ef39
Successfully built 5dbd36f1ef39
Successfully tagged myapp:latest
Sample Output of running the above created Docker Image as a container mapped to PORT 3000:
$ docker run -d -p 3000:3000 myapp
1b804a8a6e946f67a71e6dbd9c8f2f32f78594f11c37b33e71f8f4d50f55f18e
Sample Output to view running Docker Container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cba9876zyxw myapp:latest "nginx -g 'daemon of…" 2 minutes ago Up 2 minutes 0.0.0.0:3000->3000/tcp myapp-container
Sample Output of tagging the Docker Image:
$ docker tag myapp:latest <dockerhub-username>/myapp:latest
Sample Output for pushing the Docker Image to DockerHub:
$ docker push <dockerhub-username>/myapp:latest
The push refers to repository [docker.io/username/myapp]
abcd1234efgh: Pushed
latest: digest: sha256:5678abcdefgh9101112ijklmnopqrstuv verified
Sample Output for verifying whether the Docker Image was tagged and pushed to DockerHub (docker images lists the local tags; the push itself can also be confirmed on Docker Hub):
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myapp latest abcd1234efgh 3 minutes ago 123MB
username/myapp latest abcd1234efgh 3 minutes ago 123MB
Sample Output for viewing the logs of the myapp Docker Container created above (optional):
$ docker logs 1b804a8a6e946f67a71e6dbd9c8f2f32f78594f11c37b33e71f8f4d50f55f18e
Server running at http://localhost:3000/
Written by Yash Varma