#day17 - Unveiling Docker's Magic🧙‍♂️: Crafting Dockerfiles for Seamless Application Deployment🎁 (Part-2).

Sneha Falmari

🗃Dockerfile:

A Dockerfile is a text-based script used to define the configuration of a Docker image. It contains a series of instructions to assemble the image, such as installing packages, copying files, setting environment variables, and more.

(。・∀・)ノ゙Docker Compose:

Docker Compose is a tool used to define and manage multi-container Docker applications. It allows you to define services, networks, and volumes in a docker-compose.yml file, making it easier to manage complex applications.
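
For example, a minimal docker-compose.yml (a hypothetical sketch, assuming a Node.js web service built from a Dockerfile in the current directory plus a Redis cache) might look like this:

# docker-compose.yml: a minimal, hypothetical example
version: "3.8"
services:
  web:
    build: .                # build the web image from the Dockerfile in the current directory
    ports:
      - "8080:3000"         # publish container port 3000 on host port 8080
    depends_on:
      - redis               # start the redis service before the web service
  redis:
    image: redis:7          # official Redis image from Docker Hub

Running docker-compose up in this directory builds the image (if needed) and starts both services; docker-compose down stops and removes them.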

🔐Benefits of Docker Compose:

  1. Simplified Management: Docker Compose simplifies the management of multi-container applications by defining their relationships and configurations in a single file.

  2. Easy Setup: With a single command (docker-compose up), you can create and start all the services defined in the docker-compose.yml file.

  3. Isolation: Each service in Docker Compose runs in its own container, providing isolation and avoiding conflicts.

  4. Reproducibility: The docker-compose.yml file captures the entire application's configuration, making it easy to reproduce the environment on different systems.

  5. Scalability: Docker Compose allows you to scale services up or down easily based on the requirements of your application.

⚡Docker Volume:

A Docker volume is a way to persist and share data between Docker containers. It allows data to be stored outside the container, ensuring that data remains intact even if the container is removed.
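
As a quick illustration (a minimal sketch; the volume, mount path, and container names are placeholders):

# Create a named volume and mount it into a container
docker volume create my_data
docker run -d --name app1 -v my_data:/var/lib/app nginx

# Inspect volumes
docker volume ls
docker volume inspect my_data

Because the data lives in the volume rather than in the container's writable layer, removing app1 does not delete it, and any other container that mounts my_data sees the same files.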

🥽Docker Swarm and Its Benefits:

Docker Swarm is a native clustering and orchestration solution for Docker. It allows you to create and manage a swarm of Docker nodes, making it easy to deploy and manage containerized applications at scale.

🎀Benefits of Docker Swarm:

  1. Scalability: Docker Swarm enables easy scaling of applications by distributing containers across multiple nodes.

  2. High Availability: It provides automatic load balancing and failover, ensuring that applications remain available even if some nodes fail.

  3. Service Discovery: Docker Swarm includes built-in service discovery, making it simple for containers to communicate with each other.

  4. Security: Docker Swarm provides security features such as encrypted communication between nodes and mutual TLS authentication.
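
To make these benefits concrete, here is a minimal sketch of creating a single-node swarm and scaling a service (the service name is a placeholder; nginx is used only as an example image):

# Initialize a swarm on the current node, which becomes a manager
docker swarm init

# Create a service running 2 replicas of the nginx image, published on port 80
docker service create --name web --replicas 2 -p 80:80 nginx

# Scale the service to 5 replicas; Swarm distributes the tasks across available nodes
docker service scale web=5

# List services and the tasks (containers) backing them
docker service ls
docker service ps web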

🌈Docker Network:

Docker networks enable communication between containers running on the same host or across different hosts. They provide isolation, IP management, and better control over container communication.
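
For example (a minimal sketch with placeholder names; nginx and redis are used only as readily available images):

# Create a user-defined bridge network
docker network create my_net

# Attach two containers to it; on a user-defined network, containers can reach each other by name
docker run -d --name db --network my_net redis:7
docker run -d --name web --network my_net nginx

# List networks and see which containers are attached
docker network ls
docker network inspect my_net

Inside the web container, the Redis instance is reachable simply as db, because Docker provides DNS-based name resolution on user-defined networks.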

🙌Tasks:

❄Use the docker top command📢 to view the processes running inside a container🚛.

  1. First, you need to know the name or ID of the container for which you want to view the running processes. You can find this information by running docker ps in the terminal. This command will list all the running containers along with their names or IDs.

  2. Open a terminal or command prompt on your computer where Docker is installed and operational.

  3. Use the following syntax to run the docker top command:

     docker top <container_name_or_id>
    

    Replace <container_name_or_id> with the actual name or ID of the container you want to inspect.

    For example, if you have a container named "my_container," you can use the following command to view its running processes:

     docker top my_container
    
  4. After executing the command, Docker will display a table with information about the processes running inside the specified container. The table includes columns such as UID, PID, PPID, C, STIME, TTY, TIME, and CMD, which give each process's user, process ID, parent process ID, CPU usage, start time, terminal, execution time, and command.
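
Putting steps 1-4 together, a minimal sketch (assuming the container you want to inspect is the most recently started one and is still running):

     CONTAINER_ID=$(docker ps -q --latest)   # -q prints only IDs; --latest picks the newest container
     docker top "$CONTAINER_ID"              # list the processes running inside it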

❄Use the docker save command to save✔ an image to a tar archive⏬.

Before you begin, you need to know the name or ID of the Docker image you want to save. You can find it by running docker images, which lists all the images available on your system.

  1. Open a terminal or command prompt on your computer where Docker is installed and operational. Use the following syntax to run the docker save command:

     docker save -o <output_filename.tar> <image_name>
    
  2. Replace <output_filename.tar> with the desired name for the output tar archive file and <image_name> with the name or ID of the Docker image you want to save.

    For example, if you want to save an image named "my_image" to a file named "my_image_archive.tar," your command would be:

     docker save -o my_image_archive.tar my_image
    
  3. Once you execute the command, Docker will start packaging the image's layers and metadata into the specified tar archive file. The time it takes will depend on the size and complexity of the image.

  4. After the command completes, you should see the tar archive file in the same directory where you executed the command.

  5. You can now transport, share, or store the generated tar archive file as needed. This archive can be loaded onto another system using the docker load command.
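
As a variant, docker save can also write to standard output, which lets you compress the archive on the fly (a sketch; the image and file names are placeholders):

     # Save and compress in one step
     docker save my_image | gzip > my_image_archive.tar.gz

     # On the target machine, decompress and load the image back into Docker
     gunzip -c my_image_archive.tar.gz | docker load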

❄Create a Dockerfile for a simple web application (e.g. a Node.js or Python app)📂

# Use an official Node.js runtime as the base image
FROM node:14

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the container
COPY package*.json ./

# Install application dependencies
RUN npm install

# Copy the rest of the application's source code
COPY . .

# Expose port 3000 for the application
EXPOSE 3000

# Define the command to start the application
CMD ["npm", "start"]

⚡Explanation of the Dockerfile Steps:

  1. FROM node:14: This sets the base image to an official Node.js image with version 14. This provides the runtime environment for your application.

  2. WORKDIR /usr/src/app: This sets the working directory inside the container where the application code will be placed.

  3. COPY package*.json ./: This copies the package.json and package-lock.json files from your local directory to the container's working directory.

  4. RUN npm install: This installs the dependencies specified in the package.json file inside the container.

  5. COPY . .: This copies the rest of your application's source code to the container's working directory.

  6. EXPOSE 3000: This indicates that the application running inside the container will listen on port 3000. Note that this does not publish the port to the host by default; it's a hint for documentation.

  7. CMD ["npm", "start"]: This specifies the command to start your application. In this case, it starts the application using npm start.

🍂Steps to Use the Dockerfile:

  1. Create a file named Dockerfile in your application's root directory.

  2. Copy the Dockerfile content provided above into the Dockerfile.

  3. Open a terminal in the same directory as the Dockerfile.

  4. Build the Docker image using the command: docker build -t my-node-app . (Replace my-node-app with your desired image name).

  5. Run a container using the command: docker run -p 8080:3000 -d my-node-app (This maps port 3000 from the container to port 8080 on your local machine).

  6. Access your Node.js application in a web browser by going to http://localhost:8080.

Remember that you can modify this Dockerfile to suit other programming languages or frameworks, such as Python. The key is to adjust the installation commands and application start commands according to your app's requirements.
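
For instance, a comparable Dockerfile for a Python web app might look like this (a hypothetical sketch, assuming a Flask app served by gunicorn on port 5000, with its entry point in app.py and its dependencies, including flask and gunicorn, listed in requirements.txt):

# Use an official Python runtime as the base image
FROM python:3.11-slim

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy the dependency list and install dependencies first for better layer caching
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application's source code
COPY . .

# Expose port 5000 for the application
EXPOSE 5000

# Start the application with gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]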

❄Build the image using the Dockerfile and run the container🧩.

  1. Begin by creating a Dockerfile in your project directory. The Dockerfile contains the instructions to build the image; for a Node.js web application, you can use the same Dockerfile shown in the previous task.
    
  2. Open a terminal or command prompt on your computer.

  3. Use the cd command to navigate to the directory where your Dockerfile and application code are located.

  4. Run the following command to build the Docker image using the Dockerfile:

     docker build -t my-node-app .
    

    Replace my-node-app with your desired image name.

  5. Once the image is built, you can run a container based on that image. Use the following command:

     docker run -p 8080:3000 -d my-node-app
    

    This command maps port 3000 from the container to port 8080 on your local machine. The -d flag runs the container in detached mode, so it keeps running in the background.

  6. Open a web browser and navigate to http://localhost:8080 to access your Node.js web application running in the Docker container.

  7. If you want to stop and remove the container, you can use the following commands:

     docker stop <container_id>
     docker rm <container_id>
    

    Replace <container_id> with the actual ID of the running container.
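
If you prefer a single step, docker rm -f stops and removes the container in one command (same placeholder for the ID):

     docker rm -f <container_id>   # force-remove: stop the running container, then delete it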

🪐Conclusion🪐:

Docker has ushered in a new era of development and deployment, transforming the way we think about application environments. By embracing Docker's core concepts and tools, developers and DevOps teams can break down silos, accelerate workflows, and ensure consistency from development to production.

Happy Coding!🎀🎀
