Docker Basics: How to Begin Your Journey


Welcome back to my blog, where I document my cloud journey: the highs, the lows, and everything in between. Today, I will explain what Docker, containers, and images are and how to set them up. We will also build our first container, write a Dockerfile, create our first image, and publish that image on Docker Hub.

WHAT IS DOCKER

Docker is an open platform for developing, shipping, and running applications. It enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure the same way you manage your applications, and you can significantly reduce the delay between writing code and running it in production. Its portability and lightweight nature make it easy to dynamically manage workloads, scaling applications and services up or tearing them down as business needs dictate, in near real time. Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security let you run many containers simultaneously on a given host.

WHAT IS A CONTAINER

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. When a container is removed, any changes to its state that aren't stored in persistent storage disappear.

Containers are lightweight and contain everything needed to run the application, so you don't need to rely on what's installed on the host. You can share containers while you work and be sure that everyone you share with gets the same container that works in the same way. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
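To make the lifecycle concrete, here is a minimal sketch of creating, inspecting, stopping, and removing a container with the Docker CLI; mycontainer is just a hypothetical name and nginx is the public web server image used as an example.

    # start a detached container named mycontainer from the nginx image
    docker run -d --name mycontainer nginx

    # list running containers
    docker ps

    # stop the container, then remove it (unsaved changes inside it are lost)
    docker stop mycontainer
    docker rm mycontainer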

Features of a container

  • Containers are portable. They can run anywhere! The container that runs on your development machine will work the same way in a data center or anywhere in the cloud!

  • Containers are self-contained. Each container has everything it needs to function, with no reliance on pre-installed dependencies on the host machine.

  • Containers are run in isolation, so they have minimal influence on the host and other containers, increasing the security of your applications.

  • Each container is independently managed. Deleting one container won't affect any others.

WHAT IS A CONTAINER IMAGE

Since a container is an isolated process, it gets its files and configuration from a container image. A container image is a standardized package that includes all of the files, binaries, libraries, and configurations needed to run a container. An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.

To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.

Principles of images

  1. Images are immutable. Once an image is created, it can't be modified. You can only make a new image or add changes on top of it.

  2. Container images are composed of layers. Each layer represents a set of file system changes that add, remove, or modify files.
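To illustrate both principles, here is a small hypothetical Dockerfile that doesn't modify the nginx image at all; it simply stacks one extra layer on top of it (index.html is an assumed file in your build context).

    # reuse the existing, unchanged nginx image as the base
    FROM nginx

    # this instruction adds a single new layer containing index.html
    COPY index.html /usr/share/nginx/html/index.html

If you later edit only index.html and rebuild, the cached base layers are reused and just the COPY layer is rebuilt, which is the layer caching behaviour mentioned above.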

For this tutorial, I installed Docker Engine on Ubuntu. To do this, follow the Install Docker Engine on Ubuntu guide in the Docker Docs at https://docs.docker.com/engine/install/ubuntu/.

  1. Uninstall any old versions or conflicting packages by running:

    for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

  2. Set up Docker's apt repository with the following commands:

    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc

    # Add the repository to Apt sources:
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update

  3. Install the latest version of the Docker packages:

    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

  4. Verify that the installation was successful by running the hello-world image:

    sudo docker run hello-world

You have now successfully installed and started Docker Engine.
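If you'd rather not prefix every Docker command with sudo, a commonly used optional post-install step is to add your user to the docker group (log out and back in afterwards for it to take effect); note that membership in this group grants root-level privileges on the host.

    # create the docker group if it doesn't already exist, then add your user to it
    sudo groupadd docker
    sudo usermod -aG docker $USER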

Creating a container

You can create a container in different ways:

a. Create a basic container.

b. Create a container with a specific name.

c. Create a container with port mapping.

d. Create a container with volume mounting, and so on.

I created my container with a specific name. To do so, you run docker run -d --name <container name> nginx; I ran docker run -d --name entreecontainer nginx. The other variants from the list above are sketched below.
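Here is a hedged sketch of those other variants, again using the public nginx image; the container names, port, and host path are hypothetical.

    # a. basic container (Docker assigns a random name)
    docker run -d nginx

    # c. port mapping: forward host port 8080 to port 80 inside the container
    docker run -d --name webcontainer -p 8080:80 nginx

    # d. volume mounting: share a host directory with the container
    docker run -d --name volcontainer -v "$(pwd)/site":/usr/share/nginx/html nginx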

Building a Dockerfile

To create a Dockerfile, I made a directory with mkdir demo, changed into it with cd demo, and then ran vim dockerfile to start building my Dockerfile.

Below is the structure of what I used to build my dockerfile.

Base image: FROM ubuntu

Set working directory: WORKDIR /app

Copy package files: COPY . /app

Copy application code: COPY . .

Maintainer information: LABEL maintainer=<email address>

Command to run: CMD ["echo", "welcome"]
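Assembled in order, the Dockerfile looks roughly like this; the email address is a placeholder, and since WORKDIR /app is already set, COPY . /app and COPY . . copy to the same place, so a single COPY is enough.

    # base image
    FROM ubuntu

    # maintainer information (placeholder address)
    LABEL maintainer="you@example.com"

    # set the working directory inside the image
    WORKDIR /app

    # copy the build context into /app (equivalent to COPY . /app here)
    COPY . .

    # command to run when a container starts from this image
    CMD ["echo", "welcome"]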

Building a Docker image from a Dockerfile

After building our Dockerfile, we need to build an image from it, and to do that, we run the command docker build -t <image name> . (the trailing dot is the build context). Next, I want to push the image I just created to Docker Hub, and to log in from Ubuntu, I ran the command docker login --username <my username>. If you do not have an account, you can sign up at https://hub.docker.com/ and click on Create Repository.

Next, I tagged my image to the repository I created on Docker Hub using the command docker tag <image name> <username>/<repository name>:latest

Pushing your image to Docker Hub

After tagging the image, I pushed it to Docker Hub by running docker push <username>/<repository name>
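End to end, the build, tag, and push flow looks like the sketch below; demoimage is a hypothetical local image name, and the username and repository name are placeholders.

    # build the image from the Dockerfile in the current directory
    docker build -t demoimage .

    # log in to Docker Hub
    docker login --username <my username>

    # tag the local image against the Docker Hub repository
    docker tag demoimage <my username>/<repository name>:latest

    # push the tagged image to Docker Hub
    docker push <my username>/<repository name>:latest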

Finally, I created a container with port mapping using the image I had created by typing docker run -d --name <container name> -p 8080:80 <image name>.
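Because the image built above only echoes "welcome" and exits rather than running a web server, a container started from it will stop right after printing. You can confirm what happened with the commands below; <container name> is whatever name you chose.

    # list all containers, including ones that have already exited
    docker ps -a

    # show the output the container printed (it should be "welcome")
    docker logs <container name>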

Like, share, comment and stay tuned for my next post.

