Docker 101: A Day in the Life of “It Works on My Machine”


Every developer has faced a situation like this: you build the coolest microservice on your machine, push it to staging, and then hear from QA or Ops that nothing works - dependencies are missing, ports clash, the API doesn’t respond, and the app crashes. Cue the all‑too‑familiar blame game: “Works on my machine!”
Docker is the hero here. In a world of inconsistent environments and heavyweight virtual machines, Docker lets you package your code and its entire environment into a portable, lightweight container. No more “But it ran fine here” - just guaranteed consistency from your machine to production.
In this article, we’ll cover:
What Docker is and why it matters
Core Docker fundamentals
Essential Dockerfile anatomy and commands
Hands‑on Java and Python examples
Common “Why not develop in Docker?” pain points
Next steps: Docker Compose, CI/CD pipelines, security scanning, and Kubernetes
🐳 What Is Docker?
At its core, Docker is a platform for containerization. A container is like a lightweight, self‑contained virtual machine - but instead of bundling a full OS, it shares the host’s kernel and isolates only what your app needs. You write a small text file (the Dockerfile) that describes your environment, build an image from it, and then run containers from that image.
Why Docker?
🔒 Consistency & Isolation
Docker containers bundle your application and all its dependencies, libraries, environment variables, and configurations into a single image. Whether you run it on Windows, macOS, or Linux servers, it ensures identical behavior everywhere.
⚡️ Efficient Resource Usage
You get isolated processes without the overhead of multiple guest operating systems, so you can spin up dozens of containers easily.
📦 Portability & Scalability
Build once, run anywhere. A Docker image can run on your laptop or on a cloud VM. And using orchestrators (Kubernetes, Docker Swarm), you can scale up and scale down instantly based on the load.
🧩 Microservices Ready
Docker makes it simple: package each microservice in its own container, connect them with networks, and manage them independently.
⚙️ Docker Fundamentals
Image vs. Container
Image: A read‑only snapshot of your application and its filesystem (dependencies, config, code).
Container: A running instance of that image, with its own CPU, memory, and network namespace.
Layering & Caching
Each `RUN`, `COPY`, or `ADD` instruction in your Dockerfile creates a new layer. Layers are cached: if nothing changes in a step, Docker reuses the previous layer - dramatically speeding up rebuilds.
Registries & Image Distribution
Registries are public or private hubs where you can pull images or push your own.
Public: Docker Hub (official and community images)
Private: AWS ECR, Google Container Registry, self‑hosted (for internal apps)
Networking Modes
bridge (default): Containers talk over a private virtual subnet and map ports to the host.
host: Container shares the host’s network namespace (no port mapping).
overlay: Virtual networks that span multiple hosts (used by Swarm or Kubernetes) for clustering.
Storage: Volumes & Bind Mounts
Volumes: Managed by Docker, stored outside the container’s writable layer - ideal for persisting databases, logs, etc.
Bind Mounts: Map host directories into containers (e.g. `-v ./src:/app`) for live code edits.
Environment Variables & Secrets
Pass configs at runtime: `docker run -e API_KEY=…`. For sensitive data, use Docker Secrets or external secret stores (Vault, AWS Secrets Manager) instead of baking creds into images.
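On the application side, those runtime values are just environment variables. A minimal Python sketch of reading one with a safe fallback (the `API_KEY` name and the default value are illustrative):

```python
import os

# Configuration injected at runtime, e.g. `docker run -e API_KEY=...`.
# The variable name "API_KEY" and the fallback are illustrative only.
api_key = os.environ.get("API_KEY", "dev-placeholder")

# Log where the value came from, never the value itself.
print("API key source:", "env" if "API_KEY" in os.environ else "default")
```

Reading config this way keeps the image identical across environments: only the injected variables differ between dev, staging, and production.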
Healthchecks & Metadata
In your Dockerfile, add:

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:8080/health || exit 1
```

This allows orchestrators to know when a container is “ready” or needs restarting. Check status with `docker ps`.
🛠 Anatomy of a Dockerfile
The Dockerfile is your formula for building images. Here’s its skeleton:
```dockerfile
# 1) Base image: pick a slim, secure OS/runtime
FROM <base-image>:<tag>

# 2) (Optional) Create a non-root user for security
RUN groupadd -r app && useradd -r -g app app

# 3) Set your working directory
WORKDIR /app

# 4) Copy code or artifacts into the image
COPY . .

# 5) Install or build (if needed)
RUN apk add --no-cache curl

# 6) Document which port your app listens on
EXPOSE <port>

# 7) Define the startup command
ENTRYPOINT ["<executable>", "arg1", "arg2"]
CMD ["optional", "default", "args"]
```
FROM: Your starting point (e.g., `alpine:latest`, `openjdk:17-jdk-slim`, `golang:1.20-alpine`).
WORKDIR & COPY: Place files in the container.
RUN: Execute any command inside the image (e.g., install packages, compile code, or run scripts).
EXPOSE: Documents the port that the application listens on.
ENTRYPOINT/CMD: Define the container’s startup command.
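The interplay between ENTRYPOINT and CMD trips up many newcomers: ENTRYPOINT stays fixed, while CMD supplies default arguments that `docker run` can override. A quick illustrative sketch (not one of the article’s examples):

```dockerfile
FROM alpine:latest

# ENTRYPOINT is the fixed executable; CMD holds default arguments.
ENTRYPOINT ["echo"]
CMD ["hello from the default CMD"]

# docker run <image>           -> runs: echo "hello from the default CMD"
# docker run <image> override  -> runs: echo "override" (CMD is replaced)
```

This pattern is why the Java example below uses ENTRYPOINT for `java -jar`: the executable never changes, even if you later want to pass extra arguments at run time.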
💡 Essential Docker Commands
| Command | Purpose |
| --- | --- |
| `docker build -t myapp:1.0 .` | Build an image named `myapp:1.0` from the Dockerfile in `.` |
| `docker images` | List all images on your machine |
| `docker run -d -p 8080:8080 myapp:1.0` | Run the container detached, mapping host port 8080 → container 8080 |
| `docker ps` | List running containers |
| `docker logs <container>` | Show logs of a running container |
| `docker exec -it <container> /bin/sh` | Open a shell inside a live container |
| `docker stop <container>` | Stop a running container |
| `docker rm <container>` | Remove a stopped container |
| `docker rmi <image>` | Delete an image from local storage |
| `docker push myrepo/myapp:1.0` | Push your image to a registry |
⛓️ Docker Lifecycle
Install Docker Engine
Install Docker using the official guide - https://docs.docker.com/engine/install/.
Verify with `docker version` and `docker info`.
Configure & Start the Daemon
On Linux: enable and start the `docker` service (`systemctl enable --now docker`).
On Mac/Windows: launch Docker Desktop.
Create Docker Image
Write a `Dockerfile`, then build with `docker build -t myapp:latest .`
Manage Your Images
List (`docker images`), tag (`docker tag`), and remove (`docker rmi`) images locally.
Push/pull to a registry (`docker push` / `docker pull`).
Run Containers
Create a container from an image: `docker run --name myapp -d -p 80:80 myapp:latest`
Inspect (`docker ps`, `docker logs`, `docker inspect`).
Interact (`docker exec -it myapp /bin/sh`).
Lifecycle Commands
Start/Stop/Restart: `docker start myapp` / `docker stop myapp` / `docker restart myapp`.
Pause/Unpause: `docker pause myapp` / `docker unpause myapp`.
Remove: `docker rm myapp`; clean up unused images with `docker image prune`.
Update & Iterate
Modify your app or `Dockerfile`.
Rebuild (`docker build`) and redeploy (`docker stop` + `docker rm` + `docker run`).
Use version tags (e.g. `myapp:v1.1`) to keep track of releases.
Cleanup & Maintenance
Remove dangling images, stopped containers, and unused volumes: `docker system prune --all --volumes`
Monitor disk usage with `docker system df`.
☕ Example 1: Java Spring Boot JAR
Project Setup: Build the jar
```java
@RestController
public class HelloController {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot in Docker!";
    }
}
```
Write the Dockerfile
```dockerfile
# Base image
FROM eclipse-temurin:17-jre-alpine

# Set your working directory
WORKDIR /app

# Copy the built artifact into the image
COPY target/docker-demo-0.0.1-SNAPSHOT.jar app.jar

# Document which port your app listens on
EXPOSE 8080

# Start your app
ENTRYPOINT ["java", "-jar", "app.jar"]
```
Build & Run
```shell
docker build -t docker-demo-java:1.0 .
docker run -d -p 8080:8080 docker-demo-java:1.0
```
Verify
```shell
curl http://localhost:8080/hello
```
💻 Example 2: Python Service
Project Setup: Build the Python service
```python
# app.py
from flask import Flask

app = Flask(__name__)

@app.route("/py-sample")
def hello():
    return "Hello from Python in Docker!"

if __name__ == "__main__":
    # Listen on all interfaces so Docker can route traffic
    app.run(host="0.0.0.0", port=8080)
```
```text
# requirements.txt
Flask==2.3.2
```
Write the Dockerfile
```dockerfile
# Base image: pin the Python language version
FROM python:3.11-slim

WORKDIR /app

# Add requirements.txt and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy all files
COPY . .

# Document which port your app listens on
EXPOSE 8080

# Start your app
CMD ["python", "app.py"]
```
Build & Run
```shell
docker build -t docker-demo-python:1.0 .
docker run -d -p 8080:8080 docker-demo-python:1.0
```
Verify:
```shell
curl http://localhost:8080/py-sample
```
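One refinement worth adding to either example (my suggestion, not part of the builds above): a `.dockerignore` file keeps the build context small and stops `COPY . .` from pulling local clutter into the image. A sketch with illustrative entries:

```text
# .dockerignore - keep the build context lean (entries are illustrative)
.git
__pycache__/
*.pyc
venv/
target/
```

Smaller contexts upload faster to the daemon and avoid accidentally baking secrets or virtualenvs into layers.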
🤷‍♂️ Why Don’t We Always Develop in Docker?
Every Docker tutorial starts by showing how Docker enforces the same environment everywhere. Yet many teams still run code natively on their laptops and only switch to containers at deploy time. Here’s why:
Speed & Iteration - Rebuilding containers for every code change can be slower than native builds.
Tooling & Debugging - IDEs, language servers, debuggers, hot‑reload engines, and system profilers tend to integrate more seamlessly with a local install. Container filesystems and permissions can complicate the process.
Resource Constraints - If you’re already juggling JVMs, local databases, browser instances, and heavy IDEs, adding Docker’s overlay filesystem and extra processes can push older machines to their limits.
But Docker still matters!
Even if you skip it on your dev box, Docker buys you consistency and reproducibility in critical environments:
Rolling back & forensics - When a container crashes in production, you can pull that very image locally (with all its dependencies baked in) and reproduce the issue.
Onboarding & documentation - New engineers can get everything running with a single `docker-compose up` rather than working through fifty manual install steps.
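To give a taste of what that one-command onboarding looks like, here is a minimal `docker-compose.yml` sketch (the service names and images are hypothetical, not from this article):

```yaml
# docker-compose.yml - hypothetical two-service stack
services:
  app:
    build: .              # build from the Dockerfile in this directory
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db         # containers reach each other by service name
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume persists data
volumes:
  db-data:
```

A new hire runs `docker-compose up`, and the app plus its database come up wired together - no manual installs.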
🗺️ Your Next Adventure
Congratulations - you’ve containerized a simple app and mastered the basics of Docker! Here are a few next steps:
Docker Compose - Managing multiple `docker run` commands for services like databases and caches can get messy. Docker Compose lets you define multi‑container stacks with a single `docker-compose.yml` file.
CI/CD Pipelines (e.g., GitHub Actions) - Automating your workflow ensures that every commit builds your image, runs tests, and pushes to your registry.
Security Scanning (Trivy) - Scanning with tools like Trivy helps you spot and fix risks before they reach production.
Non‑Root Users - Creating and using a dedicated non‑root user in your Dockerfile reduces the blast radius if an attacker escapes the container.
Kubernetes (Orchestrators) - For production workloads, you need automatic scaling, self‑healing, rolling updates, and service discovery. Kubernetes (or Docker Swarm) takes your individual containers and manages them as a resilient, clustered application.
Conclusion
Docker turns the “it works here” problem into “it works everywhere.” By packaging your apps in containers, you gain consistency, efficiency, and scalability - whether you’re running a simple script or an enterprise microservices architecture.
Comment below with your favorite Docker tip, share your Docker journey, and link your sample repos 🚀
Written by

Rahul R
🚀 Software Development Engineer | 🛠️ Microservices Enthusiast | 🤖 AI & ML Explorer As a founding engineer at a fast-paced startup, I’ve been building the future of scalable APIs and microservices - From turning complex workflows into simple solutions to optimizing performance. Let’s dive into the world of APIs and tech innovation, one post at a time! 🌟 👋 Let’s connect on LinkedIn https://www.linkedin.com/in/rahul-r-raghunathan/