🚀 Day 15 of 30 Days DevOps Interview Preparation

Series: 30 Days DevOps Interview Preparation
Author: Tathagat Gaikwad
Docker Fundamentals: Install • Run Containers • Work with Images (with AWS Hands-On)
Docker is a must-have skill for DevOps, Cloud, and SRE roles. Today we’ll master:
Installing Docker (locally & on AWS EC2)
Running containers confidently
Building, tagging, pushing, and pulling images
Best practices (size, security, and performance)
20 interview questions with detailed answers
🧠 Theory First: What Docker Really Does
Containers vs VMs
VM: Full OS per app; heavy; slow to boot.
Container: Shares host kernel; lightweight; starts in seconds.
Image → Container
Image: A read-only blueprint (layers) of your app.
Container: A running instance of an image (ephemeral unless you use volumes).
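A quick way to see the difference is to start two containers from the same image; here is a small sketch using the public nginx image:
docker pull nginx:stable
docker run -d --name web1 -p 8080:80 nginx:stable
docker run -d --name web2 -p 8081:80 nginx:stable
docker ps    # one image, two independent containers
docker rm -f web1 web2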
Layers & Union FS
Docker images are built in layers. Each RUN/COPY/ADD instruction adds a layer. Layer caching speeds up builds if earlier layers don’t change.
A .dockerignore file helps keep images small and the cache efficient.
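For example, a minimal .dockerignore for a typical project might look like this (adjust to your stack):
.git
node_modules
__pycache__/
*.log
.env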
Registries
Public: Docker Hub, GitHub Container Registry.
Private/Cloud: Amazon ECR, GCR, ACR.
You tag images (e.g., myapp:1.0, myapp:latest) and push/pull them.
ENTRYPOINT vs CMD
ENTRYPOINT: the default executable. CMD: the default arguments (or the command if no ENTRYPOINT is set).
Tip: combine ENTRYPOINT ["executable"] with CMD ["arg1","arg2"].
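A tiny sketch to make the split concrete (hypothetical image name entry-demo):
# Dockerfile
FROM alpine:3.19
ENTRYPOINT ["echo", "Hello from"]
CMD ["the default CMD"]
# Build and run
docker build -t entry-demo .
docker run entry-demo            # prints: Hello from the default CMD
docker run entry-demo Docker     # prints: Hello from Docker (args replace CMD, not ENTRYPOINT)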
Networking
Default bridge network for isolated containers.
host network maps container directly to host stack (Linux only).
User-defined bridge lets containers resolve each other by name.
Volumes & Bind Mounts
Volumes (managed by Docker) are best for persistence.
Bind mounts map a host folder to a container path (great for local dev).
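Named volumes get their own hands-on later in this post; as a bind-mount example, serving a local folder through NGINX (assuming an html/ directory exists in your current path):
docker run -d -p 8080:80 -v "$(pwd)/html":/usr/share/nginx/html:ro nginx:stable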
Healthchecks & Resource Limits
HEALTHCHECK auto-detects unhealthy containers. --cpus, --memory, and --pids-limit prevent noisy neighbors.
Security Basics
Don’t run as root → create a non-root user in Dockerfile.
Scan images (e.g., Trivy), pin base images, keep them updated.
Prefer minimal/distroless images when possible.
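As a quick check, scanning an image with Trivy (assuming Trivy is installed on the host) looks like this:
trivy image nginx:stable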
☁️ AWS Hands-On: Docker on EC2 + Push to Amazon ECR
We’ll use Ubuntu or Amazon Linux 2023 on an EC2 instance. Security Group: open 22 (SSH) and 80/8000 for HTTP testing.
1) Launch EC2 (t3.micro is fine)
AMI: Ubuntu 22.04 or Amazon Linux 2023
SG inbound: 22, 80, 8000 (TCP) from your IP (or 0.0.0.0/0 for a quick lab only); an equivalent AWS CLI launch is sketched below
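If you prefer the CLI to the console, a rough equivalent is below; the AMI ID, key pair, and security group ID are placeholders you must replace:
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t3.micro \
  --key-name my-keypair \
  --security-group-ids sg-xxxxxxxxxxxxxxxxx \
  --region <your-region>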
2) Install Docker Engine
Ubuntu 22.04
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
newgrp docker
docker version
Amazon Linux 2023
sudo dnf update -y
sudo dnf install -y docker
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user
newgrp docker
docker version
If you see “permission denied” on /var/run/docker.sock, re-run newgrp docker or log out and back in.
3) Sanity Checks: Hello World + NGINX
docker run --rm hello-world
docker run -d --name web -p 80:80 nginx:stable
curl -I localhost
Open your EC2 Public IP in a browser → you should see the NGINX page.
🧪 Build Your First App Image (Flask example)
Directory structure
flask-demo/
├─ app.py
├─ requirements.txt
└─ Dockerfile
app.py
from flask import Flask

app = Flask(__name__)

@app.get("/")
def home():
    return "Hello from Flask on Docker!"

@app.get("/health")
def health():
    return {"status": "ok"}
requirements.txt
flask
gunicorn
Dockerfile (small & production-friendly)
FROM python:3.11-slim
# Security: create non-root user
RUN useradd -m appuser
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
# Healthcheck & non-root
HEALTHCHECK CMD curl -f http://localhost:8000/health || exit 1
USER appuser
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]
Build & run
docker build -t flask-demo:1.0 .
docker run -d --name flask-demo -p 8000:8000 --restart unless-stopped flask-demo:1.0
curl localhost:8000
🏷️ Tag, Login, Push to Amazon ECR
1) Create an ECR repository
aws ecr create-repository --repository-name flask-demo --region <your-region>
Note the output repositoryUri: <account_id>.dkr.ecr.<region>.amazonaws.com/flask-demo
2) Authenticate Docker to ECR
aws ecr get-login-password --region <your-region> \
| docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com
3) Tag and push
docker tag flask-demo:1.0 <account_id>.dkr.ecr.<region>.amazonaws.com/flask-demo:1.0
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/flask-demo:1.0
4) (Optional) Pull & run from ECR on any EC2
docker pull <account_id>.dkr.ecr.<region>.amazonaws.com/flask-demo:1.0
docker run -d -p 8000:8000 --name flask-demo --restart unless-stopped <account_id>.dkr.ecr.<region>.amazonaws.com/flask-demo:1.0
🧰 Handy Extras
Create & use a named volume
docker volume create appdata
docker run -d -p 8000:8000 -v appdata:/data flask-demo:1.0
User-defined bridge network
docker network create app-net
docker run -d --name redis --network app-net redis:7
docker run -d --name api --network app-net -p 8000:8000 flask-demo:1.0
# inside 'api' you can refer to 'redis' by name
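To verify the built-in DNS, one option (the flask-demo image built above ships Python, so this is a reasonable sketch) is:
docker exec api python -c "import socket; print(socket.gethostbyname('redis'))"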
Resource limits & restart policies
docker run -d --name api -p 8000:8000 --cpus=1 --memory=512m --restart unless-stopped flask-demo:1.0
Inspect, logs, exec
docker ps
docker logs -f flask-demo
docker exec -it flask-demo sh
docker inspect flask-demo | jq '.[0].NetworkSettings'
Cleanup
docker stop flask-demo web || true
docker rm flask-demo web || true
docker image prune -f
docker volume prune -f
🧪 Bonus: Multi-Stage Build (Node.js example)
Dockerfile
# Builder
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --ignore-scripts
COPY . .
RUN npm run build
# Runtime (tiny)
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev --ignore-scripts
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
Multi-stage builds shrink images, improve security, and speed up pulls.
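If you want to inspect or debug only the first stage, you can build up to it by name (the stage is called build above; node-demo is a hypothetical tag):
docker build --target build -t node-demo:builder .
docker build -t node-demo:1.0 .
docker images | grep node-demo     # compare builder vs final image size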
🎯 20 Docker Interview Questions (Detailed Answers)
1) Image vs Container?
Image is an immutable blueprint; Container is a runtime instance. Multiple containers can run from one image.
2) What happens when you run docker run hello-world?
Docker checks the local cache → pulls from Docker Hub if missing → creates a container → prints a success message and exits.
3) Why use .dockerignore?
To exclude files (e.g., .git, node_modules, logs) from the build context, speeding up builds and reducing image size.
4) ENTRYPOINT vs CMD?
ENTRYPOINT defines the executable; CMD supplies default args. If you provide args to docker run, they override CMD, not ENTRYPOINT.
5) How do you persist data in containers?
Use volumes (-v volname:/path) for durability and portability; bind mounts only for local dev or specific host needs.
6) How to reduce image size?
Use slim/minimal base images and apk/apt no-cache flags, remove build deps, use multi-stage builds, and avoid copying unnecessary files.
7) How do you run a container in the background and map ports?
docker run -d -p 80:80 nginx:stable runs detached and exposes NGINX on host port 80.
8) What is an image tag and why is it important?
Tags (e.g., 1.2.3, latest) version your images. Pin versions in production to avoid surprise upgrades.
9) How do you push images to a private registry like ECR?
Log in with aws ecr get-login-password | docker login, tag with the repository URI, then docker push.
10) What are Docker networks and why use user-defined ones?
Networks let containers talk securely. User-defined bridges give automatic DNS between containers and better isolation.
11) How do you limit resources for noisy containers?
Use --cpus, --memory, and --pids-limit to protect host stability.
12) How do you configure health checks?
HEALTHCHECK CMD curl -f http://localhost/health || exit 1, so orchestrators can restart unhealthy containers.
13) How do you troubleshoot “Cannot connect to the Docker daemon”?
Ensure the Docker service is running, check socket permissions, add your user to the docker group, and confirm the host’s cgroups/driver status.
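Typical first checks on a systemd-based host might be:
sudo systemctl status docker
sudo systemctl start docker
ls -l /var/run/docker.sock
sudo usermod -aG docker $USER && newgrp docker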
14) How do you exec into a running container?
docker exec -it <container> sh (or bash if available).
15) How does caching work in Docker builds?
Each layer is cached; changing an early instruction (like COPY package.json) invalidates all subsequent layers. Order matters.
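A common Dockerfile ordering that exploits the cache (a sketch for a Node app):
COPY package*.json ./
RUN npm ci            # re-runs only when the package files change
COPY . .              # source edits no longer invalidate the dependency layer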
16) How do you handle secrets?
Never bake secrets into images. Use environment variables from a secret manager, or Docker/Swarm/K8s secrets.
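For example, injecting configuration at runtime instead of build time (app.env is a hypothetical file kept out of the image and out of git):
docker run -d --env-file app.env flask-demo:1.0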
17) Difference between docker stop and docker kill?
stop sends SIGTERM, then SIGKILL after a grace period (graceful). kill sends SIGKILL immediately (forceful).
18) When would you use the host network?
For low-latency or network-intensive workloads on Linux when you can tolerate fewer isolation guarantees.
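For example (Linux only; the container binds straight to the host’s ports, so no -p mapping is needed):
docker run -d --network host nginx:stable
curl -I localhost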
19) How do you keep containers running after SSH disconnect?
Use -d (detached), or run under systemd or a process supervisor. Avoid running foreground processes in screen/tmux for prod.
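A minimal systemd unit sketch (hypothetical path /etc/systemd/system/flask-demo.service, reusing the flask-demo image from above):
[Unit]
Description=Flask demo container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f flask-demo
ExecStart=/usr/bin/docker run --rm --name flask-demo -p 8000:8000 flask-demo:1.0
ExecStop=/usr/bin/docker stop flask-demo
Restart=always

[Install]
WantedBy=multi-user.target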
20) How do you scan Docker images?
Use tools like Trivy/Grype or ECR scanning in CI to detect vulnerabilities. Fail builds on high/critical CVEs.
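For example, a CI step with Trivy that fails the build on serious findings (assuming Trivy is available in the pipeline):
trivy image --severity HIGH,CRITICAL --exit-code 1 flask-demo:1.0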
✅ Key Takeaways
Docker images are layered; containers are fast, isolated runtime instances.
On AWS, install Docker on EC2, push images to ECR, and run anywhere.
Keep images small, secure, and versioned; use health checks and resource limits.
Practice the commands until they’re muscle memory.