How Docker Enhances Everyday Work Processes

Sanket Bhalke

Introduction

"Containers That Empower Your Code: Discover Docker for Everyday DevOps!"

We’ve all heard it before: Docker is the secret weapon that makes apps run consistently from your laptop to production, eliminating those frustrating “it works on my machine” moments. But can Docker simplify day-to-day work beyond just deployments? Absolutely!

Docker has become essential for DevOps, not only for its efficiency and portability but also for how it allows us to create isolated environments on demand. Imagine needing a tool that’s hard to install on your current OS, or quickly setting up a specific environment to test a project — with Docker, you can start a container in seconds, run your tool or app, and get to work without the hassle of installations or compatibility issues.

In this edition, let’s explore practical Docker tricks that could make your daily tasks faster and smoother. From building containers for quick experiments to running complex tools that won’t install easily, Docker is here to simplify even the small stuff. Whether you're starting out or just looking for fresh ideas, these tips will help you unlock Docker’s full potential in your everyday workflows!

Docker Containers for Daily Use: From Productivity to Play

1. Running Firefox in a Container:

Running Firefox in a Docker container provides several benefits, such as better isolation from the host system and preventing dependency conflicts with other applications. This setup allows for easy customization and quick deployment of pre-configured environments. With Docker, users can easily create multiple instances for testing while keeping security intact through sandboxing. Additionally, integrating Firefox into CI/CD pipelines simplifies automated testing and environment replication, making it a great solution for developers and teams aiming to improve their workflows.

docker run -d \
  --name=firefox \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -p 3000:3000 \
  -p 3001:3001 \
  --shm-size="2g" \
  lscr.io/linuxserver/firefox:latest

Breakdown of the Command

  • docker run: This command is used to create and start a new container from an image.

  • -d: This flag runs the container in detached mode, meaning it runs in the background.

  • --name=firefox: This option assigns the name "firefox" to the container, making it easier to manage and reference later (e.g., for stopping or removing the container).

  • -e PUID=1000: This environment variable sets the user ID (UID) for the process running inside the container. Setting this to 1000 is common for non-root users on many Linux distributions.

  • -e PGID=1000: Similar to PUID, this sets the group ID (GID) for the process inside the container. This allows the container's user to have the same permissions as the host user.

  • -e TZ=America/New_York: This sets the timezone for the container. Adjusting the timezone can help with time-related functions and logging.

  • -p 3000:3000: This maps port 3000 of the host machine to port 3000 of the container. This means any traffic sent to the host's port 3000 will be directed to the container's port 3000.

  • -p 3001:3001: Similar to the previous port mapping, this maps port 3001 of the host to port 3001 of the container.

  • --shm-size="2g": This option sets the size of the shared memory (shm) available to the container to 2 gigabytes. Increasing the shm size can help with applications that require more memory for shared memory segments (like graphical applications).

  • lscr.io/linuxserver/firefox:latest: This specifies the Docker image to use for creating the container. In this case, it pulls the latest version of the Firefox image from the LinuxServer.io repository. If the image is not present locally, Docker will download it from the specified repository.

Navigate to http://localhost:3000 in your browser to start using Firefox.
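Once the container is up, the usual Docker lifecycle commands apply. A quick sketch, assuming the container was started with --name=firefox as above:

```shell
# Confirm the container is running and check its logs
docker ps --filter name=firefox
docker logs firefox

# Stop and remove the container when you are done
docker stop firefox
docker rm firefox

# Pull the newest image before the next run
docker pull lscr.io/linuxserver/firefox:latest
```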

You can persist data by adding a volume when running the Firefox container:

  -v /path/to/config:/config

Benefits of mounting a volume:

  • Data Persistence: Any configuration or profile data stored in the volume stays safe, even if the Firefox container is deleted or recreated.

  • Profile Management: Easily manage and back up your Firefox profiles from the host, allowing for quick recovery or moving to other containers.

  • Isolation: Keeps your Firefox data separate from the container, improving security and making management easier.

  • Customization: Change configuration files directly on the host to adjust Firefox settings without needing to access the container.

  • Easy Upgrades: When updating or recreating the Firefox container, your settings and profiles remain unchanged in the mounted volume.
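Putting it together, the run command from above with a persistent profile volume might look like the sketch below. The host path ~/firefox-config is an arbitrary example; any writable directory on the host works:

```shell
docker run -d \
  --name=firefox \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -p 3000:3000 \
  -p 3001:3001 \
  -v ~/firefox-config:/config \
  --shm-size="2g" \
  lscr.io/linuxserver/firefox:latest

# Back up the profile from the host at any time
tar -czf firefox-config-backup.tar.gz -C ~/firefox-config .
```

Because the profile lives on the host, you can delete and recreate the container freely and pick up exactly where you left off.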

2. Running Obsidian in a Container:

This Docker command sets up the note-taking app Obsidian using the LinuxServer.io image. It creates a portable and isolated workspace to keep your notes secure. User and group IDs ensure correct file permissions, while shared memory size is set for optimal performance. Ports 3000 and 3001 are mapped for easy access, and the container automatically restarts unless you stop it manually. This setup is ideal for maintaining a clean development environment while enjoying Obsidian's powerful features!

docker run -d \
  --name=obsidian \
  --security-opt seccomp=unconfined \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 3000:3000 \
  -p 3001:3001 \
  --shm-size="1gb" \
  --restart unless-stopped \
  lscr.io/linuxserver/obsidian:latest

(The --security-opt flag is optional; see the breakdown below. Note that inline comments cannot follow a trailing backslash in a shell command, as they would break the line continuation.)

Breakdown of the Command

  • -d: Runs the container in detached mode, meaning it runs in the background.

  • --name=obsidian: Names the container "obsidian" for easy reference.

  • --security-opt seccomp=unconfined: (Optional) Allows the container to run with fewer security restrictions, which may be necessary for some applications but can expose the system to risks.

  • -e PUID=1000: Sets the user ID for the application to match the user on the host, ensuring proper file permissions.

  • -e PGID=1000: Sets the group ID for the application to match the group on the host, ensuring proper file permissions.

  • -e TZ=Etc/UTC: Sets the time zone for the container to UTC.

  • -p 3000:3000: Maps port 3000 on the host to port 3000 on the container, allowing access to the application via this port.

  • -p 3001:3001: Maps port 3001 on the host to port 3001 on the container.

  • --shm-size="1gb": Sets the shared memory size for the container to 1 GB, which can be important for applications that use a lot of shared memory (like GUI applications).

  • --restart unless-stopped: Configures the container to automatically restart unless it was manually stopped.

  • lscr.io/linuxserver/obsidian:latest: Specifies the Docker image to use.
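To keep your notes outside the container, you can add a volume here too. This is a sketch assuming the image follows the usual LinuxServer.io convention of storing its state under /config; the host path ~/obsidian-data is an arbitrary example:

```shell
docker run -d \
  --name=obsidian \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 3000:3000 \
  -p 3001:3001 \
  -v ~/obsidian-data:/config \
  --shm-size="1gb" \
  --restart unless-stopped \
  lscr.io/linuxserver/obsidian:latest
```

With this in place, recreating or upgrading the container does not touch your vault data.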

3. Running Fabric AI in a Container:

Here’s what Fabric AI can do:

  • Ask questions: Get information from files, documents, or screenshots without opening them.

  • Search: Find relevant items from connected data.

  • Extract insights: Turn content into actionable insights and summaries.

  • Explore: Suggest similar items from connected data.

  • Create custom prompts: Address specific problems.

  • Integrate with other apps: Connect your favorite apps and cloud drives.

  • Use patterns: Apply specific use cases for AI tasks.

  • Interact with Fabric: Use command line, GUI, or voice commands.

  • Integrate with note-taking apps: Connect with note-taking applications like Obsidian.

Learn more about Fabric AI on its GitHub repository. Below is the Dockerfile we will use; you can modify the configuration in it as needed.

# Use an official Go image with the correct version
FROM golang:1.23-alpine

# Set the working directory inside the container
WORKDIR /app

# Install necessary packages (e.g., Git)
RUN apk add --no-cache git

# Clone the Fabric repository
RUN git clone https://github.com/danielmiessler/fabric.git .

# Install Fabric using Go
RUN go install github.com/danielmiessler/fabric@latest

# Set environment variables for Go paths
ENV GOROOT=/usr/local/go
ENV GOPATH=/root/go
ENV PATH=$GOPATH/bin:$GOROOT/bin:/root/.local/bin:/usr/local/bin:$PATH

# Set the default command to run Fabric
CMD ["fabric", "-h"]

Building the Image:

# cd to the directory containing the Dockerfile before running this command

docker build -t image-name .

Run the image with the following command, which opens a shell inside the container so you can invoke fabric interactively:

docker run --rm -it image-name sh

Note: To run Fabric AI, you need an API key for at least one AI provider (e.g., OpenAI, Gemini). For a free option, you can run Ollama locally.
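As a sketch of how you might pass a provider key into the container, assuming your key is exported as OPENAI_API_KEY on the host (adjust the variable name for your provider, and note the exact fabric flags may differ by version):

```shell
# Run fabric's interactive setup with the key passed from the host
docker run --rm -it \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  image-name fabric --setup

# Pipe text through a fabric pattern once setup is done
echo "some long text to summarize" | docker run --rm -i \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  image-name fabric --pattern summarize
```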

4. Running Kali Linux in a Container:

This Docker command starts a Kali Linux container from LinuxServer.io, which is great for penetration testing. It runs in detached mode, allowing it to operate in the background while keeping user permissions through environment variables. Port mappings make it easy to access services, and options like device access and shared memory boost performance, making it ideal for security professionals.

docker run -d \
  --name=kali-linux \
  --security-opt seccomp=unconfined \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e SUBFOLDER=/ \
  -e TITLE="Kali Linux" \
  -p 3000:3000 \
  -p 3001:3001 \
  --device /dev/dri:/dev/dri \
  --shm-size="1gb" \
  --restart unless-stopped \
  lscr.io/linuxserver/kali-linux:latest

(The --security-opt, SUBFOLDER, TITLE, --device, and --shm-size options are optional; see the breakdown below.)

Breakdown of the Command

  • -d: Run the container in detached mode (in the background).

  • --name=kali-linux: Assign a name to the container for easier management.

  • --security-opt seccomp=unconfined: Optionally, this allows the container to run without the default seccomp security profile.

  • Environment Variables:

    • -e PUID=1000: Sets the user ID for permissions.

    • -e PGID=1000: Sets the group ID for permissions.

    • -e TZ=Etc/UTC: Sets the time zone for the container.

    • -e SUBFOLDER=/: Optional environment variable that could specify a subfolder if required.

    • -e TITLE="Kali Linux": Optional title for the container.

  • Port Mapping:

    • -p 3000:3000: Maps port 3000 of the host to port 3000 of the container.

    • -p 3001:3001: Maps port 3001 of the host to port 3001 of the container.

  • Device Access:

    • --device /dev/dri:/dev/dri: Grants access to the Direct Rendering Infrastructure (DRI), useful for graphics acceleration.

  • --shm-size="1gb": Sets the size of shared memory (1 GB in this case).

  • --restart unless-stopped: Automatically restarts the container unless it has been stopped manually.

  • lscr.io/linuxserver/kali-linux:latest: Specifies the image to use for the container.

Let’s learn how to create a Docker network and connect the running Kali Linux container to it. To easily create the network using Docker, use the command below:

docker network create network-name

docker network connect network-name kali-linux

To create an internal network for isolation purposes, you can use the --internal flag (e.g., docker network create --internal network-name).
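For example, the following sketch keeps two containers able to talk to each other while cutting them off from the outside world. The network and container names here are illustrative, and the test assumes the Kali container from above is already running:

```shell
# Create an isolated network with no external connectivity
docker network create --internal isolated-net

# Attach the running Kali container, plus a throwaway test container
docker network connect isolated-net kali-linux
docker run -d --name=target --network isolated-net alpine:latest sleep 3600

# Containers on isolated-net resolve each other by name,
# but have no route to the internet
docker exec target ping -c 1 kali-linux
```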

5. Running the DOOM Game in a Container:

After a long day of work, it's time to relax and enjoy some classic gaming! With Docker, you can easily set up and run the iconic DOOM game in a container, providing a perfect escape from the daily routine. This setup not only keeps your gaming environment separate but also ensures that your main system remains clutter-free.

# Use the latest Ubuntu base image
FROM ubuntu:24.10

# Update package lists and install necessary packages
RUN apt-get update && \
    apt-get install -y curl build-essential git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Download and install Zig
RUN curl -L https://ziglang.org/download/0.13.0/zig-linux-x86_64-0.13.0.tar.xz | \
    tar -xJ -C /usr/local/bin --strip-components=1

# Set the working directory for the application
WORKDIR /app

# Clone the Terminal Doom repository
RUN git clone https://github.com/cryptocode/terminal-doom.git

# Set the working directory to the cloned repository
WORKDIR /app/terminal-doom

# Build the project using Zig with optimization
RUN zig build -Doptimize=ReleaseFast

# Set the default command to run the compiled application
CMD ["./zig-out/bin/terminal-doom"]

Building and running the image:

docker build -t image-name .

docker run -it image-name
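When you're done playing, the leftover container and image are easy to clean up (image-name here is whatever tag you chose in the build step):

```shell
# Remove stopped containers left over from interactive runs
docker container prune -f

# Remove the game image itself
docker image rm image-name
```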

Conclusion:

In this exploration of containerization, we showed how to set up and run various applications, including popular tools and games, using Docker. From Firefox and Obsidian to Kali Linux and the classic DOOM game, each container demonstrates Docker's flexibility and power in creating isolated environments for specific needs.

By embracing container technology, we enable ourselves to explore new applications and workflows while keeping a clean and organized development environment. Whether for productivity, security, or leisure, Docker gives us the tools to fully enjoy our computing experiences. Let's continue to push the boundaries of what we can achieve with containers!

Written by

Sanket Bhalke

Welcome to our blog channel dedicated to all things DevOps! We provide insights on topics such as continuous integration and delivery, automation, infrastructure as code, cloud computing, containerization, monitoring, and more. Our mission is to empower developers, operations professionals, and IT experts to adopt DevOps practices and tools that enable them to work efficiently and innovate quickly. Our in-depth tutorials, best practices, industry trends, and real-world examples of successful DevOps implementations will help you stay up-to-date with the latest in this rapidly evolving field. Let's work together to improve the world of software development, and join us in our mission to build better, faster, and safer systems!