Chapter - 4

PIYUSH SHARMA

Mastering Git and Docker: Essential Commands and Key Differences

In today's fast-paced software development landscape, mastering tools like Git and Docker is crucial. These technologies not only streamline the development process but also ensure consistent application deployment. In this blog, we'll explore essential Git commands for version control, key Docker commands for container management, and a comparison of Docker and AWS S3 to help you understand their distinct roles in application development.

Essential Git Commands

Here are some of the most important Git commands to help you manage your code effectively:

  • git init: Initializes a new Git repository in the current directory, creating a hidden .git folder to store all the version control information.

  • git status: Shows the current state of the working directory and staging area, indicating which files are tracked, modified, or untracked.

  • git add .: Adds all files in the current directory and its subdirectories to the staging area, preparing them for commit.

  • git commit -m "message": Creates a new commit with the staged changes and adds a descriptive message, recording the current state of the project in the repository's history.

  • git log: Displays a list of all commits in the current branch, showing commit hashes, authors, dates, and commit messages.

  • git checkout -b branch-name: Creates a new branch with the specified name and immediately switches to it. This is a shortcut for git branch followed by git checkout.

  • git merge branch-name: Integrates changes from the specified branch into the current branch, typically used to combine feature branches back into the main branch.

  • git branch -d branch-name: Deletes the specified branch after it has been merged, helping keep the repository clean by removing unnecessary branches.

These commands cover the basic Git workflow, including repository initialization, staging changes, committing, branching, merging, and branch management.
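The workflow above can be strung together into one short session. This is a minimal sketch: the repository path /tmp/git-demo, the branch name feature-x, and the file readme.txt are just illustrative, and the identity set with git config is a placeholder.

```shell
set -e
rm -rf /tmp/git-demo && mkdir -p /tmp/git-demo && cd /tmp/git-demo

git init                                  # creates the hidden .git folder
git config user.email "you@example.com"   # identity is required before committing
git config user.name "Your Name"
main=$(git symbolic-ref --short HEAD)     # default branch: 'main' or 'master'

echo "hello" > readme.txt
git status                                # readme.txt shows as untracked
git add .                                 # stage everything
git commit -m "initial commit"

git checkout -b feature-x                 # create and switch to a feature branch
echo "feature work" >> readme.txt
git commit -am "add feature work"         # -a stages the tracked file before committing

git checkout "$main"                      # back to the default branch
git merge feature-x                       # fast-forward merge of the feature
git branch -d feature-x                   # clean up the merged branch
git log --oneline                         # two commits in history
```

Running this end to end leaves a repository with two commits on the default branch and no leftover feature branch.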

Key Docker Commands

Here’s a collection of essential Docker commands for effective container management:

  • docker run nginx: Creates and starts a new container running the Nginx web server using the latest Nginx image.

  • docker run -p 8080:80 nginx: Runs an Nginx container, mapping port 8080 on the host to port 80 in the container, allowing access to the web server from the host machine.

  • docker ps: Lists all currently running Docker containers, showing their IDs, images, names, and status.

  • docker ps -a: Shows all containers, including those that are stopped or exited.

  • ufw allow 8080: Not a Docker command, but often needed on Linux hosts: configures the Uncomplicated Firewall (UFW) to allow incoming traffic on port 8080 so the mapped container port is reachable from outside.

  • docker stop <Container ID>: Stops a running container using its ID or the first few characters of the ID.

  • docker run -d nginx: Runs an Nginx container in detached mode, allowing it to run in the background.

  • docker rm <Container ID>: Removes a stopped container using its ID or name.

  • docker rm -f <Container ID>: Forcefully removes a running container without stopping it first. Works with either the container ID or name.

  • docker start <Container ID>: Starts a stopped container using its ID or name.

  • docker cp this.png <Container ID>:/this.png: Copies a file from the host to the specified path inside the container.

  • docker exec -it <Container ID> bash: Starts an interactive bash shell inside a running container.

  • docker cp <Container ID>:/mine.txt ./mine.txt: Copies a file from the container back to the host.

  • docker commit <Container ID> new-image-name: Creates a new image from a container's changes, tagged with the given name.

  • docker images / docker image ls: Lists all locally stored Docker images.

  • docker system prune: Removes all stopped containers, unused networks, dangling images, and build cache.

These commands cover various aspects of Docker container management, including running, stopping, removing containers, copying files, executing commands inside containers, and managing images.
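The full container lifecycle described above can be walked through in a single session. This sketch assumes Docker is installed and the daemon is running; the container name web and the copied file path are illustrative.

```shell
# Start a background Nginx container, mapping host port 8080 to container port 80.
docker run -d -p 8080:80 --name web nginx
docker ps                                        # 'web' appears in the running list

curl -s http://localhost:8080 | head -n 5        # the default Nginx welcome page

# Run a command inside the container, then copy a file out to the host.
docker exec web ls /usr/share/nginx/html
docker cp web:/usr/share/nginx/html/index.html ./index.html

docker stop web
docker ps -a                                     # 'web' now shows as Exited
docker rm web
docker system prune -f                           # remove stopped containers, dangling images, build cache
```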

Docker vs. AWS S3: Key Differences

Docker and AWS S3 serve distinct purposes in the realm of software development and cloud services. Understanding their differences is crucial for effectively utilizing each technology.

Overview of Docker

  • Containerization Platform: Docker is primarily a containerization platform that allows developers to package applications along with their dependencies into containers. This ensures that the application runs consistently across different environments.

  • Storage Mechanism: Within Docker, data can be stored using volumes or bind mounts. Volumes are managed by Docker and persist data even after containers are stopped, while bind mounts allow access to specific directories on the host machine.
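The two storage mechanisms look like this on the command line. This is a sketch: the volume name appdata, the container names, and the host path /srv/site are all placeholders.

```shell
# Named volume: Docker manages the storage, and the data
# survives even after the container is removed.
docker volume create appdata
docker run -d --name db1 -v appdata:/var/lib/data nginx

# Bind mount: expose a specific host directory inside the
# container (read-only here, via the :ro flag).
docker run -d --name web1 -v /srv/site:/usr/share/nginx/html:ro nginx

docker volume ls        # 'appdata' is listed, and persists after 'docker rm db1'
```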

Overview of AWS S3

  • Object Storage Service: Amazon S3 (Simple Storage Service) is a scalable object storage service designed for storing and retrieving any amount of data from anywhere on the web. It is commonly used for backup, archiving, and serving static content like images and videos.

  • Data Access: S3 provides a RESTful API for accessing stored objects, making it suitable for web applications that require scalable storage solutions.
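In practice, most users reach that API through the AWS CLI rather than raw HTTP. A minimal sketch, assuming the AWS CLI is installed and credentials are configured; the bucket name my-example-bucket and file names are placeholders (S3 bucket names must be globally unique).

```shell
aws s3 mb s3://my-example-bucket                                        # create a bucket
aws s3 cp ./backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz  # upload an object
aws s3 ls s3://my-example-bucket/backups/                               # list stored objects
aws s3 cp s3://my-example-bucket/backups/backup.tar.gz ./restore.tar.gz # download it back
```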

Key Differences

| Feature | Docker | AWS S3 |
| --- | --- | --- |
| Purpose | Containerization and application deployment | Object storage for data management |
| Data Type | Filesystem data within containers | Objects (files) stored in buckets |
| Persistence | Volumes persist data; bind mounts link to host files | Data persists independently of applications |
| Access Method | Local filesystem access within containers | HTTP-based API access |
| Scalability | Limited by host resources | Highly scalable, virtually unlimited |
| Use Cases | Application deployment, microservices | Backup, static website hosting, data lakes |

Conclusion

While both Docker and AWS S3 can handle data storage, they serve fundamentally different roles. Docker focuses on packaging and running applications in isolated environments, whereas AWS S3 is geared towards scalable object storage. Therefore, they are not interchangeable; rather, they can complement each other in modern cloud-native architectures. For instance, an application running in a Docker container might utilize AWS S3 for storing user uploads or backups while maintaining its operational data within Docker volumes.

Further Learning

For more on Docker, check out the Docker Crash Course.


Let’s build something amazing together!



Tags

#Git #Docker #DevOps #CloudComputing #AWS #Containerization #WebDevelopment #Programming #SoftwareEngineering #Tutorial


Written by

PIYUSH SHARMA

"Passionate DevOps enthusiast, automating workflows and optimizing infrastructure for a more efficient, scalable future."