DevOps for Beginners


Before getting to know about DevOps, we should know about SDLC, the Software Development Life Cycle.
Mainly, there are two types of SDLC: 1. Waterfall 2. Agile
In the Waterfall model, each phase must be completed before the next phase begins. Because of this, it sometimes becomes difficult to go back and make changes, as it takes too much time.
In the Agile model, we can work simultaneously on every phase, which helps us finish the work in less time.
Steps to develop a software product are:
Requirement gathering.
Planning.
Designing.
Development.
Testing.
Deployment.
Earlier in the IT industry, teams worked in silos: developers wrote the code, and a separate team tested it and handed it to operations. Testing teams sometimes received the code late, or the code had bugs, which caused confusion and delayed the product. That is where DevOps came into the picture: the development and operations teams work together, which reduces errors and shortens delivery time.
DevOps-
DevOps combines both development and operations to automate the SDLC. It mainly focuses on automation, continuous integration (CI), continuous deployment (CD), and security to improve deployment speed with fewer human errors and less time spent.
CI - where developers frequently integrate their code into a shared repository. The goal is to detect early issues and improve quality through automated testing and building processes.
CD - an extension of CI where every change that passes the testing pipeline is automatically deployed to production without manual intervention.
Core principles of DevOps are:
Automation - Automating repetitive tasks to ensure high efficiency and fewer errors.
CI/CD - Ensuring code changes are automatically tested and deployed to production.
Collaboration and communication - Ensuring communication between developers and the operations team.
Infrastructure as code - Managing infrastructure using code and software techniques.
Benefits of using DevOps:
Faster time to market, improved collaboration between teams, improved software quality, increased deployment frequency, high scalability and flexibility, and reduced costs.
CI/CD Pipeline stages: source code > build > test > deploy > monitor.
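The stages above can be sketched as a tiny shell script, where each stage is a function and any failure stops the run (the stage names and messages here are invented for illustration, not from a real pipeline):

```shell
#!/bin/sh
# Stop the "pipeline" as soon as any stage fails, like a real CI/CD run.
set -e

build()  { echo "Building app...";      }
run_tests() { echo "Running tests...";  }
deploy() { echo "Deploying to server..."; }

for stage in build run_tests deploy; do
  echo "--- Stage: $stage ---"
  "$stage"
done
echo "Pipeline finished"
```

A real pipeline tool (Jenkins, GitHub Actions, GitLab CI) does the same thing, but triggered automatically on every push.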
DevOps tools:
Git - Distributed version control system.
Jenkins - CI/CD pipeline to automate build, test, and deploy. (Java)
Docker - Build and package applications in lightweight and portable containers.
Ansible - Configuration management tool. It is agentless and eliminates repetitive tasks. (Python)
Kubernetes - Manages containerized applications.
Maven - Manages dependencies. (Java JDK)
Terraform - IaC to automate creation and management of cloud resources.
Git
It is a distributed version control system used to track changes in the source code and enables multiple developers to collaborate with each other simultaneously.
Why Git:
Track changes in the code.
Allows multiple developers to collaborate.
Provides rollback to previous versions.
Helps in branching and merging code.
Git workflow:
Modify files - make changes to your code.
Stage files - git add <file> moves the changes to the staging area.
Commit changes - git commit -m "msg" saves the changes locally.
Push to repo - git push origin <branch> uploads your commits to the remote repo.
Pull updates - git pull origin main gets the latest updates.
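The workflow above can be tried end-to-end in a throwaway local repository; this sketch assumes git is installed, and the file name demo.txt and the commit message are made up for the example:

```shell
#!/bin/sh
set -e

# Create a temporary directory and turn it into a local repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"   # identity for this demo repo only
git config user.name  "Demo User"

echo "hello" > demo.txt            # 1. modify files
git add demo.txt                   # 2. stage files
git commit -q -m "add demo.txt"    # 3. commit locally
git log --oneline                  # shows the new commit
```

Pushing and pulling (`git push` / `git pull`) would additionally need a remote such as GitHub, so they are left out of this local demo.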
Important components:
Repository - it is a storage location where all the files and their history are stored. There are 2 types of repositories:
local repo - exists on your computer.
remote repo - stored on a server (e.g., GitHub)
working directory - the actual folder where your files are present and modified.
staging area - a temporary storage where files are kept before committing.
commit - a snapshot of changes in the repository.
branches - it allows you to work on different versions of the code without affecting the main project (e.g., main/master).
merging - it combines the branches while keeping the history.
pull request - it lets a developer notify the team that they have made a bug fix or added a new feature, so the change can be reviewed and merged into the main branch.
git bash is a command line interface for running git commands.
Git Commands:
git init - initialize a repository. eg- git init (inside an existing folder) or git init <repo-name> (to create a new one).
git config - for setting up your user name and email.
eg- 1. git config --global user.email "joshi@email.com"
eg- 2. git config --global user.name "name"
git --version - to check which version of Git we are using.
git status - shows the current state of the repo and what changes you have made and what not. eg- git status.
git add - adds a file to the staging area. eg- git add <file>
git commit -m "msg" - saves the changes locally.
git reset - undoes local changes. With --hard it also deletes uncommitted work, so use it with care.
eg- git reset --hard HEAD
git diff - shows the differences which are not yet staged. eg- git diff filename.
git clone - it will copy a repo from an existing URL. eg- git clone <url>
git push - upload your commits to GitHub.
eg- git push origin main
git pull - download changes from GitHub to your local repo.
eg- git pull origin <branch-name>
git branch - list all branches.
git checkout -b <branchname> - create and switch to a new branch.
git merge - merge another branch into the current one.
eg- git merge <merge-branch-name>
git log - shows the commit history.
git fetch - get changes without merging. eg- git fetch origin <branch-name>
git remote -v - shows remote repo details.
git revert <commit-id> - creates a new commit that undoes an earlier commit while keeping the history.
git fetch vs git pull vs git merge:
git fetch - only downloads changes from the remote and does not touch your working code, so you can review them before applying.
git pull - downloads the changes and merges them, which can change your code immediately.
git merge - combines branches while keeping history; commonly used in team workflows.
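A minimal sketch of branching and merging in a throwaway repo (assumes git 2.28+ for `init -b`; the branch name feature-x and the file contents are invented for the example):

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "you@example.com"
git config user.name  "Demo User"

echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"

git checkout -q -b feature-x            # create and switch to a branch
echo "v2" > app.txt
git commit -q -am "update on feature-x" # commit on the branch

git checkout -q main
git merge -q feature-x                  # merge the branch, history kept
cat app.txt                             # main now has the branch's change
```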
Jenkins
Jenkins is an open-source automation tool written in the Java programming language that allows continuous integration. Jenkins builds and tests software projects, making it easier for developers to integrate changes into the project and for users to get fresh results. Organizations use Jenkins pipelines to speed up their software development through automation.
Workflow:
Developers push their code to GitHub > Jenkins triggers a build and pulls the latest code > Jenkins runs automated tasks > Jenkins deploys and sends notifications to the developers.
Advantages:
* It is an open-source tool, free of cost, and widely used.
* It is easy to install and needs only a Java runtime to get started.
* Supports over 1,000 plugins to ease your work.
* Jenkins also supports cloud-based architecture, so we can deploy Jenkins on a cloud-based platform.
Disadvantages:
* Its interface is outdated and not user-friendly.
* It can be confusing and complex for beginners.
* We have to keep checking and updating plugins manually.
* If the plugins break and aren't updated, it can cause problems.
Jenkins pipeline - a group of plugins that helps you create and manage the automated process to build, test, and deploy your software. The goal is to make the whole process automated, fast, and reliable so every change in your code goes through the same steps without manual work.
### Why Jenkins:
* Automates everything, which helps in saving time and causes fewer human errors.
* Huge plugin support to connect with tools.
* Free to use and it is open source.
Jenkinsfile syntax
* pipeline {} - the outer block of a declarative pipeline.
* agent any - where the pipeline runs (any node, or a Docker container).
* stages - the phases, such as build, test, and deploy.
* steps - the commands to run inside a stage.
* echo 'deploying app' - a simple step that prints a message.
* post - actions that run after the pipeline (e.g., on success or failure).
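Put together, a minimal declarative Jenkinsfile using these keywords might look like the sketch below (the stage names and echo messages are placeholders, not from a real project):

```groovy
pipeline {
    agent any                 // run on any available node
    stages {
        stage('Build')  { steps { echo 'Building app' } }
        stage('Test')   { steps { echo 'Running tests' } }
        stage('Deploy') { steps { echo 'Deploying app' } }
    }
    post {
        success { echo 'Pipeline finished successfully' }
        failure { echo 'Pipeline failed' }
    }
}
```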
### Extra points:
* Freestyle project - it is used in simple build, test, and deploy tasks. Beginner-friendly.
* Pipeline project - writing a script to do multiple jobs step by step. Suitable for large projects.
* Build trigger - it is a way to start a Jenkins job automatically.
* SCM - Source Code Management - connects Jenkins to version control tools.
* Artifact - it is the output of the build.
## Docker
* Docker is a containerization tool that helps developers package an application along with its dependencies and libraries into lightweight, portable containers (they share the host OS kernel) that can run on any system.
### Why Docker:
* Packages everything together.
* Fast and lightweight, using less memory than VMs.
* Works on all platforms and is easier to test and deploy.
workflow:
* Write a Dockerfile.
* Build a Docker image.
* Run it in a container.
* Push to Docker Hub.
* Pull and deploy anywhere.
* Dockerfile - it is a text document that contains the commands to build a custom Docker image.
* Docker image - a read-only template containing the application and its dependencies; containers are created from it.
* Docker container - it contains entire packages to run an application.
* Docker Hub - it is a public repository containing thousands of images and is open source.
* Docker volume - it is used to store data even after the container stops and restarts.
* Docker network - allows containers to communicate with each other.
* Docker Compose - it is used to run multiple containers together using a YAML (YAML Ain't Markup Language) file.
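As an example of Docker Compose, a hypothetical two-service stack could be described like this (the service names, images, port mapping, and password are assumptions for illustration):

```yaml
# docker-compose.yml - a web server plus a database, started together
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"      # host port 8080 -> container port 80
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # demo value only
    volumes:
      - dbdata:/var/lib/mysql        # data survives container restarts
volumes:
  dbdata:
```

Running `docker-compose up` would then start both containers on a shared network.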
Kernel - the core part of the OS that manages the hardware and lets software communicate with it (the brain of the OS).
Docker commands:
docker build -t <image> . - builds an image from a Dockerfile.
docker run <image> - runs a container from an image.
docker push username/myapp:v1 - pushes an image to Docker Hub.
docker logs <container-id> - shows container logs.
docker ps - lists running containers.
docker stop <container> - stops a container.
docker rm <container> - removes a container.
docker rmi <image> - removes an image.
docker exec - runs a command inside a running container.
docker-compose up - runs multi-container applications using a docker-compose.yml file.
docker volume create myvol - creates a volume.
docker run -v myvol:/app nginx - mounts the volume inside a container.
docker network create mynet - creates a custom network.
docker run --network=mynet nginx - attaches a container to the network.
docker system prune - cleans unused containers, images, and networks.
Dockerfile syntax:
FROM - sets the base image.
WORKDIR /app - sets the working directory inside the container.
COPY . /app - copies files from the host into the container.
RUN - runs a command during the image build.
EXPOSE 80 - documents the port the container listens on.
CMD - default command, can be overridden; ENTRYPOINT - harder to override.
VOLUME - declares a mount point to persist data.
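A minimal Dockerfile tying these instructions together might look like the sketch below (the nginx base image and port 80 are assumptions for the example):

```dockerfile
# Base image
FROM nginx:alpine

# Working directory inside the container
WORKDIR /app

# Copy files from the host into the container
COPY . /app

# Runs during the image build, not when the container starts
RUN echo "image built" > /app/build-info.txt

# Document the port the container listens on
EXPOSE 80

# Default command; can be overridden at `docker run` time
CMD ["nginx", "-g", "daemon off;"]
```

Building it with `docker build -t myapp .` and running `docker run -p 8080:80 myapp` would serve the app on the host's port 8080.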
Ansible
Ansible is an open-source automation tool used for configuration management, service orchestration, and application deployment. It is agentless, meaning no software needs to be installed on the managed machines, and it is idempotent, so tasks are not repeated unnecessarily. Ansible uses playbooks to describe automation tasks, and playbooks are written in simple YAML (YAML Ain't Markup Language).
workflow:
install Ansible.
define inventory file (target machines).
create playbook (tasks to perform).
use modules for ad-hoc tasks.
automate with roles.
Components:
Modules - these are the predefined commands in Ansible that perform specific tasks like installing packages and managing users.
Role - these are structured playbooks that help to organize tasks and related files efficiently.
Inventory - a list of target services with their IP addresses.
Playbook - it is a YAML file that contains a set of tasks to be executed on a remote machine in an automated way.
Handlers - these are special tasks that run only when notified by another task or notifier (e.g., restarting a service).
Ad-hoc commands - these are simple one-line commands used to perform specific tasks (an alternative to writing a playbook).
Ansible Galaxy - it is used for finding, sharing, and downloading roles. You can use ansible-galaxy install <role-name> to download a role for use in your playbooks.
Commands:
ansible --version - check the version of Ansible.
ansible all -m ping - ping all remote machines from your inventory to check the connectivity.
ansible-playbook playbook.yml - run a playbook.
ansible-playbook playbook.yml -i inventory - run a playbook with custom inventory.
ansible-playbook playbook.yml --start-at-task="task name" - start a playbook with specific tasks.
ansible-galaxy init myrole - create a new role directory structure.
ansible-galaxy list - list installed roles.
“I used Ansible to automate the setup of a LAMP (Linux, Apache, MySQL, PHP) stack on AWS EC2 instances by creating a playbook that installs Apache, configures firewall rules, deploys a PHP site, and ensures the service restarts if the config changes.”
playbook syntax:
Install the Apache (httpd) server on a remote machine and start the service:

    - name: Install and start Apache on EC2
      hosts: servers
      become: yes
      tasks:
        - name: Install Apache
          yum:
            name: httpd
            state: present
        - name: Start Apache service
          service:
            name: httpd
            state: started
            enabled: yes
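The playbook targets a group called servers, so a matching inventory file is needed; a hypothetical one might look like this (the host names, IPs from the documentation range, and the ec2-user login are made up for the example):

```ini
# inventory - target machines for the "servers" group
[servers]
web1 ansible_host=192.0.2.10 ansible_user=ec2-user
web2 ansible_host=192.0.2.11 ansible_user=ec2-user
```

Running `ansible-playbook playbook.yml -i inventory` would then apply the tasks to both hosts.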
Kubernetes
K8s is an open-source container orchestration platform used to deploy, scale, and manage containerized applications like Docker containers. It helps to manage multiple containers efficiently in production environments.
Why K8s:
Automatically scales up and down based on demand.
Fault tolerance and high availability.
Self-healing - monitors pods and replaces unhealthy ones.
Components:
Pods: The basic unit in Kubernetes is a pod, which can contain one or more containers.
Nodes: These are the machines (physical or virtual) where the pods run.
Cluster: A Kubernetes cluster is a group of nodes working together to run your applications.
Master and Worker nodes:
Kubernetes operates in a master-worker architecture, where the master node manages the entire cluster and the worker nodes run the applications (containers).
Master Node
The master node is the brain of the Kubernetes cluster. It controls and manages the worker nodes and ensures that the system is running as expected.
Scheduling: Decides which worker node will run each pod (containerized application).
API Server: Exposes the Kubernetes API that users and other systems interact with to deploy and manage applications.
Controller Manager: Ensures that the cluster is in the desired state, for example, if a pod goes down, it replaces it.
etcd: A key-value store that holds all the configuration data.
Worker Node
The worker nodes are where your application containers actually run.
Kubelet: Ensures that the containers in the pod are running and healthy.
Kube Proxy: Manages network communication between containers across different worker nodes, enabling them to discover each other.
Container Runtime: Software like Docker or containerd that runs the containers on the node.
In short, the master node is the brain that controls everything, while the worker nodes are the muscles that run the applications.
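As an illustration of this control loop, a simple Deployment manifest asks the control plane to keep three replicas of a pod running; if one dies, the controller manager replaces it (the names and nginx image below are placeholders):

```yaml
# deployment.yaml - desired state: 3 healthy web pods at all times
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # controller keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web
  template:                    # pod template the replicas are created from
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` hands the desired state to the master node, which schedules the pods onto worker nodes.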
Commands:
kubectl get pods - lists pods.
kubectl get nodes - lists nodes in the cluster.
kubectl get services - lists services.
kubectl get deployments - lists deployments.
kubectl get namespaces - lists namespaces.
kubectl delete - deletes resources.
kubectl describe - displays detailed information about a specific resource.
kubectl logs - displays logs for a specific pod or container.
kubectl exec - executes a command inside a container in a pod.
kubectl cluster-info - displays information about the cluster.
kubectl version - displays the client and server versions.
kubectl get events - lists events for resources.
summary:
Kubernetes is the top choice for managing containerized apps in production. It provides a lot of automation, reliability, and scalability. Whether you're dealing with microservices, traditional apps, or large systems, Kubernetes helps make deploying and managing applications easier and more efficient.
Thank you!
Thank you so much for reading this blog. I really appreciate your interest in learning about DevOps tools with me. DevOps is always changing, and I hope this post gave you some useful insights into the tools that power today's software development and operations.
If you have any thoughts, suggestions, or questions, please share them in the comments. Your feedback is always welcome and helps me make my future posts better.
Thanks for your support and for joining me on this journey!