Getting Started with Kubernetes Using Minikube


Kubernetes is a powerful platform for managing containerized applications. In this guide, we'll walk you through the process of setting up a local Kubernetes environment using Minikube, installing kubectl, running and managing pods, and more.
Installing Minikube
Minikube requires drivers like Docker, VirtualBox, KVM, etc., as prerequisites for installation. In this blog, we will be using Docker as the driver for setting up Minikube.
To get started, follow the official Minikube documentation to install Minikube on your system. Once installed, you can start Minikube, which creates a local Kubernetes cluster for you to work with.
Installing Minikube in Amazon Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
sudo rpm -Uvh minikube-latest.x86_64.rpm
Here is a short description of the commands:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm: Downloads the latest Minikube RPM package. -L follows redirects, -O saves the file with its original name.
sudo rpm -Uvh minikube-latest.x86_64.rpm: Installs or upgrades the Minikube package. -U for upgrade/install, -v for verbose output, -h shows progress. sudo runs the command with superuser privileges.
You can directly copy the installation commands from the official website for your target platform.
Start Minikube
To start Minikube, you typically use the command
minikube start
This command initializes a local Kubernetes cluster using the specified driver, in this case Docker, and sets up the necessary environment for running Kubernetes applications.
Minikube starts a Kubernetes control-plane node, which is essential for managing the cluster. Finally, it sets the active Kubernetes context, enabling the kubectl
tool to connect and interact with the cluster easily.
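Once the cluster is up, you can confirm that the kubectl context was set correctly. A quick sanity check, assuming the default profile name:

```shell
# Show the context kubectl is currently pointing at;
# after a successful start this should print "minikube".
kubectl config current-context

# List cluster nodes; a single control-plane node named
# "minikube" should report a Ready status.
kubectl get nodes
```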
Best practice: Run this command without sudo, especially when using the Docker driver, which refuses to run as root and throws this error:
❌ "The 'docker' driver should not be used with root privileges."
Use this command:
minikube start --force
to bypass Minikube's built-in safety checks, allowing it to run as root or under otherwise discouraged conditions.
The flag makes Minikube ignore warnings or errors, such as those about running as root, so the cluster can start even when conditions aren't ideal. It's useful when you have to run as root, for example on an EC2 instance without a non-root user, or when you understand the risks and want to skip the safety checks.
Check Minikube Start Status
To check the Minikube start status, you can verify whether the Minikube Docker container is running.
docker ps
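Minikube also ships its own status subcommand, shown here as a quick alternative to inspecting Docker directly:

```shell
# Summarizes the state of the host, kubelet, apiserver,
# and kubeconfig for the cluster
minikube status
```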
Setting Up kubectl
kubectl
is the command-line tool used to interact with Kubernetes clusters. You can find detailed instructions for installing kubectl
on Linux in the Kubernetes official documentation or refer to the AWS documentation for additional guidance.
After installation, make kubectl accessible from anywhere on your command line by copying it to a directory on your PATH, as shown below.
Download kubectl
using curl
These steps follow the AWS documentation.
Download the kubectl binary for your cluster's Kubernetes version from Amazon S3 (here, Kubernetes 1.33).
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.33.0/2025-05-01/bin/linux/arm64/kubectl
This downloads the kubectl binary into your current directory. Note that the URL above is the arm64 build; pick the build matching your instance's architecture (amd64 or arm64) from the AWS documentation.
Use this command to make it executable:
chmod +x kubectl
Check if kubectl is Installed
./kubectl version --client
Copy kubectl to /usr/local/bin so you can run it without the ./ prefix:
sudo cp kubectl /usr/local/bin/
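To confirm the copy worked and the binary now resolves from your PATH, no ./ prefix needed:

```shell
# Should print /usr/local/bin/kubectl
which kubectl

# Runs from any directory now
kubectl version --client
```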
To be Noted
/usr/bin vs /usr/local/bin: A Quick Primer
Origin: /usr/bin holds system-managed packages; /usr/local/bin is for user-installed tools.
Update Safety: System updates may overwrite /usr/bin; /usr/local/bin stays untouched.
PATH Priority: /usr/local/bin often comes first in PATH, so its versions override the /usr/bin ones.
If you're building or installing custom DevOps tools, /usr/local/bin is your safe zone.
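The PATH-priority point is easy to demonstrate with plain shell. This is a toy sketch using a throwaway temp directory and a made-up tool name, not a real install:

```shell
#!/bin/sh
# Create two fake installs of the same tool in different directories.
tmp=$(mktemp -d)
mkdir -p "$tmp/local/bin" "$tmp/bin"
printf '#!/bin/sh\necho local\n'  > "$tmp/local/bin/mytool"
printf '#!/bin/sh\necho system\n' > "$tmp/bin/mytool"
chmod +x "$tmp/local/bin/mytool" "$tmp/bin/mytool"

# local/bin is listed first, so its copy shadows the "system" one,
# just as /usr/local/bin typically shadows /usr/bin.
PATH="$tmp/local/bin:$tmp/bin:$PATH"
mytool   # prints "local"
```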
Running a Pod
With Minikube and kubectl set up, you can now run your first pod.
Running a pod in Kubernetes involves deploying a containerized application within the cluster. A pod is the smallest deployable unit in Kubernetes and can contain one or more containers.
For example, running a Nginx pod:
kubectl run nginx --image=nginx
#syntax
kubectl run <pod-name> --image=<container-image>
Check pods status
kubectl get pods
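Beyond listing pods, two standard kubectl subcommands help when a pod misbehaves, shown here against the nginx pod created above:

```shell
# Detailed state, container info, and recent events for the pod
kubectl describe pod nginx

# Stream the container's stdout/stderr (nginx access/error logs)
kubectl logs nginx
```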
Container inside container setup
A pod here is effectively a container running inside the Minikube container, which itself runs as a container on your base system. This setup lets you test and develop applications in a controlled environment, ensuring they work correctly before you deploy them to production.
By opening a shell inside the Minikube container and listing its containers, you can see the nginx image running as a Docker container inside Minikube:
docker exec -it minikube bash
docker ps
Similarly, you can access the inside of the nginx container itself.
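You can also reach the nginx container directly with kubectl, without going through the Minikube Docker container first (assuming the pod is named nginx as above):

```shell
# Open an interactive shell inside the nginx pod
kubectl exec -it nginx -- /bin/bash

# Or run a one-off command non-interactively
kubectl exec nginx -- nginx -v
```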
Managing Pods and Deployments
Kubernetes makes it easy to manage your applications. You can delete pods when they are no longer needed and create deployments to ensure your applications remain running.
Deleting a Pod
kubectl delete pod nginx
#syntax
kubectl delete pod <pod-name>
Creating Deployments
A deployment will automatically restart a pod if it is deleted or stopped, ensuring your application stays in the desired state.
kubectl create deployment mydep1 --image=nginx
#syntax
kubectl create deployment <deployment-name> --image=<image-name>
Check deployment status:
kubectl get deployments
For example, as soon as a pod belonging to the deployment is removed, the deployment automatically launches a new pod with the nginx image.
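You can watch this self-healing behaviour yourself. The pod name below is a placeholder; deployments generate pod names from the deployment name plus a random suffix:

```shell
# Note the pod created by the deployment
kubectl get pods

# Delete it; the ReplicaSet behind the deployment notices
# and creates a replacement almost immediately
kubectl delete pod <mydep1-pod-name>

# A new pod with a fresh suffix appears
kubectl get pods
```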
Scaling the Deployments
Kubernetes allows you to scale deployments to handle increased load and maintain performance. When demand rises, you can adjust the number of pod replicas to manage extra traffic.
Scaling can be manual or automatic, based on your needs. By increasing replicas, Kubernetes balances the workload across pods, ensuring responsiveness and reliability. This feature is essential for applications with varying traffic, helping maintain a smooth user experience during peak times.
Scale your deployment:
kubectl scale deployment mydep1 --replicas=5
#syntax
kubectl scale deployment <deployment-name> --replicas=<number-of-replicas>
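After scaling, the extra replicas should be visible from both views (assuming the mydep1 deployment from earlier):

```shell
# READY should move toward 5/5 as new pods start
kubectl get deployments

# Five nginx pods, one per replica
kubectl get pods
```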
Exposing Deployments
To make your applications accessible outside Kubernetes, you need to expose specific ports. This allows external traffic to reach your app, enabling communication with users or services beyond the internal network. Exposing ports is essential for applications that require external access.
To achieve this, we will use a service.
In Kubernetes (K8s), a service is an abstraction that defines a logical group of pods and a policy for accessing them. Services allow different parts of an application, like a frontend and a backend, to communicate by providing a stable endpoint for accessing the pods, even as they are created and destroyed dynamically. This ensures that the application components can interact reliably.
In Kubernetes, you can use the kubectl
command-line tool to create a service directly. Here are the steps:
Create a Service: Use the
kubectl expose
command to create a service. You need to specify the resource (like a pod or deployment) you want to expose, the type of service, and the port details. For example (nginx listens on port 80, so the target port here is 80):
kubectl expose deployment mydep1 --type=NodePort --port=80 --target-port=80
#syntax
kubectl expose deployment <deployment-name> --type=<service-type> --port=<port> --target-port=<target-port>
Verify the Service: After creating the service, you can verify it by running:
kubectl get service
This command will list all the services in your cluster, allowing you to check if your service is running as expected.
Access the Service: Depending on the type of service you created (ClusterIP, NodePort, or LoadBalancer), access it accordingly:
ClusterIP: Accessible only within the cluster.
NodePort: Access the service using the node's IP and the assigned port.
LoadBalancer: Access the service through the external load balancer's IP.
Now we can see that it is exposed outside the Kubernetes environment.
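With the Docker driver, the node IP lives inside the Minikube container, so the simplest way to reach a NodePort service from your host is to let Minikube build the URL for you (assuming the mydep1 service created above):

```shell
# Print a reachable URL for the service; with the Docker driver
# this may also keep a tunnel open while it runs
minikube service mydep1 --url

# Then test the endpoint using the URL it prints
curl http://<printed-url>
```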
Conclusion
In conclusion, setting up a local Kubernetes environment with Minikube is a great way to get hands-on experience. This guide covered installing Minikube and kubectl, running and managing pods, creating and scaling deployments, and exposing applications. These are the day-to-day skills for working with Kubernetes, and a solid base for exploring its more advanced scaling and deployment features as you go.
Written by

Jasai Hansda
Software Engineer (2 years) | In-transition to DevOps. Passionate about building and deploying software efficiently. Eager to leverage my development background in the DevOps and cloud computing world. Open to new opportunities!