Kubernetes Namespaces: Simplifying Multi-Tenancy and Resource Isolation


In the dynamic world of Kubernetes, managing resources efficiently and securely is crucial, especially as applications scale and teams grow. This is where Kubernetes Namespaces come into play, offering a powerful way to simplify multi-tenancy and resource isolation within a cluster. Think of Namespaces as virtual walls that separate different environments or projects, allowing multiple teams to share the same cluster without stepping on each other's toes. By organizing resources into distinct namespaces, you can enforce policies, manage access, and allocate resources more effectively, all while maintaining a clean and organized cluster. Whether you're running development, testing, and production environments side by side, or supporting multiple teams with diverse projects, Kubernetes Namespaces provide the structure needed to keep everything running smoothly and securely.
Let's dive into the practical side of Kubernetes Namespaces. Running the simple command kubectl get namespace shows that my cluster currently includes the standard namespaces: default, kube-node-lease, kube-public, and kube-system.
Okay, let's break down those namespaces shown in the kubectl get namespace output. They represent the fundamental building blocks for organizing resources within your Kubernetes cluster. Think of them as separate virtual containers, each with its own set of rules and access controls.
default: This is the catch-all namespace. When you don't specify a namespace when creating resources (like deployments or services), they automatically end up here. It's generally best practice to avoid using this namespace for anything beyond initial experimentation or very temporary resources. Over time, the default namespace can become cluttered and difficult to manage, making it harder to track and control your resources.
kube-node-lease: This namespace is automatically created and managed by Kubernetes itself. It holds Lease objects, one per node, which each node's kubelet renews as a heartbeat so the control plane can detect when a node becomes unhealthy. You shouldn't directly interact with this namespace; it's purely for Kubernetes' internal bookkeeping.
kube-public: Similar to kube-node-lease, this is another system-managed namespace. It's designed to hold resources that are accessible to all users within the cluster. While you can technically create resources here, it's generally discouraged. The purpose is primarily for system-level services that need to be globally visible, and using it for your application resources would mix system components with your own, potentially leading to confusion and security risks.
kube-system: This is the most important system namespace. It contains core Kubernetes system components such as kube-proxy, the cluster DNS (CoreDNS), and, on kubeadm-based clusters, control-plane pods like the API server, scheduler, controller manager, and etcd. (The kubelet itself runs directly on each node rather than as a pod here.) These components are essential for the cluster's operation. Just like the other system namespaces, you should avoid directly interacting with kube-system; any changes here could destabilize your entire cluster.
So, what exactly can you put inside a Kubernetes namespace? The command kubectl api-resources --namespaced=true, shown in the image below, gives us the answer: a wide array of resources, each with its own purpose and functionality.
Let's examine some key examples from the output. Remember, the NAMESPACED column indicates whether a resource is confined to a specific namespace (true) or cluster-wide (false), and the APIVERSION column shows the version of the Kubernetes API used to manage that resource type, which matters for compatibility and understanding potential feature differences.
Pods: These are the fundamental building blocks of Kubernetes applications. A Pod represents a running process, containing one or more containers. Think of it as the smallest deployable unit. Since NAMESPACED is true, each Pod lives within a specific namespace.
Deployments: Deployments manage the desired state of a set of Pods. They handle scaling, rolling updates (updating your application without downtime), and rollbacks (reversing updates if something goes wrong). They're crucial for managing application versions and ensuring high availability. Again, NAMESPACED being true means they're namespace-bound.
Services: Services provide a stable network endpoint for your Pods. Even if Pods are constantly being created and destroyed (e.g., during scaling or updates), the Service's IP address remains consistent, allowing other parts of your application or external clients to access your application reliably. Services are also namespaced.
Secrets: These are used to store sensitive information, such as passwords, API keys, and certificates, securely within your Kubernetes cluster. They're essential for keeping your application's credentials safe. Secrets, like other resources, are namespaced, ensuring that access is controlled at the namespace level.
Pods, Deployments, Services, and Secrets represent a core set of resources you'll use frequently when building and deploying applications on Kubernetes. Understanding how they interact and how they're managed within namespaces is key to building robust and scalable applications.
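To make the namespace binding concrete, here is a minimal Pod manifest (the names are hypothetical, for illustration only); the metadata.namespace field is what places any namespaced resource into a particular namespace:

```yaml
# Minimal example Pod (hypothetical names): metadata.namespace places
# this resource into the "my-namespace" namespace. If the field is
# omitted, the resource lands in the default namespace.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: my-namespace
spec:
  containers:
    - name: nginx
      image: nginx:latest
```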
In contrast to the namespaced resources we just examined, the kubectl api-resources --namespaced=false command (see image) reveals the cluster-wide resources. These are objects that affect the entire Kubernetes cluster, rather than being confined to individual namespaces. Notice the NAMESPACED column now shows 'false' for all entries. This list includes critical components like Nodes, PersistentVolumes, and cluster-level roles and policies, highlighting the infrastructure that underpins your entire Kubernetes deployment.
Now let's see how easy it is to create a namespace using a declarative approach. The ns1.yaml file shown below demonstrates the simplicity of defining a namespace using YAML. This approach is preferred for its version control and reproducibility benefits.
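The ns1.yaml file appears as an image in the original post; a Namespace manifest of this kind is only a few lines, along these lines:

```yaml
# ns1.yaml: declarative definition of the ns1 namespace
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
```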
Let's see this in action. Running kubectl apply -f ns1.yaml will create the ns1 namespace based on the definition in our YAML file. We can then verify that it was successfully created with another kubectl get namespace.
For a quick and straightforward namespace creation, the imperative approach offers a simple alternative. As you can see in the image below, the command kubectl create namespace ns2
creates the namespace immediately. This is useful for quick tasks, but remember that declarative methods are generally better for version control and reproducibility.
It’s time to deploy something! Using the nginx-deployment.yaml configuration below, we'll deploy a single Nginx instance into each of our namespaces (ns1 and ns2). This will showcase how easily we can manage applications within their respective isolated environments. This single YAML file can deploy our Nginx instance to either ns1 or ns2. Simply change the namespace field to target the desired namespace, and then use kubectl apply -f nginx-deployment.yaml
to create the deployment.
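The deployment manifest itself is shown as an image in the original post; a minimal sketch matching the description (a single Nginx replica with the app: nginx label, and a namespace field switched between ns1 and ns2) would look roughly like this:

```yaml
# nginx-deployment.yaml (sketch): one Nginx replica. Change
# metadata.namespace to ns1 or ns2 to target the desired namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1   # switch to ns2 for the second deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```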
Now, let's verify that our Nginx deployments are up and running in both namespaces. We can confirm this by checking how many Nginx pods are active in each.
We can see from the image above that the kubectl get pods
command, run separately for ns1 and ns2, shows one running Nginx pod in each. This clearly demonstrates the successful isolation of our deployments. Each namespace operates independently, with its own set of resources and applications.
Now that we've confirmed our Nginx deployments, let's get the IP address of each pod. The images above show the output of kubectl get pods -o wide --namespace ns1 and kubectl get pods -o wide --namespace ns2. The -o wide flag provides additional information, including the IP address assigned to each pod. We see that the Nginx pod in ns1 has the IP address 10.244.3.5, while the pod in ns2 has the IP address 10.244.1.5.
Let's test inter-namespace communication. We'll execute a curl command from within the ns1 pod to access the Nginx instance in ns2. As shown in the image below, running kubectl exec -it -n ns1 nginx-5cb6d59647-sd9wp -- sh and then curl http://10.244.1.5 from inside the pod succeeded, returning the Nginx welcome page. This may be surprising, but it is actually Kubernetes' default behavior: namespaces isolate names, quotas, and access control, not network traffic, so pods can reach each other across namespaces unless a NetworkPolicy restricts it. It also highlights why Kubernetes Services are the right tool for inter-namespace communication: direct pod-to-pod access via pod IPs is fragile, because pod IPs change whenever pods are rescheduled or replaced. While this experiment showed connectivity, relying on pod IPs directly is not a best practice; Services provide a stable and managed way to access applications across namespaces.
Direct pod-to-pod communication, as we saw earlier, isn't always reliable and is prone to issues. Kubernetes Services provide a more robust solution. We'll create two Services, svc-ns1 and svc-ns2, to expose our Nginx deployments. The YAML files below define these Services, specifying a selector that matches our Nginx pods (remember the app: nginx label we used in the deployments) and using ClusterIP as the service type. This creates a stable internal IP address for each deployment, regardless of pod changes.
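The Service definitions appear as images in the original post; based on the description, svc-ns1.yaml would look roughly like this (svc-ns2.yaml is identical apart from the name and namespace):

```yaml
# svc-ns1.yaml (sketch): ClusterIP Service selecting the Nginx pods in ns1
apiVersion: v1
kind: Service
metadata:
  name: svc-ns1
  namespace: ns1
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```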
With our Service YAML files ready, let's create the Services. The image below shows the commands used: kubectl apply -f svc-ns1.yaml and kubectl apply -f svc-ns2.yaml. These commands deploy the Service definitions, creating the svc-ns1 and svc-ns2 Services in their respective namespaces (ns1 and ns2). The output confirms that the Services were successfully created.
Now let's verify that our services are up and running and get their internal IP addresses. The image shows the output of kubectl get svc -n ns1 and kubectl get svc -n ns2. We can see that svc-ns1 has been assigned the ClusterIP 10.96.17.21, and svc-ns2 has the ClusterIP 10.96.204.32. These ClusterIPs are internal to the Kubernetes cluster and provide stable access points to our Nginx deployments.
Finally, let's retest our inter-namespace communication, this time using the Services' FQDNs. The images show the results of executing curl commands from within each namespace's Nginx pod, targeting the other namespace's Service. The FQDNs are constructed as <service-name>.<namespace>.svc.cluster.local. For example, to access svc-ns2 from ns1, we use svc-ns2.ns2.svc.cluster.local. As you can see, both curl commands were successful, returning the Nginx welcome page. This demonstrates that using Kubernetes Services provides a reliable and consistent way to access applications across namespaces, even as pods are rescheduled or replaced.
In this blog post, we explored Kubernetes Namespaces, demonstrating their effectiveness in simplifying multi-tenancy and resource isolation. We started by examining the default system namespaces and then showed how to create namespaces declaratively using YAML files and imperatively using kubectl commands. We then deployed a simple Nginx application to two newly created namespaces, verifying their independent operation. While direct pod-to-pod communication initially seemed possible, we highlighted the unreliability of this approach and demonstrated the superior stability and reliability of using Kubernetes Services for inter-namespace communication. By creating and using Services, we successfully accessed our Nginx deployments across namespaces, proving the value of namespaces for managing and isolating applications within a shared Kubernetes cluster. This approach ensures better resource management, enhanced security, and simplified multi-team collaboration.
Written by

Obinna Iheanacho
DevOps Engineer with a proven track record of streamlining software development and delivery processes. Skilled in automation, configuration management, and continuous integration and delivery (CI/CD), with expertise in cloud infrastructure and containerization technologies. Possess strong communication and collaboration skills, able to work effectively across development, operations, and business teams to achieve common goals. Dedicated to staying current with the latest technologies and tools in the DevOps field to drive continuous improvement and innovation.