Getting started with Istio Service Mesh


To carry on with my current learning and discovery with Kubernetes, I’ve written about creating a cluster, the same guide I used to get my homelab cluster running while practising for the CKS and CKAD certifications. I’ve also written and talked about GitOps with Argo CD, and I even wrote about my trials and tribulations with Flux CD and Argo CD (shameless plugs to previous articles over…).
So let’s check out networking! Everyone loves networking… when it’s working.
So why would networking with Kubernetes be a “thing” then? Surely the cluster sits on a network and just works?! Well, kind of…
With Kubernetes, we have potentially broken down a monolithic application where everything lived on a single box or VM and simply contacted localhost:some_port, or perhaps another VM on the same network behind a few firewall rules. We’ve now split this into separate applications, which are pods that are part of deployments, potentially with multiple versions for testing, maybe even running on different clusters.
What I’m trying to say is that we have introduced some added complexity in exchange for some benefits, which I won’t go into; if you’re reading this, I’m sure you know what they are. But we do have to think about how all these distributed pods and applications interact, or don’t interact, with each other.
So what exactly is Istio?
Istio uses a proxy to intercept all your network traffic, allowing a broad set of application-aware features based on the configuration you set.
The control plane takes your desired configuration, and its view of the services, and dynamically programs the proxy servers, updating them as the rules or the environment changes.
The data plane is the communication between services. Without a service mesh, the network doesn’t understand the traffic being sent over, and can’t make any decisions based on what type of traffic it is, or who it is from or to. The data plane comprises Envoy sidecar proxies injected into application pods, handling actual traffic routing, security, and observability.
Seriously though, why?
As our applications scale and potentially grow into additional deployments, stateful sets or standalone pods (please don’t just run bare pods…), they all need networking and interaction with other deployments, pods and so on.
If you’re starting with a hello-world app that has a frontend, a backend and maybe a database, then of course a service mesh will be overkill. But think about what happens if we introduce multiple versions of those deployments, or add more services: some reviews for our website? An ad service? Does a shopping cart get added too? It starts to add up.
With a service mesh, we can abstract the networking configuration layer and decouple the networking from the application. This keeps the application code more reusable, and we can manage the networking without having to redeploy parts of the application.
Networking is only part of what a service mesh provides; security is another. Just because all the pods live on the same cluster, should they all be able to contact and have access to every other pod on the cluster? No! Would you give every VM on your network access to every other VM? I hope not, and the same principle should apply to your Kubernetes deployments.
Additional benefits to using a service mesh and decoupling network configuration
Traffic management
Istio allows users to control traffic flows and API calls between services by configuring rules and routing traffic. We can be quite granular in how and what our pods and applications can interact with on the network.
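As an example of how granular this can get, a VirtualService can split traffic between two versions of a service by weight. This is only a sketch: it assumes a reviews service with v1 and v2 subsets already defined in a matching DestinationRule.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    # Send 90% of traffic to v1 and 10% to v2 (a simple canary)
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

The subsets (v1, v2) map to pod labels via a DestinationRule, so the split happens entirely in the mesh with no change to the application.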
Security
Istio provides a backbone for service-to-service communication and manages security controls, adding a further layer of defence and supporting the principle of least privilege.
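As a sketch of what that looks like in practice, a PeerAuthentication policy can require mutual TLS for every workload in a namespace (here I’m assuming the default namespace; adjust to taste):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  # STRICT means workloads only accept mTLS traffic from other
  # sidecar-enabled workloads; plaintext requests are rejected
  mtls:
    mode: STRICT
```

Combined with AuthorizationPolicy resources, this is how you stop “every pod can talk to every pod” being the default.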
Observability
Istio can extract telemetry data from the proxy containers and send it to a monitoring dashboard. As we can see with a Grafana dashboard, we get a huge amount of observability into how our network is performing. More data gives us more insight, allowing us to make better technical decisions and spot trends, and to inform business decisions and trends too.
You can also do some other neat things, such as fault injection (adding delays or aborts to test the resiliency of your configuration) and traffic shaping.
These are more advanced features that I’ll be looking at in the near future, but they really add to the case for bringing the Istio service mesh into your infrastructure.
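As a taste of fault injection, a VirtualService can deliberately delay traffic to a service. This sketch follows the style of the Bookinfo sample’s ratings service, adding a fixed 7-second delay to all requests so you can see how upstream services cope:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        # Delay 100% of requests by a fixed 7 seconds
        percentage:
          value: 100.0
        fixedDelay: 7s
    route:
    - destination:
        host: ratings
        subset: v1
```

If your frontend has a shorter timeout than 7 seconds, this immediately surfaces how (or whether) it degrades gracefully.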
Hopefully, you can start to see why you would introduce a service mesh: the network configuration is decoupled from the application logic, enabling fine-grained control and observability.
Sounds good! How do I get started?
Now that we have an idea of why we might want a service mesh, let’s take a look at Istio. You’ll need a cluster to install it on; the rest is fairly straightforward.
I’m not going to regurgitate the Istio quickstart guide found here; you can go there and work through the guides. It’s a good showcase for getting started, and the sample application gives a good account of a distributed application where having a service mesh would be worth the time, effort and added complexity.
You install the istioctl command-line tool, then install Istio itself on the cluster, deploy the sample application, and enable Istio injection on the namespace you are working with.
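Condensed, those steps look something like this (a sketch of the quickstart; check istio.io for the current release, and note the demo profile is for learning rather than production):

```sh
# Download the latest Istio release and put istioctl on the PATH
curl -L https://istio.io/downloadIstio | sh -
cd istio-*/
export PATH=$PWD/bin:$PATH

# Install Istio onto the cluster using the demo profile
istioctl install --set profile=demo -y

# Tell Istio to auto-inject sidecars into the default namespace
kubectl label namespace default istio-injection=enabled

# Deploy the Bookinfo sample application
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```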
The Kiali dashboard helps visualise Istio and the configured applications; you can find it in the Istio repo under samples/addons:
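Deploying and opening Kiali is a couple of commands, run from the extracted Istio release directory (Kiali also expects Prometheus to be installed for its metrics):

```sh
# Deploy the Kiali addon from the Istio release
kubectl apply -f samples/addons/kiali.yaml

# Port-forward and open the Kiali dashboard in a browser
istioctl dashboard kiali
```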
With the Grafana dashboards for the Istio services we are monitoring via Prometheus, we now get a ton of insight and observability into how our services are performing. The Grafana and Prometheus deployments and config can be found under samples/addons in the repo:
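Those addons install the same way, again from the release directory:

```sh
# Deploy Prometheus (metrics collection) and Grafana (dashboards)
kubectl apply -f samples/addons/prometheus.yaml
kubectl apply -f samples/addons/grafana.yaml

# Open the pre-built Istio dashboards in Grafana
istioctl dashboard grafana
```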
So I followed the guide, what did I just do? How does Istio…. Istio?
What’s neat is how Istio works. It’s essentially a proxy: the network configuration is applied to the workloads/applications by injecting a sidecar container into the pods. Here’s the productpage pod, which shows the productpage container and the istio-proxy container as a sidecar:
Istio operates by injecting a sidecar proxy (Envoy) into each pod in your application. This proxy handles all incoming and outgoing traffic for the pod, allowing Istio to control and observe traffic without requiring changes to your application code. These sidecars work together under the direction of the Istio control plane, which manages configuration, policy enforcement, and telemetry.
Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later:
kubectl label namespace default istio-injection=enabled
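A quick way to confirm which namespaces have injection enabled is to list the label as a column:

```sh
# Show the istio-injection label for every namespace
kubectl get namespaces -L istio-injection
```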
For example, when you deploy the productpage Pod, Istio injects an Envoy proxy alongside the application container. This proxy intercepts traffic, applying Istio’s routing rules, security policies, and telemetry collection:
Pod: productpage
|-- Container: productpage
|-- Container: istio-proxy
This architecture ensures that network configuration remains decoupled from application logic, enabling fine-grained control and observability.
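You can verify the injection yourself by listing the containers in the pod (the app=productpage label matches the Bookinfo sample; you should see both the app container and istio-proxy):

```sh
# Print the container names in the productpage pod
kubectl get pod -l app=productpage \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```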
The first time I tried to work with Istio, nothing was happening… I had forgotten to label the namespace I was working in! 🤦
Also, keep an eye out for any existing resource quotas and/or network policies in place that might stop Istio from working or behaving as expected.
k label namespace dev-three-tier istio-injection=enabled
k -n dev-three-tier scale deployments backend --replicas 0
k -n dev-three-tier scale deployments frontend --replicas 0
k -n dev-three-tier get deployments.apps
k -n dev-three-tier scale deployments backend --replicas 2
k -n dev-three-tier scale deployments frontend --replicas 1
Here I’ve just added the label to another namespace in my cluster, dev-three-tier (I haven’t added the DB yet!), which is a very simple frontend and backend and, soon, a DB. Because the sidecar is only injected when pods are created, I scaled the deployments down and back up, and Istio has now injected the sidecar proxy container into each of the pods. (By the way, k here is just my shell alias for kubectl.)
Send some traffic to the frontend:
while :; do curl -s http://192.168.1.43; sleep 1; done
I haven’t added a gateway or any rules yet, and it’s a very simple app where Istio is probably overkill; I just wanted to show how simply you can add Istio to other workloads in your cluster.
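For context, when I do add a gateway, a minimal one might look like this (the name frontend-gateway is my own choice, and a VirtualService would still be needed to bind the frontend’s routes to it):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: frontend-gateway
  namespace: dev-three-tier
spec:
  # Attach to the default ingress gateway deployed with Istio
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```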
I’m still learning about Istio, but I’m really enjoying seeing some real-world context and use cases. I wanted to use this article to go through how to get started, share what I found useful, and write some things down in the hope that it helps someone, maybe demystifying the topic in words and context that are more approachable.
As always, drop a comment if I got this wrong or missed something, or if there is something else I should check out. I’m planning on looking at Linkerd in more detail, but if there’s anything else, I’m all ears!
Written by

Ferris Hall
I’m a Google Cloud certified Platform Engineer and an authorised Google Cloud trainer. From a Linux sysadmin background, I now work in the Google Cloud platform. I’m passionate about building and deploying infrastructure systems, automation, driving change, and empowering people in learning and development. I enjoy sharing what I have learnt, best practices, Google Cloud and general DevOps, with people getting started on their journey.