Istio - Kubernetes Gateway API in Kind Cluster

Thirumurthi S
9 min read

Istio with Kubernetes Gateway API

  • This blog details deploying Istio in a Kind cluster and configuring the Kubernetes Gateway API.

Pre-requisites:

  • Docker Desktop installed and running
  • Kind CLI
  • Helm CLI (v3.15.3+)
  • Understanding of Service Mesh (Istio basics and Gateway API)

Istio with Kubernetes Gateway API

  • The Istio documentation has recommended using the Kubernetes Gateway API since the release of version v1.1.
  • With the Istio Ingress Gateway, a VirtualService is created for routing; with the Kubernetes Gateway API, an HTTPRoute is created instead, which routes traffic to the app's Service.

  • To configure the Kubernetes Gateway API in the Kind cluster, the Gateway API CRDs must be deployed first. This example uses the standard install from https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml.

Summary of components deployed

  • Kind cluster creation with configuration; a few ports are exposed to access the apps.
  • Istio is deployed using Helm charts. The Istio base and istiod charts are deployed with a basic configuration (not production ready).
  • The Kiali server (Istio dashboard) is deployed using Helm charts, configured to use the external services of Prometheus and Grafana deployed in the monitor namespace.
  • The Kubernetes Gateway API CRDs are deployed to the cluster.
  • A Gateway API custom resource is deployed to the cluster.
  • An HTTPRoute is configured and deployed to access Kiali from the browser.
  • The prometheus-community chart is deployed, which installs Prometheus, Grafana and Alertmanager. All the components are deployed in the monitor namespace (not a production-ready configuration). Refer to the chart documentation.
  • A simple backend app (NGINX) is deployed to serve a simple JSON response.
  • An HTTPRoute is configured and deployed to access the backend app.

Note

  • The Istio Helm charts are downloaded to the local machine and then deployed.
  • They can also be deployed directly once the repo is added to the Helm CLI; refer to the Istio documentation.

Representation of the app deployed in the Kind cluster

(image: overview of the apps deployed in the Kind cluster)

Kind Cluster creation

  • Create a Kind cluster named istio-dev with the below configuration.
  • A few ports are exposed in the configuration:
    • Ports 9001 and 9002 are used to access Prometheus and Grafana.
    • Port 8180 is exposed to access the gateway from the host machine. Note that once the Gateway service is created, it has to be edited to configure the matching nodePort.

Kind configuration

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 31000   # gateway HTTP nodePort -> host 8180
    hostPort: 8180
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 31043   # spare HTTPS mapping -> host 9443 (not used in this example)
    hostPort: 9443
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 31100   # Prometheus nodePort -> host 9001
    hostPort: 9001
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 31110   # Grafana nodePort -> host 9002
    hostPort: 9002
    listenAddress: "127.0.0.1"
    protocol: TCP

Command to create kind cluster

  • If the above Kind cluster configuration is stored in a file named cluster_config.yaml, use the below command.
kind create cluster --config cluster_config.yaml --name istio-dev
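  • Optionally, verify the cluster is up before continuing; Kind names the kubeconfig context kind-<cluster-name>, so kind-istio-dev here.
# both nodes (control-plane and worker) should report Ready
kubectl cluster-info --context kind-istio-dev
kubectl get nodes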

Install Istio

Download the charts locally

  • To add the Istio chart repository to Helm, use the below commands.
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
  • The Istio charts are downloaded to a charts/ folder. Pull the Istio base and istiod charts locally with the below commands.
# create and navigate to the charts folder
mkdir charts
cd charts/

# download the istio/base chart
helm pull istio/base --untar

# download the istio/istiod chart
helm pull istio/istiod --untar

Create namespace to deploy the Istio charts

  • Create the istio-system namespace, in which the Istio base and istiod charts will be deployed, using the below command.
kubectl create ns istio-system

Deploy the Istio base charts

  • Deploy the Istio base chart to the Kind cluster.
  • The default revision is deployed in this case by setting the defaultRevision value to default.
# navigate to charts/base
cd charts/base
helm upgrade -i istio-base . -n istio-system --set defaultRevision=default

Deploy the Istio istiod charts

  • Deploy Istio istiod. Make sure the Istio base chart is deployed first, since it installs the Istio CRDs that istiod requires.
# deploy istio/istiod
# navigate to charts/istiod
cd charts/istiod
helm upgrade -i istiod . -n istio-system --wait

Chart status check

  • Once the charts are installed, the status can be checked using the below Helm command; the chart status should be deployed.
$ helm ls -n istio-system

NAME         NAMESPACE       REVISION   UPDATED                       STATUS          CHART                   APP VERSION
istio-base   istio-system    1          2024-09-01 08:10:11 -0700 PDT deployed        base-1.23.0             1.23.0
istiod       istio-system    1          2024-09-01 08:10:28 -0700 PDT deployed        istiod-1.23.0           1.23.0
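  • Optionally, confirm the istiod pod is running before moving on; a quick check:
# the istiod pod should be in Running state
kubectl -n istio-system get pods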

Deploy Kubernetes Gateway API

  • To deploy the Gateway API CRDs, use the below command.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
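  • Optionally, verify that the CRDs were created and that Istio registered its GatewayClass; the class name istio is what the Gateway resource below references, and it may take a moment for istiod to create it after the CRDs appear.
# Gateway API CRDs from the standard install
kubectl get crd gateways.gateway.networking.k8s.io httproutes.gateway.networking.k8s.io

# the istio GatewayClass should be created automatically by istiod
kubectl get gatewayclass istio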

Create and deploy Gateway API custom resource

  • The Kubernetes Gateway API custom resource YAML content looks like below (refer to kuberenetes-gateway-api.yaml).
  • Note, in the case of the Istio Ingress Gateway, the apiVersion would be networking.istio.io/v1alpha3.
# fileName: kuberenetes-gateway-api.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: default
    #hostname: "127.0.0.1"  # commented out here; a wildcard such as "*.example.com" can also be used, refer docs
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All

Deploy the Gateway API resource

  • Create the istio-ingress namespace and deploy the Gateway resource using the below commands.
# create the namespace
kubectl create ns istio-ingress

# deploy the gateway api resource
kubectl apply -f kuberenetes-gateway-api.yaml
  • Once the Gateway is deployed, validate it and check the status of the generated service using the below command.
kubectl -n istio-ingress get svc
  • The output might look like below, with random NodePorts assigned.
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
gateway-istio   LoadBalancer   10.96.201.160   <pending>     15021:32753/TCP,80:30526/TCP   8m20s
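  • The Gateway resource itself can also be checked; the PROGRAMMED column should turn True once Istio has provisioned the gateway deployment and service. The fully qualified resource name avoids ambiguity with Istio's own Gateway kind.
kubectl -n istio-ingress get gateways.gateway.networking.k8s.io gateway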

Edit the Gateway API service to update the nodePort to match Kind cluster configuration

  • Edit the Gateway service using kubectl -n istio-ingress edit svc/gateway-istio.
  • The nodePorts have to be updated as in the below snippet.
  • Incoming traffic on host port 8180 will be routed to nodePort 31000, which is already mapped in the Kind cluster configuration.
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - appProtocol: tcp
      name: status-port
      nodePort: 31021  # <--- updated to 31021, a port not otherwise used in this example
      port: 15021
      protocol: TCP
      targetPort: 15021
    - appProtocol: http
      name: default
      nodePort: 31000  # <--- update from the random nodePort to 31000, as configured in the Kind config
      port: 80
      protocol: TCP
      targetPort: 80
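  • Alternatively, the same nodePort changes can be applied non-interactively with kubectl patch. Below is a minimal sketch; it assumes the port order shown in the snippet above (status-port first, the default HTTP listener second), so verify the order in your service before patching.
# set status-port to 31021 and the HTTP listener nodePort to 31000
kubectl -n istio-ingress patch svc gateway-istio --type=json -p='[
  {"op": "replace", "path": "/spec/ports/0/nodePort", "value": 31021},
  {"op": "replace", "path": "/spec/ports/1/nodePort", "value": 31000}
]'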

Deploy Kiali Server

  • To deploy the Kiali server using its Helm chart, add the repo to the Helm CLI and download the chart to the local charts folder.
# add the repo to the helm cli
helm repo add kiali https://kiali.org/helm-charts
helm repo update
  • Navigate to the charts folder and download the chart locally.
cd charts/
helm pull kiali/kiali-server --untar
  • Execute the below command from the downloaded chart folder kiali-server.

Note :-

  • The external_services URLs are configured to access Prometheus and Grafana. These components will be deployed later by a single chart.
  • The components are deployed to the monitor namespace, hence the URL pattern <service-name>.<namespace>:port
helm upgrade -i kiali-server . \
--set auth.strategy="anonymous" \
--set external_services.prometheus.url="http://prometheus-operated.monitor:9090" \
--set external_services.grafana.url="http://prometheus-grafana.monitor:80" \
--set external_services.grafana.in_cluster_url="http://prometheus-grafana.monitor:80" \
-n istio-system
  • Verify the status of the chart deployment; the output might look like below.
$ helm ls -n istio-system
NAME         NAMESPACE      REVISION   UPDATED                       STATUS          CHART                   APP VERSION
istio-base   istio-system    1         2024-09-01 08:10:11 -0700 PDT deployed        base-1.23.0             1.23.0
istiod       istio-system    1         2024-09-01 08:10:28 -0700 PDT deployed        istiod-1.23.0           1.23.0
kiali-server istio-system    1         2024-09-01 08:34:58 -0700 PDT deployed        kiali-server-1.89.0     v1.89.0

Create HTTPRoute resource to access Kiali server

  • Create an HTTPRoute resource for Kiali as shown in the below YAML content, and save the config in a file named kiali-http-route.yaml.
# filename: kiali-http-route.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: kiali-http
  namespace: istio-system
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames: ["127.0.0.1"]  # without hostname the service would not be accessible
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /kiali
    backendRefs:
    - name: kiali
      port: 20001

Deploy Kiali HTTPRoute

  • To deploy the Kiali route, use the below command.
# the resource will be deployed to the istio-system namespace and uses the Gateway deployed earlier
kubectl apply -f kiali-http-route.yaml
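  • To confirm the route is in place, check that it lists the expected hostname and that the Kiali service exposes port 20001 (Kiali's default); kubectl describe shows the Accepted condition in the route status.
# the route should reference the gateway in istio-ingress as its parent
kubectl -n istio-system get httproute kiali-http

# the kiali service listens on 20001 by default
kubectl -n istio-system get svc kiali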

Accessing Kiali from browser

  • The address to access the Kiali UI is - http://127.0.0.1:8180/kiali

The Kiali UI looks like the below snapshots. The NGINX backend app, Prometheus and Grafana were already deployed when the snapshots were taken.

(images: Kiali UI snapshots)

Note :- Kiali UI might throw warning messages if the Prometheus chart is not deployed.

Deploy Prometheus

  • Add the repo to Helm and download the chart to the local charts folder.
  • Note, the below is not a production-ready configuration and requires further hardening; refer to the documentation.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
cd charts/
helm pull prometheus-community/kube-prometheus-stack --untar

Create namespace and deploy the charts to Kind cluster

  • To create the monitor namespace and deploy Prometheus, use the below set of commands.
# create the namespace
kubectl create ns monitor

# navigate to the downloaded chart
cd charts/kube-prometheus-stack

# deploy prometheus
helm upgrade -i prometheus . \
--namespace monitor \
--set prometheus.service.nodePort=31100 \
--set prometheus.service.type=NodePort \
--set grafana.service.nodePort=31110 \
--set grafana.service.type=NodePort \
--set alertmanager.service.nodePort=31120 \
--set alertmanager.service.type=NodePort \
--set prometheus-node-exporter.service.nodePort=31130 \
--set prometheus-node-exporter.service.type=NodePort

Status check of Prometheus chart deployment

  • To verify the status of the chart deployment, use the Helm command mentioned below; the output will be similar to the below snippet.
$ helm ls -n monitor
NAME        NAMESPACE   REVISION  UPDATED                       STATUS    CHART                           APP VERSION
prometheus  monitor     1         2024-09-01 08:43:38 -0700 PDT deployed  kube-prometheus-stack-62.3.1    v0.76.0
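  • The services in the monitor namespace should now be of type NodePort with the nodePorts set above (31100 for Prometheus, 31110 for Grafana), matching the Kind port mappings.
kubectl -n monitor get svc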

Access Prometheus and Grafana

  • Accessing Prometheus
    • To access Prometheus, use http://127.0.0.1:9001/ or http://localhost:9001

(image: Prometheus UI)

  • Accessing Grafana
    • To access Grafana, use http://127.0.0.1:9002/ or http://localhost:9002. When prompted, use username: admin and password: prom-operator.

(image: Grafana UI)

Creating the NGINX backend app

Deploy the backend app to kind cluster

  • The below YAML content is the resource definition for the backend app, including the namespace, ConfigMap, deployment and service.
  • The namespace label istio-injection: enabled will automatically inject the Istio proxy sidecar when the backend pod is created.
  • Save the YAML content to a file named backend_app_deployment.yaml.
# filename: backend_app_deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: backend-app
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-nginx-config
  namespace: backend-app
data:
  nginx.conf: |
    worker_processes auto;
    error_log stderr notice;
    events {
      worker_connections 1024;
    }
    http {
      variables_hash_max_size 1024;

      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
      access_log off;
      real_ip_header X-Real-IP;
      charset utf-8;

      server {
        listen 80;

        location /greet {
          default_type application/json;
          return 200 '{"status":"OK","message":"Greetings!! from server","current_time":"$time_iso8601"}';
        }
      }
    }

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-server
  namespace: backend-app
  labels:
    app: backend-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-server
  template:
    metadata:
      labels:
        app: backend-server
    spec:
      volumes:
       - name: nginx-config
         configMap:
           name: backend-nginx-config
           items:
           - key: nginx.conf
             path: nginx.conf
      containers:
      - name: backend-server
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
         - name: nginx-config
           mountPath: /etc/nginx
        resources:
          requests: 
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: backend-app
spec:
  selector:
    app: backend-server
  ports:
    - name: tcp-port
      protocol: TCP
      port: 8081
      targetPort: 80
---
  • To deploy the backend app, issue the below command.
# the namespace will be created; note the label istio-injection: enabled, which will inject an Envoy proxy sidecar automatically
kubectl apply -f backend_app_deployment.yaml
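  • To confirm the sidecar injection worked, the backend pod should report 2/2 containers ready (nginx plus the Envoy proxy).
kubectl -n backend-app get pods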

Define HTTPRoute for the backend app

  • To access the backend, we need to define an HTTPRoute like below and save it to a file named app-httproute.yaml.
# app-httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http
  namespace: backend-app
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames: 
   - "127.0.0.1"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /greet
    backendRefs:
    - name: backend-svc
      port: 8081
  • To deploy the HTTPRoute, use the below command.
kubectl apply -f app-httproute.yaml
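  • Before switching to the browser, the route can be tested from the host with curl; the gateway listens on host port 8180 as configured earlier.
# should return the JSON greeting from the NGINX backend
curl -i http://127.0.0.1:8180/greet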

Access the Backend app API

  • From the browser, use http://127.0.0.1:8180/greet; you should see a response like below.

(image: JSON response from the backend app)
