Building a Real-world Kubernetes Operator: Part 7
Introduction
In this section, we will create Helm charts for both the operator and adapter, which will then be used to deploy them to a Kubernetes cluster.
Helm Chart
You're probably familiar with Helm, the package manager for Kubernetes. If not, think of it as a tool similar to an operating system's package manager, but with extra capabilities like templating.
Helm simplifies the process of consuming or distributing Kubernetes applications. By managing Kubernetes packages, often referred to as charts, Helm has become an indispensable tool in the Kubernetes ecosystem over the past few years.
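For example, Helm can render a chart's templates locally, substituting values at render time — a quick illustration (`my-release` and `./mychart` are placeholders here):

```shell
# Render manifests from a chart without installing anything,
# overriding a value on the command line
helm template my-release ./mychart --set replicaCount=2
```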
Nimbus
Let's create a new directory called `deployment` to keep all the charts:

```shell
mkdir -p deployment && cd "$_"
```
Bootstrap a new chart for the operator:
```shell
helm create nimbus
```
You'll see files similar to the following:
```shell
$ tree nimbus
nimbus
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

4 directories, 10 files
```
These are just scaffolded files, so let's update them accordingly. First, remove the ones we don't need:
```shell
# From the deployment/nimbus/templates directory
rm -rf tests hpa.yaml ingress.yaml service.yaml
```
Update the `Makefile` to make our lives easier:
```makefile
# Image URL to use all building/pushing image targets
IMG ?= anuragrajawat/nimbus
# Image Tag to use all building/pushing image targets
TAG ?= v0.1

...

.PHONY: manifests
manifests: controller-gen kustomize ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
	$(CONTROLLER_GEN) rbac:roleName=nimbus-operator crd webhook paths=./internal/... paths=./api/... output:crd:artifacts:config=config/crd/bases
	$(KUSTOMIZE) build config/crd > deployment/nimbus/templates/crds.yaml

.PHONY: docker-build
docker-build: ## Build docker image with the manager.
	$(CONTAINER_TOOL) build -t ${IMG}:${TAG} --build-arg VERSION=${TAG} .

.PHONY: docker-push
docker-push: ## Push docker image with the manager.
	$(CONTAINER_TOOL) push ${IMG}:${TAG}

PLATFORMS ?= linux/arm64,linux/amd64
.PHONY: docker-buildx
docker-buildx: ## Build and push docker image for the manager for cross-platform support
	# copy existing Dockerfile and insert --platform=${BUILDPLATFORM} into Dockerfile.cross, and preserve the original Dockerfile
	sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
	- $(CONTAINER_TOOL) buildx create --name project-v3-builder
	$(CONTAINER_TOOL) buildx use project-v3-builder
	- $(CONTAINER_TOOL) buildx build --push --platform=$(PLATFORMS) --build-arg VERSION=${TAG} --tag ${IMG}:${TAG} -f Dockerfile.cross . || { $(CONTAINER_TOOL) buildx rm project-v3-builder; rm Dockerfile.cross; exit 1; }
	- $(CONTAINER_TOOL) buildx rm project-v3-builder
	rm Dockerfile.cross

...
```
We updated the `manifests` target so that the CRDs stay in sync; we no longer have to copy and paste them into the `templates` directory manually.
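For example, after changing an API type you can regenerate everything and confirm the chart picked it up — a quick check from the project root:

```shell
make manifests
# the regenerated CRDs now live inside the chart
ls -l deployment/nimbus/templates/crds.yaml
```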
Update the `Dockerfile` in the project's root directory as follows:
```dockerfile
# Build the manager binary
FROM golang:1.22 AS builder
ARG TARGETOS
ARG TARGETARCH

WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download

# Copy the go source
COPY cmd/ cmd/
COPY api/ api/
COPY internal/controller/ internal/controller/
COPY pkg/builder/ pkg/builder/
COPY pkg/utils/ pkg/utils/

# Build
# GOARCH is left without a default value so the binary is built for the
# platform the build runs on. For example, running `make docker-build` on
# Apple Silicon produces linux/arm64, while on x86 it produces linux/amd64.
# Leaving it empty ensures the container and the binary it ships target the
# same platform.
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532

ENTRYPOINT ["/manager"]
```
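With these targets in place, building the image is a one-liner. For example, using the `IMG` and `TAG` defaults from the Makefile above:

```shell
make docker-build     # single-platform local build
make docker-buildx    # multi-arch build and push (requires buildx)
```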
Now update the Helm template files:

`deployment.yaml`

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nimbus.fullname" . }}
  labels:
    {{- include "nimbus.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "nimbus.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "nimbus.labels" . | nindent 8 }}
    spec:
      serviceAccountName: {{ include "nimbus.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```
`NOTES.txt`

```
Thank you for installing Nimbus. Your release is named '{{ include "nimbus.fullname" . }}' and installed in {{ .Release.Namespace }}.
```
Create an `rbac.yaml` for the required roles and their bindings:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    {{- include "nimbus.labels" . | nindent 4 }}
  name: leader-election-role
  namespace: nimbus
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "nimbus.serviceAccountName" . }}
  labels:
    {{- include "nimbus.labels" . | nindent 4 }}
rules:
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - securityintentbindings
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - securityintentbindings/status
    verbs:
      - get
      - patch
      - update
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - securityintents
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - securityintents/status
    verbs:
      - get
      - patch
      - update
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - nimbuspolicies
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - nimbuspolicies/status
    verbs:
      - get
      - patch
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    {{- include "nimbus.labels" . | nindent 4 }}
  name: leader-election-rolebinding
  namespace: nimbus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election-role
subjects:
  - kind: ServiceAccount
    name: {{ include "nimbus.serviceAccountName" . }}
    namespace: nimbus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    {{- include "nimbus.labels" . | nindent 4 }}
  name: {{ include "nimbus.serviceAccountName" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "nimbus.serviceAccountName" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "nimbus.serviceAccountName" . }}
    namespace: nimbus
```
`serviceaccount.yaml`

```yaml
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "nimbus.serviceAccountName" . }}
  labels:
    {{- include "nimbus.labels" . | nindent 4 }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}
```
`values.yaml`

```yaml
replicaCount: 1

image:
  repository: anuragrajawat/nimbus
  pullPolicy: IfNotPresent
  tag: "v0.1"

nameOverride: ""
fullnameOverride: "nimbus-operator"

serviceAccount:
  create: true
  automount: true
  name: "nimbus-operator"

podSecurityContext:
  fsGroup: 2000

securityContext:
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
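Before writing the installation docs, it's worth sanity-checking that the chart renders cleanly — for example, from the `deployment` directory:

```shell
helm lint nimbus
# render the manifests locally to inspect what would be applied
helm template nimbus-operator nimbus --namespace nimbus
```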
Create a `Readme.md` file for the installation instructions:
# Install Nimbus
Install Nimbus operator using the official helm chart.
```shell
helm repo add nimbus https://anurag-rajawat.github.io/charts
helm repo update nimbus
helm upgrade --install nimbus-operator nimbus/nimbus -n nimbus --create-namespace
```
Install Nimbus using Helm charts locally (for testing)
```shell
cd deployment/nimbus/
helm upgrade --install nimbus-operator . -n nimbus --create-namespace
```
## Values
| Key | Type | Default | Description |
|------------------|--------|-----------------------|--------------------------------------------------------|
| image.repository | string | anuragrajawat/nimbus  | Image repository from which to pull the operator image |
| image.pullPolicy | string | IfNotPresent | Operator image pull policy |
| image.tag | string | v0.1 | Operator image tag |
## Uninstall the Operator
To uninstall, just run:
```bash
helm uninstall nimbus-operator -n nimbus
```
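Once installed, a quick way to confirm everything landed (the names follow the `values.yaml` above):

```shell
kubectl -n nimbus get deploy nimbus-operator
kubectl get crds | grep nimbus
```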
That's how we add a Helm chart. Let's add another one for our first adapter.
Nimbus KubeArmor
Bootstrap a new chart for the adapter:
```shell
helm create nimbus-kubearmor
```
You should have a similar directory structure after removing unnecessary files:
```shell
$ tree nimbus-kubearmor
nimbus-kubearmor
├── Chart.yaml
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── role.yaml
│   ├── rolebinding.yaml
│   └── serviceaccount.yaml
└── values.yaml

2 directories, 8 files
```
Edit the files:

`deployment.yaml`

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nimbus-kubearmor.fullname" . }}
  labels:
    {{- include "nimbus-kubearmor.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "nimbus-kubearmor.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "nimbus-kubearmor.labels" . | nindent 8 }}
    spec:
      serviceAccountName: {{ include "nimbus-kubearmor.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```
`role.yaml`

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nimbus-kubearmor
  labels:
    {{- include "nimbus-kubearmor.labels" . | nindent 4 }}
rules:
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - nimbuspolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - nimbuspolicies/status
    verbs:
      - get
      - update
  - apiGroups:
      - intent.security.nimbus.com
    resources:
      - securityintentbindings
    verbs:
      - get
  - apiGroups:
      - security.kubearmor.com
    resources:
      - kubearmorpolicies
    verbs:
      - create
      - delete
      - get
      - list
      - update
      - watch
```
`rolebinding.yaml`

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "nimbus-kubearmor.fullname" . }}
  labels:
    {{- include "nimbus-kubearmor.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "nimbus-kubearmor.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "nimbus-kubearmor.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
```
`serviceaccount.yaml`

```yaml
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "nimbus-kubearmor.serviceAccountName" . }}
  labels:
    {{- include "nimbus-kubearmor.labels" . | nindent 4 }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}
```
`values.yaml`

```yaml
replicaCount: 1

image:
  repository: anuragrajawat/nimbus-kubearmor
  pullPolicy: IfNotPresent
  tag: "v0.1"

nameOverride: ""
fullnameOverride: "nimbus-kubearmor"

serviceAccount:
  create: true
  automount: true
  name: "nimbus-kubearmor"

podSecurityContext:
  fsGroup: 2000

securityContext:
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 65536

resources:
  limits:
    cpu: 50m
    memory: 64Mi
  requests:
    cpu: 50m
    memory: 64Mi

# AutoDeploy KubeArmor with default configs
kubearmor-operator:
  autoDeploy: true
```
`Chart.yaml`

```yaml
apiVersion: v2
name: nimbus-kubearmor
description: A Helm chart for Nimbus KubeArmor
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

dependencies:
  - name: kubearmor-operator
    version: ">= 1.4.3"
    repository: https://kubearmor.github.io/charts
    condition: kubearmor-operator.autoDeploy

kubeVersion: ">= 1.25"
```
We've included KubeArmor as a dependency. This means that when you set the `kubearmor-operator.autoDeploy` field to `true` in the `values.yaml` file, Helm will automatically install KubeArmor along with the adapter. As a result, there's no need to install the security engine separately.
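Since the dependency is declared in `Chart.yaml`, Helm can fetch and bundle it for you, and the condition lets users opt out — for example, from the project root:

```shell
# pull the kubearmor-operator chart declared in Chart.yaml into charts/
helm dependency update deployment/nimbus-kubearmor
# render without the bundled KubeArmor, if you already run it yourself
helm template nimbus-kubearmor deployment/nimbus-kubearmor \
  --set kubearmor-operator.autoDeploy=false
```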
Don't forget to create a `Readme.md` file:
# Install KubeArmor adapter
Install `nimbus-kubearmor` adapter using the official Helm chart.
```shell
helm repo add nimbus https://anurag-rajawat.github.io/charts
helm repo update nimbus
helm upgrade --dependency-update --install nimbus-kubearmor nimbus/nimbus-kubearmor -n nimbus
```
Install `nimbus-kubearmor` adapter using Helm charts locally (for testing)
```bash
cd deployment/nimbus-kubearmor/
helm upgrade --dependency-update --install nimbus-kubearmor . -n nimbus
```
## Values
| Key | Type | Default | Description |
|-------------------------------|--------|--------------------------------|----------------------------------------------------------------------------|
| image.repository | string | anuragrajawat/nimbus-kubearmor | Image repository from which to pull the `nimbus-kubearmor` adapter's image |
| image.pullPolicy | string | IfNotPresent | `nimbus-kubearmor` adapter image pull policy |
| image.tag | string | v0.1 | `nimbus-kubearmor` adapter image tag |
| kubearmor-operator.autoDeploy | bool | true | Auto deploy [KubeArmor](https://kubearmor.io/) with default configurations |
## Uninstall the KubeArmor adapter
To uninstall, just run:
```bash
helm uninstall nimbus-kubearmor -n nimbus
```
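As with the operator, you can verify the adapter after installation — and, if `autoDeploy` was left enabled, the KubeArmor components it pulled in:

```shell
kubectl -n nimbus get deploy nimbus-kubearmor
kubectl get pods -A | grep kubearmor
```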
To build a container image we need a `Dockerfile`, so let's create one as well:
```dockerfile
FROM golang:1.22 AS builder
ARG TARGETOS
ARG TARGETARCH

WORKDIR /nimbus
# relative deps required by the adapter
COPY api/ api/
COPY pkg/ pkg/
COPY go.mod go.mod
COPY go.sum go.sum

# nimbus-kubearmor directory
ARG ADAPTER_DIR=pkg/adapter/nimbus-kubearmor
WORKDIR /nimbus/$ADAPTER_DIR

# Copy Go modules and manifests
COPY $ADAPTER_DIR/go.mod go.mod
COPY $ADAPTER_DIR/go.sum go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download

COPY $ADAPTER_DIR/builder builder
COPY $ADAPTER_DIR/cmd cmd
COPY $ADAPTER_DIR/manager manager
COPY $ADAPTER_DIR/watcher watcher

# Build
# GOARCH is left without a default value so the binary is built for the
# platform the build runs on. For example, running `make docker-build` on
# Apple Silicon produces linux/arm64, while on x86 it produces linux/amd64.
# Leaving it empty ensures the container and the binary it ships target the
# same platform.
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -ldflags="-s" -o nimbus-kubearmor cmd/main.go

FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /nimbus/pkg/adapter/nimbus-kubearmor/nimbus-kubearmor .
USER 65532:65532

ENTRYPOINT ["/nimbus-kubearmor"]
```
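Note that the build context must be the repository root, because the Dockerfile copies `api/` and `pkg/` from there. A manual build would look something like this (the `image` target in the Makefile below does the equivalent from the adapter directory):

```shell
# from the repository root
docker build -t anuragrajawat/nimbus-kubearmor:v0.1 \
  -f pkg/adapter/nimbus-kubearmor/Dockerfile .
```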
And a `Makefile` to make our lives easier:
```makefile
# Image URL to use all building/pushing image targets
IMG ?= anuragrajawat/nimbus-kubearmor
# Image Tag to use all building/pushing image targets
TAG ?= v0.1

CONTAINER_TOOL ?= docker
BINARY ?= bin/nimbus-kubearmor
CONTROLLER_TOOLS_VERSION ?= v0.14.0

LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
	mkdir -p $(LOCALBIN)

CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen

.PHONY: help
help: ## Display this help.
	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n  make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf "  \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)

.PHONY: build
build: ## Build nimbus-kubearmor executable.
	@go build -ldflags="-s" -o ${BINARY} cmd/main.go

run: build ## Run nimbus-kubearmor.
	@./${BINARY}

.PHONY: image
image: ## Build nimbus-kubearmor container image.
	$(CONTAINER_TOOL) build -t ${IMG}:${TAG} --build-arg VERSION=${TAG} -f ./Dockerfile ../../../

.PHONY: push-image
push-image: ## Push nimbus-kubearmor container image.
	$(CONTAINER_TOOL) push ${IMG}:${TAG}

PLATFORMS ?= linux/arm64,linux/amd64
.PHONY: imagex
imagex: ## Build and push container image for cross-platform support
	sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
	- $(CONTAINER_TOOL) buildx create --name project-v3-builder
	$(CONTAINER_TOOL) buildx use project-v3-builder
	- $(CONTAINER_TOOL) buildx build --push --platform=$(PLATFORMS) --build-arg VERSION=${TAG} --tag ${IMG}:${TAG} -f Dockerfile.cross ../../../ || { $(CONTAINER_TOOL) buildx rm project-v3-builder; rm Dockerfile.cross; exit 1; }
	- $(CONTAINER_TOOL) buildx rm project-v3-builder
	rm Dockerfile.cross

.PHONY: generate
generate: controller-gen ## Generate ClusterRole.
	$(CONTROLLER_GEN) rbac:roleName=nimbus-kubearmor paths="./..." output:dir=../../../deployment/nimbus-kubearmor/templates/

.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary. If wrong version is installed, it will be overwritten.
$(CONTROLLER_GEN): $(LOCALBIN)
	test -s $(LOCALBIN)/controller-gen && $(LOCALBIN)/controller-gen --version | grep -q $(CONTROLLER_TOOLS_VERSION) || \
	GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)
```
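Typical usage, run from the `pkg/adapter/nimbus-kubearmor` directory:

```shell
make build        # compile the adapter binary into bin/
make image        # build the container image
make push-image   # push the image to the registry
```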
We've added the `generate` target to create the ClusterRole automatically, so we don't have to write it by hand.
Add the following markers to the `pkg/adapter/nimbus-kubearmor/manager/manager.go` file:
```go
package manager

...
...

//+kubebuilder:rbac:groups=intent.security.nimbus.com,resources=securityintentbindings,verbs=get
//+kubebuilder:rbac:groups=intent.security.nimbus.com,resources=nimbuspolicies,verbs=get;list;watch
//+kubebuilder:rbac:groups=intent.security.nimbus.com,resources=nimbuspolicies/status,verbs=get;update
//+kubebuilder:rbac:groups=security.kubearmor.com,resources=kubearmorpolicies,verbs=get;create;delete;list;watch;update

func Run(ctx context.Context) {
	...
	...
}
```
The `generate` make target uses these markers to generate the ClusterRole.
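Whenever the markers change, re-run the target and commit the refreshed manifest — controller-gen writes it into the chart's templates directory:

```shell
# from pkg/adapter/nimbus-kubearmor
make generate
git diff ../../../deployment/nimbus-kubearmor/templates/
```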
I've left adding nimbus-kubearmor as a dependency of the nimbus-operator chart as an exercise for you.
Adding a new Helm chart follows a similar process. I hope you now understand how to create Helm charts.
That's it for this part. You can find the complete code here.
Please feel free to comment or criticize :)
Summary
In this article, we walked through creating Helm charts for deploying the Nimbus operator and its KubeArmor adapter: bootstrapping the chart directories, adapting the default templates to our needs, and adding convenient Makefile targets for building and pushing container images. We also covered the RBAC configuration, service accounts, and pulling in KubeArmor as a chart dependency for seamless integration. The result is a fully functional deployment setup for both the operator and the adapter, complete with installation instructions.