First Impressions of Knative Eventing
You might have heard of Knative – the serverless application development platform on top of Kubernetes. A lot of buzzwords, I know, but in essence it provides multiple APIs for deploying your applications without thinking (too much) about Kubernetes machinery.
At the moment, Knative can be divided into three components:
Knative Serving - Deploying HTTP-based applications – think of AWS Lambda or Google Cloud Functions
Knative Build - Helps you to avoid touching containers at all. You provide code and the rest will happen automagically.
Knative Eventing - Helps you to build event-driven applications 💥
You might know that I'm a huge fan of event-driven architectures. This is why we will explore Knative Eventing in this post a little bit.
What do I need?
Due to the nature of Knative as a serverless platform on top of Kubernetes, you need – 🎉 – a Kubernetes cluster with a service mesh like Istio installed. No worries if you don't know what this Istio thingy is all about. Please ignore it for the moment and just treat it as a requirement.
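If you already have a cluster and just want to verify that Istio is in place, a quick look at its namespace is usually enough. This is a minimal check and assumes Istio was installed into the default istio-system namespace:
# Minimal sanity check, assuming Istio lives in the default istio-system namespace.
# All listed pods should end up in the Running or Completed state.
kubectl get pods -n istio-system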
For everyone who is not a proud owner of a Kubernetes cluster yet, I recommend creating one on the Google Cloud Platform (GCP). In general the following steps are required:
Create a cluster on GCP with the Istio addon
Install Knative
To keep things DRY and avoid duplicating content, you can find a good description in the official Knative docs: Install on Google Kubernetes Engine.
⚠️ Don't be put off by the node autoscaling configuration (1 - 10 nodes). For this demo, you can just remove --enable-autoscaling --min-nodes=1 --max-nodes=10 and replace it with --num-nodes 1. This results in a single-node cluster, which is sufficient for our little exploration here.
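For orientation, the adjusted cluster creation command could look like the following sketch. The cluster name, zone, and machine type are placeholders, and the Istio addon flag follows the Knative GKE guide at the time of writing, so please double-check the exact flags against the official docs:
# Sketch of a single-node cluster with the Istio addon.
# Cluster name, zone, and machine type are placeholders; the Istio addon
# is available on the beta track of gcloud, as described in the Knative GKE guide.
gcloud beta container clusters create knative-demo \
  --zone=europe-west1-b \
  --cluster-version=latest \
  --machine-type=n1-standard-4 \
  --num-nodes=1 \
  --addons=Istio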
Our goal
So what is our goal here? I thought of a simple scenario which demonstrates all the essential pieces without too much ceremony involved. The result is the following simple use case:
Establishing an emitter which sends a JSON-formatted event every minute and having an application which spins up automatically and just displays the respective event.
Let the events flow ...
The described use case above can be translated into the following architecture which demonstrates how the flow of events can be achieved with Knative Eventing:
Although that might look complex, the introduced indirection with the Broker and Trigger makes a lot of sense in the end. Let us analyze it piece by piece.
Broker
The Broker acts as a central eventing gateway. It receives events from sources and delegates them to the respective subscribers who are interested in them.
In our scenario, we will create a broker in the default namespace by executing:
kubectl label namespace default knative-eventing-injection=enabled
You can verify the creation of the broker via:
$ kubectl get broker
NAME READY REASON HOSTNAME AGE
default True default-broker.default.svc.cluster.local 22s
Event Source
Acts as an origin of events. Imagine a microservice kind of architecture. In such a scenario you would have multiple such event sources, each of them emitting domain-specific events.
There are several possibilities for establishing an actual event source. Besides coding your own sources, Knative Eventing ships with a bunch of predefined ones. To name a few:
GitHub: Repository / Organization events, like: PR created, commits pushed, etc.
Apache Kafka: Stream Kafka messages to Knative
Kubernetes: Brings Kubernetes cluster events to Knative
You can find a complete list of existing sources in the Knative docs.
Another interesting predefined component is the Cron Job source. It allows you to emit messages on a schedule. This is exactly the one we want to use in our demo here:
# filename: source.yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: event-emitter
spec:
  schedule: "* * * * *"
  data: '{"message": "Hello world!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Broker
    name: default
This is everything you need to define a Cron Job source. You can apply it via:
kubectl apply -f source.yaml
Afterwards, the event-emitter will ...
... send the event defined in data every minute
... send this message to the default broker
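If you want to double-check that the source has been picked up, you can query the custom resource directly. This is just a quick sanity check and assumes the CronJobSource CRD registers the plural resource name cronjobsources:
# Quick sanity check; assumes the CRD's plural resource name is cronjobsources.
kubectl get cronjobsources event-emitter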
Service
The service is your actual application. The one which is responsible for receiving the respective event and performing some business logic. In our example, we just use an existing container image which takes a received event and prints it on stdout.
The definition looks like:
# service.yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: dumper
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/event_display
The service will be alive after executing kubectl apply -f service.yaml and the logs can be digested via:
kubectl logs -f -l serving.knative.dev/service=dumper -c user-container --since=10m
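Before digging through the logs, you can also verify that the Knative Service itself has been created and is ready. A minimal check, assuming the ksvc short name is registered by your Knative Serving installation:
# Minimal readiness check; ksvc is the short name for Knative Services.
kubectl get ksvc dumper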
Although we have applied the Broker, the Source, and the Service, you might be wondering why you can't see any output yet. This leads us to our last missing puzzle piece, the Trigger.
Trigger
A Trigger is an interesting beast, to be honest. It acts as an indirection layer in which you can define which events should be sent to which service. It basically binds a respective event type to a specific service. Nice and declarative, eh?
# trigger.yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: trigger
spec:
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: dumper
So after a kubectl apply -f trigger.yaml, we have basically created a component which triggers the service every time a new event arrives. You should see output in the logs of the service from now on.
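By the way, the trigger above forwards every event that reaches the broker to the dumper service. If you only care about a specific event type, you can add a filter. The following variant is just a sketch: the sourceAndType filter matches the v1alpha1 API used here, and the type value is an assumption based on what the CronJobSource is supposed to emit, so verify it against your Knative version:
# trigger-filtered.yaml (hypothetical variant)
# Filters on a single CloudEvent type; the type value below is an assumption,
# check the actual type emitted by your CronJobSource version.
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: trigger-filtered
spec:
  filter:
    sourceAndType:
      type: dev.knative.cronjob.event
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: dumper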
Conclusion
In some of my previous customer projects we created such an eventing infrastructure ourselves. You may already have had the experience of building such a thingy yourself. It is quite a lot of work, right? Establishing the broker infrastructure (via RabbitMQ, Apache Kafka, etc.) and defining all the event routing manually, just to name two aspects of the overall work. Knative Eventing ships with the right primitives, IMHO. It enables you to do all the work you had to do manually in the past in a nice, declarative way.
My assumption is that Knative in general is facing a bright future. A lot of engineers have simply avoided touching Kubernetes due to its steep learning curve. You might want to consider the ecosystem again: Knative is here to let you focus on shipping code without a lot of infrastructural overhead.