KubeCon Paris 2024: A Journey Through the Cloud Native Stack
This is a short write-up of my visit to KubeCon in Paris this year, setting out some of the ideas I encountered. These fit into three broad themes: provisioning and managing infrastructure, building a developer platform, and microservices tooling. Conceptually, the three mini-essays build on each other, moving from a lower to a higher level of abstraction.
Provisioning and Managing Infrastructure: OpenTofu
OpenTofu forked from HashiCorp's Terraform project last year, and your team may be considering migrating.
The OpenTofu Powerusers Panel comprised one core OpenTofu developer and several professionals who use Terraform, and more recently OpenTofu, at a large scale. The migration process was covered briefly, with several panellists saying that it went smoothly. Many conversations centred on the limitations of Terraform and the freedom the OpenTofu community has to address them.
How might OpenTofu depart from Terraform?
HCL is a declarative domain-specific language lacking many of the primitives and “language features” we expect from a programming language. Proponents of simplicity argued that such a declarative language – which maps neatly onto the JSON representations of cloud resources that live in state files – is a big advantage; WET (write everything twice) can be better than DRY (don’t repeat yourself) because it makes fewer cognitive demands. On the other hand, tools like Terragrunt are gaining popularity because of a real demand for a DRY-er version of Terraform. With this in mind, OpenTofu could position itself as a more full-featured alternative to Terraform by incorporating the better parts of Terragrunt.
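To illustrate how closely declarative HCL tracks the underlying state, consider a minimal resource block (the provider, bucket name and tags here are purely illustrative): the attributes read almost one-to-one like the JSON the provider will store in the state file.

```hcl
# A declarative HCL block: attribute names map almost directly
# onto the JSON representation kept in the state file.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # hypothetical name

  tags = {
    team = "platform"
  }
}
```

The cost of this simplicity is repetition: ten similar buckets means ten similar blocks (or reaching for modules and for_each), which is exactly the tension the WET-versus-DRY debate is about.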
Some panellists went further, suggesting that Tofu-flavoured HCL could be treated as a full-fledged programming language. The idea of adding an if statement was floated a few times, although how this would work in practice is unclear. An if statement belongs to the semantics of imperative control flow, and as such would demand that HCL be composed of blocks of statements executed in sequence; if we want that, we can use Pulumi with Python or TypeScript. Realistically, I think we’re stuck with the ternary operator in OpenTofu as far as conditionals are concerned. Perhaps what people truly wanted from an if statement was a way to say “only execute this block of HCL under these conditions” without adding a count to every resource; there’s already a GitHub issue that asks for something very similar.
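For concreteness, this is what conditional creation looks like in today's Terraform/OpenTofu: a ternary expression feeding count on each resource (the variable and resource names below are hypothetical).

```hcl
variable "enable_dashboard" {
  type    = bool
  default = false
}

# The ternary operator is the only conditional available:
# create exactly one instance of this resource, or none.
resource "aws_cloudwatch_dashboard" "main" {
  count          = var.enable_dashboard ? 1 : 0
  dashboard_name = "main"
  dashboard_body = jsonencode({ widgets = [] })
}
```

A block-level conditional would let you guard a whole group of resources at once instead of sprinkling count (and the resulting list indexing) across each one.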
Rather than making HCL an imperative programming language, panellists also raised the possibility of letting developers use languages other than Go to extend OpenTofu. Go is a useful language, but in striving for simplicity it lacks expressiveness. Supporting other languages could lower the barrier to entry for application developers and rob Pulumi of its unique selling point.
Conclusion
Whatever the future of the OpenTofu project, the issues section of its GitHub repo is worth keeping an eye on. Now is the time to upvote the changes we want and maybe even create new issues.
Building a Developer Platform With Kubernetes
Kube was built to run containers.
Kube was never built for platforms.
Why Kubernetes Is Inappropriate for Platforms, and How to Make It Better
We now move onto something much higher level than provisioning cloud infrastructure: building platforms.
This was an ambitious talk about platforms that are multi-tenant, multi-region and multi-cloud. For the speakers, Kubernetes provides a uniform API with which to build a platform but is not a platform in and of itself. The overarching goal is to give users of the platform access to Kubernetes-cluster-like workspaces while abstracting away where they’re hosted and how users are separated from each other. It’s worth pointing out at the beginning that one of the purposes of this rather abstract talk is to promote the platform-building framework kcp.io.
Platform User Personas
The core idea of their talk is that a platform has three main user personas: platform owner, service owner and user.
The platform owner wants to give users a well-defined and well-documented platform to make their jobs easy. They don’t want to worry about details such as individual cloud providers and want the platform to be horizontally scalable without implementing this from scratch. They want to give users services to consume but let service providers focus on making these services good rather than dictating which services must be used.
The service providers want to offer users global services as products. In practice, this involves deploying Kubernetes controllers; for it to work well, the service providers need to be able to deploy controllers without knowing whether the service is single-region or multi-region. To service providers, compute – whether it’s managed by Kubernetes or something else – should just be another API. My intuition here is that they’re almost proposing building what we think of as cloud services on top of Kubernetes, by treating Kubernetes in much the same way as AWS Lambda treats the EC2 service.
Users are developers or application owners. They get access to workspaces, or “decoupled multi-tenant distributed control planes”. However, the control planes should be presented to the user in a logically connected manner; the user should be able to move from app to app, region to region, and cloud to cloud without worrying about kubeconfigs.
Conclusion
The talk puts forward quite a compelling vision, but it does seem to imply offering services on a similar scale to AWS. Maybe a sufficiently large engineering organisation would benefit from all of the abstraction that workspaces provide, rather than giving engineers direct control plane access via Teleport, for example.
Early in the talk, they mention competing tools like Crossplane, Karmada and the Cluster API for managing multi-cluster setups. If you're considering adopting such a tool, it's up to you whether their talk makes a compelling case for kcp.io over the alternatives.
Platform Tooling for Microservices: Knative
Finally, let's discuss a talk which fits into the service provider persona above. The service here is serverless functions on top of Kubernetes.
The talk Knative Functions Deep-Dive: Why You Should Use Knative Functions For Your Next Microservices Application introduces the developer-focused “functions” tooling provided by the Knative project. The project started in 2018 and has been incubating in the CNCF since 2022. It provides building blocks for serverless workloads on Kubernetes.
What does it do?
Knative has two main capabilities: serving and eventing.
Serving
Serving gives you something akin to AWS Lambda functions running in your Kubernetes clusters: request handlers that can be written in any language and can autoscale independently, including to zero.
The Knative service custom resource manages versioned configuration which in turn manages deployments. This could no doubt be achieved using plain Kubernetes deployments and services, but it would be a lot more hands-on.
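As a sketch of what this looks like in practice, here is a minimal Knative Service manifest (the name, image and scale bounds are made up). The annotations on the revision template are what enable scale-to-zero, and each change to the template produces a new versioned revision behind the scenes.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Allow the revision to scale all the way down to zero pods
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "5"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest  # hypothetical image
          env:
            - name: TARGET
              value: "world"
```

Achieving the same behaviour with plain Deployments, Services and an HPA would take considerably more YAML, and scale-to-zero would need extra machinery entirely.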
Eventing
Eventing is what it sounds like: a framework for event-driven systems. It supports RabbitMQ and Kafka as brokers.
Eventing allows events to be collected from sources, including Kafka and many AWS services such as SQS, SNS, S3, Kinesis and DynamoDB. Messages are sent from these sources to event sinks. Sinks can be vanilla Kubernetes services, Knative services, custom resources and even KafkaSink. Channels are a bit like SNS topics and can be combined with subscriptions to fan out events to multiple sinks. Triggers can also be used to subscribe to events, and enable filtering of which events to receive. This summary is probably too high-level to convey how eventing works; I’d certainly need more time with the docs to properly grok the system.
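To make the trigger-and-filter part slightly more concrete, here is a sketch of a Trigger that subscribes a service to one event type on a broker (the broker, event type and subscriber names are hypothetical):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger   # hypothetical name
spec:
  broker: default
  filter:
    attributes:
      # Only deliver events whose CloudEvents "type" attribute matches
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor     # hypothetical Knative Service
```

Events from any configured source land on the broker, and each Trigger fans the matching subset out to its subscriber.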
Anyone adopting Knative eventing would also need to adopt CloudEvents' event schemas. CloudEvents is a graduated CNCF project; here are some example JSON payloads.
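For a flavour of what those schemas look like, this is a minimal CloudEvents 1.0 payload in JSON form (the type, source and data values are invented for illustration):

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service",
  "id": "a1b2c3d4",
  "time": "2024-03-21T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "orderId": 1234,
    "amount": 42.5
  }
}
```

The required attributes (specversion, type, source, id) give every event a uniform envelope, which is what lets Trigger filters match on attributes like type regardless of what the payload in data contains.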
How does Knative fit into typical production deployments?
Installing the custom resource definitions?
For the sake of argument, typical here means using Helm as a way of packaging and deploying applications to Kubernetes.
Despite some demand for a Helm chart, YAML manifests and an operator are the only officially supported installation methods recommended for production. It should be possible to put the YAML into the templates directory of a chart that we maintain and install it without much fuss.
Deploying application workloads?
Workloads can be deployed using the kn or func CLI, which requires access to a container registry and a Kubernetes cluster.
Deploying a function creates an OCI container image for your function, and pushes this container image to your image registry. The function is deployed to the cluster as a Knative Service. Redeploying a function updates the container image and resulting Service that is running on your cluster.
While the Knative functions tooling simplifies the process of deploying application code, it competes with Helm as a way of deploying and updating Kubernetes resources. The path of least friction would be to build container images and reference these in manifests for Knative custom resources in Helm templates.
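That path might look something like the following: a template in our own chart that renders a Knative Service pointing at an image built elsewhere in the pipeline. The file layout, values and names here are all hypothetical.

```yaml
# templates/knative-service.yaml in a chart we maintain (hypothetical)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: {{ .Release.Name }}-fn
spec:
  template:
    spec:
      containers:
        # Image is built and pushed by CI, then referenced via values
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Helm then remains the single mechanism for deploying and rolling back, with Knative resources treated like any other templated manifest.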
The kn and func CLIs could still be used for development, but the idea of repeatedly calling kn func deploy in a pipeline seems messy and overly imperative.
Conclusion
Overall, Knative seems like a useful tool. It provides some helpful abstractions for engineers coming from a serverless background and does the work of integrating Kafka for you, so you don't have to write consumers from scratch. If you're using Helm charts, you'd probably end up with some extra work packaging up templates for the Knative custom resources.
Final Remarks
We've reached the end of this whistle-stop tour through the cloud native stack. It started with the low-level provisioning of individual cloud resources using Terraform/OpenTofu, moved through a platform layer that abstracts the control plane away from individual Kubernetes clusters, and ended with a service that application developers could use to develop microservices without getting bogged down in Kubernetes manifests.
I enjoyed my time at KubeCon. It's always stimulating to be exposed to new technologies and to learn what engineering problems are being solved at some of the biggest companies in the world.
Written by
Simon Crowe
I'm a backend engineer currently working in the DevOps space. In addition to cloud-native technologies like Kubernetes, I maintain an active interest in coding, particularly Python, Go and Rust. I started coding over ten years ago with C# and Unity as a hobbyist. Some years later I learned Python and began working as a backend software engineer. This has taken me through several companies and tech stacks and given me a lot of exposure to cloud technologies.