Kube-Hetzner
About The Project
Hetzner Cloud is a good cloud provider that offers very affordable prices for cloud instances, with data center locations in both Europe and the US.
This project aims to create a highly optimized Kubernetes installation that is easy to maintain and secure, and that automatically upgrades both the nodes and Kubernetes. We aimed for functionality as close as possible to GKE's Autopilot. Please note that we are not affiliated with Hetzner, but we do strive to be an optimal solution for deploying and maintaining Kubernetes clusters on Hetzner Cloud.
To achieve this, we built on the shoulders of giants by choosing openSUSE MicroOS as the base operating system and k3s as the Kubernetes engine.
Why OpenSUSE MicroOS (and not Ubuntu)?
Optimized container OS that is fully locked down, most of the filesystem is read-only!
Hardened by default with an automatic ban for abusive IPs on SSH for instance.
Evergreen release; your node will stay valid forever, as it piggybacks on openSUSE Tumbleweed's rolling release!
Automatic updates by default and automatic rollbacks if something breaks, thanks to its use of BTRFS snapshots.
Supports Kured to properly drain and reboot nodes in an HA fashion.
Why k3s?
Certified Kubernetes Distribution, it is automatically synced to k8s source.
Fast deployment, as it is a single binary and can be deployed with a single command.
Comes with batteries included, with its in-cluster helm-controller.
Easy automatic updates, via the system-upgrade-controller.
Features
Maintenance-free with auto-upgrades to the latest version of MicroOS and k3s.
Multi-architecture support, choose any Hetzner cloud instances, including the cheaper CAX ARM instances.
Proper use of the Hetzner private network to minimize latency.
Choose between Flannel, Calico, or Cilium as CNI.
Optional Wireguard encryption of the Kube network for added security.
Traefik, Nginx or HAProxy as ingress controller attached to a Hetzner load balancer with Proxy Protocol turned on.
Automatic HA with the default setting of three control-plane nodes and two agent nodes.
Autoscaling nodes via the kubernetes autoscaler.
Super-HA with Nodepools for both control-plane and agent nodes that can be in different locations.
Possibility to have a single node cluster with a proper ingress controller.
Can use Klipper as an on-metal LB or the Hetzner LB.
Ability to add nodes and nodepools when the cluster is running.
Possibility to toggle Longhorn and Hetzner CSI.
Encryption at rest fully functional in both Longhorn and Hetzner CSI.
Optional use of Floating IPs for use via Cilium's Egress Gateway.
Proper IPv6 support for inbound/outbound traffic.
Flexible configuration options via variables and an extra Kustomization option.
The project deploys with Terraform, as it is easy to use, and Hetzner offers a great Terraform provider.
Getting Started
Follow these simple steps, and your world's cheapest Kubernetes cluster will be up and running.
Prerequisites
First and foremost, you need to have a Hetzner Cloud account. You can sign up for free here.
Then you'll need terraform or tofu, packer (for the initial snapshot creation only, no longer needed once that's done), the kubectl CLI, and hcloud, the Hetzner CLI, for convenience. The easiest way is to use the Homebrew package manager to install them (available on Linux, macOS, and Windows Subsystem for Linux).
brew tap hashicorp/tap
brew install hashicorp/tap/terraform # OR brew install opentofu
brew install packer
brew install kubectl
brew install hcloud
[Do not skip] Creating your kube.tf file and the OpenSUSE MicroOS snapshot
Create a project in your Hetzner Cloud Console, and go to Security > API Tokens of that project to grab the API key; it needs to be Read & Write. Take note of the key!
Generate a passphrase-less ed25519 SSH key pair for your cluster, and take note of the respective paths of your private and public keys. Alternatively, see our detailed SSH options.
Now navigate to where you want your project to live and execute the following command, which will get you started with a new folder containing the required files and will propose to create the needed MicroOS snapshot.
tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"
Or for fish shell:
set tmp_script (mktemp); curl -sSL -o "$tmp_script" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh; chmod +x "$tmp_script"; bash "$tmp_script"; rm "$tmp_script"
Optionally, for future usage, save that command as an alias in your shell preferences, like so:
alias createkh='tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"'
Or for fish shell:
alias createkh='set tmp_script (mktemp); curl -sSL -o "$tmp_script" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh; chmod +x "$tmp_script"; bash "$tmp_script"; rm "$tmp_script"'
For the curious, here is what the script does:
mkdir /path/to/your/new/folder
cd /path/to/your/new/folder
curl -sL https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/kube.tf.example -o kube.tf
curl -sL https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/packer-template/hcloud-microos-snapshots.pkr.hcl -o hcloud-microos-snapshots.pkr.hcl
export HCLOUD_TOKEN="your_hcloud_token"
packer init hcloud-microos-snapshots.pkr.hcl
packer build hcloud-microos-snapshots.pkr.hcl
hcloud context create <project-name>
In the new project folder that gets created, you will find your `kube.tf` file; it must be customized to suit your needs. A complete reference of all inputs, outputs, modules, etc. can be found in the terraform.md file.
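For orientation, here is a heavily trimmed sketch of what a kube.tf typically looks like. The attribute names and nodepool shape follow kube.tf.example, but treat this purely as an illustration and copy from kube.tf.example itself, which is the authoritative reference:

```hcl
# Illustrative sketch only; kube.tf.example is the authoritative template.
module "kube-hetzner" {
  # Check kube.tf.example for the exact source and the current version to pin.
  source       = "kube-hetzner/kube-hetzner/hcloud"
  hcloud_token = var.hcloud_token # the Read & Write API token of your project

  # The passphrase-less ed25519 key pair generated earlier.
  ssh_public_key  = file("~/.ssh/id_ed25519.pub")
  ssh_private_key = file("~/.ssh/id_ed25519")

  network_region = "eu-central"

  control_plane_nodepools = [
    { name = "control-plane", server_type = "cx22", location = "fsn1", labels = [], taints = [], count = 3 },
  ]
  agent_nodepools = [
    { name = "agent", server_type = "cx22", location = "fsn1", labels = [], taints = [], count = 2 },
  ]
}
```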
Installation
Now that you have your `kube.tf` file, along with the OS snapshot in your Hetzner project, you can start the installation process:
cd <your-project-folder>
terraform init --upgrade
terraform validate
terraform apply -auto-approve
It will take around 5 minutes to complete, and then you should see a green output confirming a successful deployment.
Once you start with Terraform, it's best not to change the state of the project manually via the Hetzner UI; otherwise, you may get an error when you try to run terraform again for that cluster (when trying to change the number of nodes for instance). If you want to inspect your Hetzner project, learn to use the hcloud cli.
Usage
When your brand-new cluster is up and running, the sky is your limit!
You can view all kinds of details about the cluster by running `terraform output kubeconfig` or `terraform output -json kubeconfig | jq`.
To manage your cluster with `kubectl`, you can either use SSH to connect to a control-plane node or connect to the Kube API directly.
Connect via SSH
You can connect to one of the control-plane nodes via SSH with `ssh root@<control-plane-ip> -i /path/to/private_key -o StrictHostKeyChecking=no`. From there you can use `kubectl` to manage your workloads right away. By default, the firewall allows SSH connections from everywhere; it is best to restrict that to your own IP by configuring `firewall_ssh_source` in your kube.tf file (don't worry, you can always change it later if your IP changes).
Connect via Kube API
If you have access to the Kube API (depending on the value of your `firewall_kube_api_source` variable; best to set it to your own IP rather than leave it open to the world), you can kubectl into the cluster right away, using the `clustername_kubeconfig.yaml` saved to the project's directory after the installation: `kubectl --kubeconfig clustername_kubeconfig.yaml`. For more convenience, either create a symlink from `~/.kube/config` to `clustername_kubeconfig.yaml`, or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of `clustername_kubeconfig.yaml` by running `pwd`):
export KUBECONFIG=/<path-to>/clustername_kubeconfig.yaml
If you chose to set `create_kubeconfig` to false in your kube.tf (good practice), you can still create this file by running `terraform output --raw kubeconfig > clustername_kubeconfig.yaml` and then use it as described above.
You can also use it in an automated flow, in which case `create_kubeconfig` should be set to false, and you can use the `kubeconfig` output variable to get the kubeconfig in a structured data format.
CNI
The default is Flannel, but you can also choose Calico or Cilium by setting the `cni_plugin` variable in `kube.tf` to "calico" or "cilium".
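For example, in kube.tf (a sketch; "flannel" is the default when the variable is left unset):

```hcl
# Choose the CNI at cluster creation time; switching it on a live cluster is non-trivial.
cni_plugin = "cilium" # or "calico"; defaults to "flannel" when unset
```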
Cilium
As Cilium has a lot of interesting and powerful config possibilities, we give you the ability to configure Cilium with the helm `cilium_values` variable (see the Cilium-specific helm values) before you deploy your cluster.
Cilium supports full kube-proxy replacement and runs in hybrid kube-proxy replacement mode by default. To achieve a completely kube-proxy-free cluster, set `disable_kube_proxy = true`.
It is also possible to enable Hubble with `cilium_hubble_enabled = true`. In order to access the Hubble UI, you need to port-forward the Hubble UI service to your local machine. By default, you can do this by running `kubectl port-forward -n kube-system service/hubble-ui 12000:80` and then opening http://localhost:12000 in your browser. However, it is recommended to use the Cilium CLI and Hubble Client and run the `cilium hubble ui` command.
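Put together, a Cilium-oriented kube.tf fragment might look like the sketch below. The three module variables are the ones named in this section; the Helm values inside `cilium_values` are ordinary Cilium chart values and are shown here only as an illustration:

```hcl
cni_plugin            = "cilium"
cilium_hubble_enabled = true # deploys the Hubble relay and UI
disable_kube_proxy    = true # let Cilium fully replace kube-proxy

# Free-form Helm values passed through to the Cilium chart (illustrative).
cilium_values = <<-EOT
  kubeProxyReplacement: true
EOT
```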
Scaling Nodes
Two things can be scaled: the number of nodepools or the number of nodes in these nodepools.
There are some limitations (to scaling down mainly) that you need to be aware of:
Once the cluster is up, you can change any nodepool count and even set it to 0 (for the first control-plane nodepool, the minimum is 1). You can also rename a nodepool (provided its count is set to 0), but you should not remove a nodepool from the list once the cluster is up; that is due to how subnets and IPs get allocated. The only nodepools you can remove are those at the end of each list of nodepools.
However, you can freely add other nodepools at the end of each list. And for each nodepool, you can freely increase or decrease the node count. If you want to decrease a nodepool's node count, make sure you drain the nodes in question first (you can use `terraform show` to identify the node names at the end of the nodepool list); if you do not drain the nodes before removing them, your cluster could be left in a bad state. The only nodepool that must always keep a count of at least 1 is the first control-plane nodepool.
An advanced use case is to replace the count of a nodepool with a map in which each key represents a single node. In that case, you can add and remove individual nodes from a pool by adding and removing their entries in this map, and it allows you to set individual labels and other parameters on each node in the pool. See kube.tf.example for an example.
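As a hypothetical sketch of that map form (take the exact schema, including attribute names, from kube.tf.example):

```hcl
agent_nodepools = [
  {
    name        = "agent"
    server_type = "cx22"
    location    = "fsn1"
    labels      = []
    taints      = []
    # Instead of `count`, one map entry per node; removing a key removes
    # exactly that node, and each node can carry its own overrides.
    nodes = {
      "1" = { labels = ["node.kubernetes.io/role=worker"] }
      "2" = { server_type = "cx32" }
    }
  },
]
```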
Autoscaling Node Pools
We support autoscaling node pools powered by the Kubernetes Cluster Autoscaler.
By adding at least one map to the `autoscaler_nodepools` array, the feature is enabled. More on this in the corresponding section of kube.tf.example.
Important to know: the autoscaled nodes are booted from a snapshot created from the initial control plane, so please ensure that the disk of your chosen server type is at least as large as that of the first control plane.
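A sketch of such a map, with field names as found in kube.tf.example at the time of writing (double-check them there):

```hcl
autoscaler_nodepools = [
  {
    name        = "autoscaled"
    server_type = "cx32" # its disk must be >= the first control plane's disk
    location    = "fsn1"
    min_nodes   = 0
    max_nodes   = 5
  },
]
```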
High Availability
By default, we have three control planes and three agents configured, with automatic upgrades and reboots of the nodes.
If you want to remain HA (no downtime), it is essential to keep a control-plane node count of at least three (two at a bare minimum, to maintain quorum when one goes down for automated upgrades and reboots); see Rancher's doc on HA.
Otherwise, with two or fewer control-plane nodes, it is essential to turn off automatic OS upgrades for the control-plane nodes (k3s can continue to update without issue) and do the maintenance yourself.
Automatic Upgrade
The Default Setting
By default, MicroOS gets upgraded automatically on each node, and nodes reboot safely via Kured, which is installed in the cluster.
As for k3s, it also upgrades automatically thanks to Rancher's system-upgrade-controller. By default, it follows the `initial_k3s_channel`, but you can also set that to `stable`, `latest`, or a more specific channel like `v1.23` if needed, or specify a target version to upgrade to via the upgrade plan (this also allows for downgrades).
You can copy and modify the plan in the templates for that! More on the subject in k3s upgrades.
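In kube.tf, that channel selection is a one-liner (sketch):

```hcl
# Which k3s channel the system-upgrade-controller should track.
initial_k3s_channel = "v1.23" # or "stable", "latest"
```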
Configuring update timeframes
By default, a node that has installed updates will reboot within the next few minutes, and updates are installed roughly every 24 hours. Kured can be given specific timeframes for rebooting, to prevent too-frequent drains and reboots. All options from the Kured docs are available for modification.
Note: Kured is also used to reboot nodes after configuration updates (`registries.yaml`, ...), so keep in mind that configuration changes can take some time to propagate!
Turning Off Automatic Upgrades
If you wish to turn off automatic MicroOS upgrades (important if you are not running an HA setup, which requires at least three control-plane nodes), you need to set:
automatically_upgrade_os = false
Alternatively, SSH into each node and issue the following command:
systemctl --now disable transactional-update.timer
If you wish to turn off automatic k3s upgrades, you need to set:
automatically_upgrade_k3s = false
Once disabled this way, you can selectively enable the upgrade by setting the node label `k3s_upgrade=true`, and later disable it again by removing the label or setting it to `false`.
# Enable upgrade for a node (use --all for all nodes)
kubectl label --overwrite node <node-name> k3s_upgrade=true
# Later disable upgrade by removing the label (use --all for all nodes)
kubectl label node <node-name> k3s_upgrade-
Alternatively, you can disable the automatic k3s upgrade without editing labels on individual nodes. Instead, just delete the two system-upgrade-controller plans with:
kubectl delete plan k3s-agent -n system-upgrade
kubectl delete plan k3s-server -n system-upgrade
Also, note that after turning off node upgrades, you will need to upgrade the nodes manually when needed. You can do so by SSH'ing into each node and running the following commands (and don't forget to drain the node first with `kubectl drain <node-name>`):
systemctl start transactional-update.service
reboot
Individual Components Upgrade
Rarely needed, but it can be handy in the long run. During the installation, we automatically download a backup of the kustomization to a `kustomization_backup.yaml` file. You will find it next to your `clustername_kubeconfig.yaml` at the root of your project.
First, create a duplicate of that file and name it `kustomization.yaml`, keeping the original file intact in case you need to restore the old config.
Then edit the `kustomization.yaml` file: go to the very bottom, where you have the links to the different source files, grab the latest version of each on GitHub, and replace them. If present, remove any local reference to traefik_config.yaml, as Traefik is updated automatically by the system upgrade controller.
Finally, apply the updated `kustomization.yaml` with `kubectl apply -k ./`.
Customizing the Cluster Components
Most cluster components of Kube-Hetzner are deployed with the Rancher Helm Chart yaml definition and managed by the Helm Controller inside k3s.
By default, we strive to give you optimal defaults, but if you wish, you can customize them.
For Traefik, Nginx, HAProxy, Rancher, Cilium, and Longhorn, for maximum flexibility, we give you the ability to configure them even further via helm values variables (e.g. `cilium_values`; see the advanced section in kube.tf.example for more).
Adding Extras
If you need to install additional Helm charts or Kubernetes manifests that are not provided by default, you can easily do so by using Kustomize. This is done by creating one or more `extra-manifests/kustomization.yaml.tpl` files beside your `kube.tf`.
These files need to be valid `Kustomization` manifests, and they additionally support Terraform templating! (The templating parameters can be passed to the module via the `extra_kustomize_parameters` variable, a map.)
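For instance, a hypothetical parameter map in kube.tf (the keys are entirely your own; they become template variables inside the `*.yaml.tpl` files):

```hcl
extra_kustomize_parameters = {
  app_image    = "registry.example.com/my-app:1.2.3" # hypothetical keys
  app_replicas = 2
}
```

Inside `extra-manifests/kustomization.yaml.tpl` (and any other `.tpl` file), these can then be referenced with the usual Terraform template syntax, e.g. `${app_image}`.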
All files in the `extra-manifests` directory and its subdirectories, including the rendered versions of the `*.yaml.tpl` files, will be applied to k3s with `kubectl apply -k` (which is executed after, and independently of, the basic cluster configuration).
See a working example in examples/kustomization_user_deploy.
You can use the above to pass all kinds of Kubernetes YAML configs, including HelmChart and/or HelmChartConfig definitions (see the previous section if you do not know what those are in the context of k3s).
That said, you can also use pure Terraform and import the kube-hetzner module as part of a larger project, and then use things like the Terraform helm provider to add additional stuff, all up to you!
Examples
Create or delete a snapshot
Use in Terraform cloud
Backup and restore a cluster
Deploy in a pre-constructed private network (for proxies etc)
Placement groups
Migrating from count-based nodepools to map-based
Use of delete protection
Debugging
First and foremost, it depends, but it's always good to be able to take a quick look into Hetzner without logging in to the UI. That is where the `hcloud` CLI comes in.
Activate it with `hcloud context create Kube-hetzner`; it will prompt for your Hetzner API token, so paste it and hit enter.
To check whether the nodes are running, use `hcloud server list`.
To check the network, use `hcloud network describe k3s`.
To look at the LB, use `hcloud loadbalancer describe k3s-traefik`.
Then, for the rest, you will often need to log in to your cluster via SSH; to do that, use:
ssh root@<control-plane-ip> -i /path/to/private_key -o StrictHostKeyChecking=no
Then, for control-plane nodes, use `journalctl -u k3s` to see the k3s logs, and for agents, use `journalctl -u k3s-agent` instead.
Inspect the value of the k3s config.yaml file with `cat /etc/rancher/k3s/config.yaml` and see whether it looks kosher.
Last but not least, to see when the previous reboot took place, you can use both `last reboot` and `uptime`.
Takedown
If you want to take down the cluster, you can proceed as follows:
terraform destroy -auto-approve
If you see the destroy hanging, it is probably because of the Hetzner LB and the autoscaled nodes. You can use the following command to delete everything (don't worry, a dry-run option is available, and it will only delete resources specific to your cluster):
tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/cleanup.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"
For convenience, you can also save it as an alias in your shell config file, like so:
alias cleanupkh='tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/cleanup.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"'
Careful: the above command will delete everything, including volumes in your project. You can always try a dry run first; it will give you that option.
Upgrading the Module
Usually, you will want to upgrade the module in your project to the latest version. Just change the version attribute in your kube.tf and run terraform apply; this will upgrade the module to the chosen version.
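Concretely, that means bumping the pin in the module block (a sketch; use whatever version constraint style you prefer, and check kube.tf.example for the exact module source):

```hcl
module "kube-hetzner" {
  source  = "kube-hetzner/kube-hetzner/hcloud"
  version = ">= 2.0.0" # bump this, then run `terraform init -upgrade && terraform apply`
  # ...the rest of your existing configuration stays unchanged...
}
```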
When moving from 1.x to 2.x:
Within your project folder, run the `createkh` installation command (see the "Do not skip" section above). This will create the snapshot for you. Don't worry, it is non-destructive and will leave your kube.tf and Terraform state alone, but it will download the other required packer file.
Then modify your kube.tf to use version >= 2.0, and remove the `extra_packages_to_install` and `opensuse_microos_mirror_link` variables if used. That functionality has been moved to the packer snapshot definition; see packer-template/hcloud-microos-snapshots.pkr.hcl.
Then run `terraform init -upgrade && terraform apply`.
Contributing
This project currently installs openSUSE MicroOS via the Hetzner rescue mode, making things a few minutes slower. To help with that, you could take a few minutes to send a support request to Hetzner, asking them to please add openSUSE MicroOS as a default image, not just an ISO. The more requests they receive, the likelier they are to add support for it, and if they do, that will cut the deployment time in half. The official link to openSUSE MicroOS is https://get.opensuse.org/microos, and its OpenStack Cloud image has full support for cloud-init, which would probably suit the Hetzner Ops team very well!
Code contributions are very much welcome.
Fork the Project
Create your Branch (`git checkout -b AmazingFeature`)
Develop your feature.
In your kube.tf, point the `source` of the module to your local clone of the repo.
Useful commands:
# To clean up a Hetzner project
../kube-hetzner/scripts/cleanup.sh
# To build the Packer image
packer build ../kube-hetzner/packer-template/hcloud-microos-snapshots.pkr.hcl
Update examples in `kube.tf.example` if required.
Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
Push to the Branch (`git push origin AmazingFeature`)
Open a Pull Request targeting the `staging` branch.
Acknowledgements
k-andy was the starting point for this project. It wouldn't have been possible without it.
Best-README-Template made writing this readme a lot easier.
Hetzner Cloud for providing a solid infrastructure and Terraform provider.
Hashicorp for the amazing terraform framework that makes all the magic happen.
Rancher for k3s, an amazing Kube distribution that is the core engine of this project.
openSUSE for MicroOS, which is just next-level Container OS technology.