Templating cluster creation with Tanzu Mission Control

Will Arroyo

Overview

A question has come up a few times with customers and coworkers about how to reduce duplication when creating clusters with Tanzu Mission Control (TMC). The issue is usually that the platform engineering team wants to create clusters quickly, and many of the settings are exactly the same from cluster to cluster, which leads to a lot of duplication. The TMC UI has no way to set custom defaults today, so you have to fill in every field each time you create a cluster. However, the UI is probably not the approach a platform team wants to take to scale anyway; it's much more efficient to codify the clusters and automate their creation. In this post, we will walk through creating cluster templates and using the Tanzu CLI to create clusters with minimal inputs. We will focus mostly on TKG clusters, but I will also provide some commands that work with AKS and EKS clusters as well.

Brief note on the Tanzu CLI

The Tanzu CLI can now be installed through standard package managers or directly via the binary from GitHub; see the install instructions here. The standalone TMC CLI is in the process of being deprecated, so you will want to use the new TMC plugins for the Tanzu CLI. They are installed automatically when you connect the Tanzu CLI to your TMC endpoint.
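
For reference, here is a minimal sketch of the setup, assuming Homebrew and a placeholder context name and endpoint (exact commands and flags can vary between CLI versions, so check the install instructions linked above):

# install the Tanzu CLI (other package managers and the GitHub binary also work)
brew install vmware-tanzu/tanzu/tanzu-cli

# creating a context against your TMC endpoint pulls down the TMC plugins
tanzu context create my-tmc --endpoint <tmc-endpoint>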

Templating clusters

Getting the base template

The CLI provides a way to get an existing cluster's spec, and I always recommend this approach over trying to create one from scratch. Before we start templating the cluster, create a cluster through the UI. Then use one of the commands below to pull the cluster's YAML spec back and save it to a file.

# for TKG
tanzu tmc cluster get <cluster-name> -m <mgmt-cluster> -p <provisioner> > template.yml
# for EKS
tanzu tmc ekscluster get <cluster-name> -c <credential> -r <region> > template.yml
# for AKS
tanzu tmc akscluster get <cluster-name> -c <credential> -r <resource-group> -s <subscription> > template.yml

The output of the above commands will look slightly different depending on the k8s provider you are using, but in each case it will be a YAML file that describes all of the settings for the cluster.

Remove any extra fields

Just like when pulling back a resource from a k8s cluster, there will be fields that we want to omit from our template, for example the status section. These will differ between providers, but since we are focused on TKG in this example, I have listed the fields that I omitted below. Technically many of these fields do not need to be removed since the API will just ignore them, but because we are creating a template and don't want a bunch of extra noise that might be confusing, we will remove anything that is not needed (see the sketch after this list for one way to script the cleanup).

  • fullName.orgId

  • meta

    • labels

      • tmc.cloud.vmware.com/creator

    • annotations

    • creationTime

    • generation

    • resourceVersion

    • uid

    • updateTime

  • spec.topology.variables

    • extensionCert

    • user - if you would like to provide your ssh public key, keep this one.

    • clusterEncryptionConfigYaml

    • TKR_DATA

  • status - entire section
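
If you'd rather script this cleanup than edit by hand, a tool like yq can delete the fields in one pass. Here is a sketch, assuming mikefarah's yq v4 (extend the chain with the remaining fields from the list):

yq -i 'del(.status) | del(.fullName.orgId) | del(.meta.annotations) |
  del(.meta.creationTime) | del(.meta.generation) | del(.meta.resourceVersion) |
  del(.meta.uid) | del(.meta.updateTime) |
  del(.spec.topology.variables[] | select(.name == "TKR_DATA"))' template.yml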

All of the removed fields are generated by TMC when the cluster is created. Your YAML file should now look similar to the one below.

fullName:
  managementClusterName: h2o-4-19340
  name: tmc-base-template
  provisionerName: lab
meta:
  labels:
    example-label: example
spec:
  clusterGroupName: default
  tmcManaged: true
  topology:
    clusterClass: tanzukubernetescluster
    controlPlane:
      metadata:
        annotations:
          example-cp-annotation: example
        labels:
          example-cp-label: example
      osImage:
        arch: amd64
        name: ubuntu
        version: "20.04"
      replicas: 3
    network:
      pods:
        cidrBlocks:
        - 172.20.0.0/16
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 10.96.0.0/16
    nodePools:
    - info:
        name: md-0
      spec:
        class: node-pool
        metadata:
          labels:
            example-np-label: example
        osImage:
          arch: amd64
          name: ubuntu
          version: "20.04"
        overrides:
        - name: vmClass
          value: best-effort-large
        - name: storageClass
          value: vc01cl01-t0compute
        replicas: 2
    variables:
    - name: defaultStorageClass
      value: vc01cl01-t0compute
    - name: storageClass
      value: vc01cl01-t0compute
    - name: storageClasses
      value:
      - vc01cl01-t0compute
    - name: vmClass
      value: best-effort-large
    - name: ntp
      value: time1.oc.vmware.com
    version: v1.23.8+vmware.2-tkg.2-zshippable
type:
  kind: TanzuKubernetesCluster
  package: vmware.tanzu.manage.v1alpha1.managementcluster.provisioner.tanzukubernetescluster
  version: v1alpha1

Templating with YTT

There are many options for templating files, but since this is a YAML file and we like Carvel YTT, that is what we will use for this example. I would highly recommend reading up on YTT and testing it out for different use cases; it's a very powerful YAML templating language.

Determine the variable fields

First, we need to determine which fields should be variable. This could be any field, but we also want to reuse as much as possible. These fields are entirely up to you and your needs.

Template the fields with YTT

The YTT docs explain how to use data values in a YAML file. This is what we will be using to template the file. Starting from the same file as above, the template below is what I have come up with.

#@ load("@ytt:data", "data")
fullName:
  managementClusterName: #@ data.values.mgmt_cluster_name
  name: #@ data.values.cluster_name
  provisionerName: #@ data.values.provisioner
meta:
  #@ if/end hasattr( data.values, "cluster_labels"):
  labels: #@ data.values.cluster_labels
spec:
  clusterGroupName: #@ data.values.cluster_group
  tmcManaged: true
  topology:
    clusterClass: tanzukubernetescluster
    controlPlane:
      metadata:
        #@ if/end hasattr( data.values, "cp_annotations"):
        annotations: #@ data.values.cp_annotations
        #@ if/end hasattr( data.values, "cp_labels"):
        labels: #@ data.values.cp_labels
      osImage:
        arch: amd64
        name: ubuntu
        version: "20.04"
      replicas: 3
    network:
      pods:
        cidrBlocks:
        - 172.20.0.0/16
      serviceDomain: cluster.local
      services:
        cidrBlocks:
        - 10.96.0.0/16
    nodePools:
    - info:
        name: md-0
      spec:
        class: node-pool
        metadata:
          #@ if/end hasattr( data.values, "node_labels"):
          labels: #@ data.values.node_labels
        osImage:
          arch: amd64
          name: ubuntu
          version: "20.04"
        overrides:
        - name: vmClass
          value: #@ data.values.worker_vm_size
        - name: storageClass
          value: vc01cl01-t0compute
        replicas: 2
    variables:
    - name: defaultStorageClass
      value: vc01cl01-t0compute
    - name: storageClass
      value: vc01cl01-t0compute
    - name: storageClasses
      value:
      - vc01cl01-t0compute
    - name: vmClass
      value: #@ data.values.cp_vm_size
    - name: ntp
      value: time1.oc.vmware.com
    version: v1.23.8+vmware.2-tkg.2-zshippable
type:
  kind: TanzuKubernetesCluster
  package: vmware.tanzu.manage.v1alpha1.managementcluster.provisioner.tanzukubernetescluster
  version: v1alpha1

You can see that a number of fields now have YTT logic in them. Here is a quick breakdown of what we are doing.

  • #@ load("@ytt:data", "data") - tells ytt to load data values into the data object

  • #@ data.values.mgmt_cluster_name - I won't go through every variable, but this syntax is used to pull a value out of the values file that we will create in the next section.

  • #@ if/end hasattr( data.values, "cp_annotations"): - this syntax is also used a few times; it checks whether our values file has a field and, if it does, adds the field below it. This is used because certain fields are optional.

There is a lot more that can be done when templating with YTT, and this is a fairly basic example. The YTT docs have a lot of examples that can be referenced.
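
Once you have the values file from the next section, a quick way to see the if/end behavior in action is to render the template with and without one of the optional keys and diff the results. A sketch, using cp_annotations as the example:

# render without the optional key
ytt -f template.yml -f values.yml > before.yml
# add e.g. `cp_annotations: {example-cp-annotation: example}` to values.yml, then:
ytt -f template.yml -f values.yml > after.yml
# the annotations block should only appear in after.yml
diff before.yml after.yml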

Create a values file

The values file is what we will use to specify the values for all of the fields that we have templated. This is really just a YAML file with a single line of YTT at the top that lets the YTT engine know that the fields are to be used as data values. Since this is just plain YAML, you can also have nested fields and so on (see the sketch after the example below). Below you can see the values file that was created to work with the above template.

#@data/values
---
mgmt_cluster_name: h2o-4-19340
cluster_name: cluster-from-template
provisioner: lab
cp_vm_size: best-effort-large
worker_vm_size: best-effort-large
cluster_group: default
cluster_labels:
  test: test
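
Because the values file is plain YAML, nested fields work as well. As a hypothetical extension, if you also templated the CIDR blocks, the values could be grouped like this:

#@data/values
---
network:
  pods_cidr: 172.20.0.0/16
  services_cidr: 10.96.0.0/16

In the template, these would then be referenced with expressions like #@ [data.values.network.pods_cidr] in place of the hardcoded cidrBlocks lists.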

Create a new cluster

Finally, we can combine these two files into a command that will generate our cluster configuration and then apply it to TMC.

If you want to test the output of the templating prior to sending it to TMC, you can simply run the command below, which will generate the resulting YAML and print it to stdout.

ytt -f values.yml -f template.yml
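
ytt can also override individual data values from the command line, which is handy for quick one-off variants; command-line values take precedence over the values file:

# same template and values, but a different cluster name
ytt -f values.yml -f template.yml --data-value cluster_name=dev-cluster-01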

The next command will generate the resulting YAML and, instead of sending it to stdout, pass it directly to the Tanzu CLI to start creating the cluster.

tanzu tmc apply -f <(ytt -f values.yml -f template.yml)
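
Once the apply goes through, you can check on the new cluster with the same get command from earlier:

tanzu tmc cluster get cluster-from-template -m h2o-4-19340 -p lab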

If you are using EKS or AKS, the command is slightly different since the apply command does not yet support those clusters. Hopefully it will be added soon. You can still do this with the create and update commands, though. See the examples below.

# passing the rendered output directly (as we did with apply) does not currently work for the create commands, so write it to a file first

#EKS
ytt -f values.yml -f template.yml > eks.yml
tanzu tmc ekscluster create -f eks.yml

#AKS
ytt -f values.yml -f template.yml > aks.yml
tanzu tmc akscluster create -f aks.yml

Summary

In summary, this article should give you a good idea of how to make reusable templates for TMC-created clusters. This could even be used to create "plans" for clusters to make self-service easier for teams. An example of using this for self-service would be a pipeline that executes the apply commands and allows developers, operators, etc. to manage their values file in a git repo. This would provide a nice gitops-driven way to create on-demand clusters, with the ability to apply policy and restrict which fields could be changed.
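
As a rough sketch of what such a pipeline step could look like (the repo URL and paths are hypothetical, and it assumes ytt and an authenticated Tanzu CLI are available on the runner):

# render the team's values against the shared template and apply it to TMC
git clone https://git.example.com/platform/cluster-configs.git
cd cluster-configs
tanzu tmc apply -f <(ytt -f teams/team-a/values.yml -f templates/tkg.yml)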
