Terraform: How about KVM provisioning?

Mike Logaciuk

What is Terraform?

Terraform is an open-source infrastructure-as-code (IaC) tool that lets you safely and predictably provision and manage infrastructure in any cloud, automating the provisioning and management of infrastructure resources.

It is used primarily by DevOps teams to automate various infrastructure tasks; the provisioning of cloud resources, for instance, is one of its main use cases. It's a cloud-agnostic provisioning tool written in Go and created by HashiCorp.

With Terraform you can define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.

QEMU/KVM

QEMU is short for Quick EMUlator.

It can be used both as an emulator and as a hypervisor.

As an emulator, it allows you to run software that would normally not run on your operating system or hardware because of compatibility issues.

KVM is a type 1 hypervisor, which essentially means it runs on bare metal (it is built into the Linux kernel). QEMU is a type 2 hypervisor, which means that it runs on top of the operating system. In this setup, QEMU uses KVM to give the virtual machines hardware-accelerated access to the machine's physical resources.

QEMU and KVM are both open-source virtualization solutions commonly used in Linux environments.
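
If you want to confirm that KVM acceleration is actually available on your host, a quick sanity check (assuming the kvm kernel modules are built as loadable modules) looks like this:

❯ ls -l /dev/kvm
❯ lsmod | grep kvm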

Requirements

To follow along, you should have both QEMU and KVM installed, alongside Terraform.

For installation snippets, see my previous post.
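
If you just need a quick reference, on Debian/Ubuntu the QEMU/KVM side can usually be installed with something along these lines (package names may differ between distributions, and Terraform itself comes from HashiCorp's repository as described in that post):

❯ sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst -y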

Working with Terraform

First, we need a new directory to work in:

❯ mkdir -p ~/repos/terraform-kvm

Terraform supports two methods of obtaining providers:

  • directly from the provider repository, e.g. GitHub

  • by declaring it in Terraform code and running the init command

For all available official providers and modules, please visit the Terraform Registry.

Instead of downloading the provider manually from Git, we will use the most common method - the second one.

First, we need to create a main.tf file:

❯ cd ~/repos/terraform-kvm
❯ code main.tf

Then, we should ensure that libvirtd is up and running:

❯ sudo systemctl status libvirtd.service
● libvirtd.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running) since Thu 2023-07-06 23:54:27 CEST; 10s ago
TriggeredBy: ● libvirtd-admin.socket
             ● libvirtd-ro.socket
             ● libvirtd.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 32901 (libvirtd)
      Tasks: 21 (limit: 32768)
     Memory: 321.3M
        CPU: 1.741s
     CGroup: /system.slice/libvirtd.service
             ├─ 1451 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/us>
             ├─ 1455 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/us>
             └─32901 /usr/sbin/libvirtd --timeout 120
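
If the service is not active on your machine, you can usually enable and start it with:

❯ sudo systemctl enable --now libvirtd.service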

Terraform init

Next, paste this into main.tf:

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  ## Configuration options
  uri = "qemu:///system"
}

First, we declare the libvirt provider and its source in the registry. Then we set the URI of our QEMU host. For more information about the provider, please visit its documentation.
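
Before involving Terraform at all, you can check that the qemu:///system URI is reachable for your user with plain virsh (just a sanity check, not a required step):

❯ virsh -c qemu:///system list --all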

Thereafter, we have to initialize the environment:

❯ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of dmacvicar/libvirt...
- Installing dmacvicar/libvirt v0.7.1...
- Installed dmacvicar/libvirt v0.7.1 (self-signed, key ID 96B1FE1A8D4E1EAB)

Partner and community providers are signed by their developers.
(...)
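
As a quick check, terraform providers lists the providers the configuration requires and where Terraform resolved them from:

❯ terraform providers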

Now it's time to provision our virtual machine with infrastructure as code.

First, we need to ensure that mkisofs is installed (it is needed to build the cloud-init ISO):

❯ sudo apt install mkisofs -y

Let's write our VM

Then we define our VM:

❯ code libvirt.tf

# VM Volume
resource "libvirt_volume" "buster-qcow2" {
  name = "buster.qcow2"
  # In order to list available storage pools - use: `virsh pool-list`
  pool   = "default"
  source = "http://cdimage.debian.org/images/cloud/OpenStack/10.13.16-20230701/debian-10.13.16-20230701-openstack-amd64.qcow2"
  format = "qcow2"
}

# For more info about the parameters, check this out:
# https://github.com/dmacvicar/terraform-provider-libvirt/blob/master/website/docs/r/cloudinit.html.markdown
# Use CloudInit to add our ssh-key to the instance
# You can also add a meta_data field.
resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = data.template_file.user_data.rendered
}

data "template_file" "user_data" {
  template = file("${path.module}/cloud_init.cfg")
}

# Main VM Config
resource "libvirt_domain" "buster" {
  name   = "buster"
  memory = "2048"
  vcpu   = 2

  # Comment this block out if you don't want to pass the host CPU through.
  cpu {
    mode = "host-passthrough"
  }

  # Network config.
  # You can list networks with: `virsh net-list`
  network_interface {
    network_name   = "default"
    hostname       = "buster"
    addresses      = ["10.0.241.113"]
    mac            = "b4:45:06:56:df:48"
    wait_for_lease = true
  }

  # Basic harddrive setup (inherit from qcow2 image).
  disk {
    volume_id = libvirt_volume.buster-qcow2.id
  }

  # The optional console block allows you to define a console for the domain.
  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  # The optional graphics block allows you to override the default graphics settings.
  graphics {
    # Type of graphics emulation (default is "spice")
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

# Print VM IP
output "ip" {
  value = libvirt_domain.buster.network_interface.0.addresses.0
}

You can obtain qcow2 cloud images for other distributions from their official download pages.
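
The configuration above references the default storage pool and the default network (see the comments in the file); you can confirm that both exist on your host before applying:

❯ virsh -c qemu:///system pool-list --all
❯ virsh -c qemu:///system net-list --all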

Now it's high time to save the file and create an additional cloud-init file:

❯ code cloud_init.cfg

And paste:

#cloud-config
# vim: syntax=yaml
#
# ***********************
#       ---- for more examples look at: ------
# ---> https://cloudinit.readthedocs.io/en/latest/topics/examples.html
# ******************************
#
# This is the configuration syntax that the write_files module
# will know how to understand. encoding can be given b64 or gzip or (gz+b64).
# The content will be decoded accordingly and then written to the path that is
# provided.
#
# Note: Content strings here are truncated for example purposes.
ssh_pwauth: True
chpasswd:
  list: |
     root:terraform
  expire: False

packages:
 - unzip
 - git
 - wget
 - apt-transport-https
 - software-properties-common
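
Optionally, if cloud-init happens to be installed on your workstation, recent versions can lint this file before it gets baked into the ISO (the exact subcommand varies between cloud-init releases):

❯ cloud-init schema --config-file cloud_init.cfg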

Then, use terraform plan:

❯ terraform plan
data.template_file.user_data: Reading...
data.template_file.user_data: Read complete after 0s [id=0712b4799d3f7f8072d2636ecb314c6507fd1936c912e129cfcc41bc54f2a8df]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # libvirt_cloudinit_disk.commoninit will be created
  + resource "libvirt_cloudinit_disk" "commoninit" {
      + id        = (known after apply)
      + name      = "commoninit.iso"
      + pool      = "default"
      + user_data = <<-EOT
            #cloud-config
            # vim: syntax=yaml
            #
            # ***********************
            #     ---- for more examples look at: ------
            # ---> https://cloudinit.readthedocs.io/en/latest/topics/examples.html
            # ******************************
            #
            # This is the configuration syntax that the write_files module
            # will know how to understand. encoding can be given b64 or gzip or (gz+b64).
            # The content will be decoded accordingly and then written to the path that is
            # provided.
            #
            # Note: Content strings here are truncated for example purposes.
            ssh_pwauth: True
            chpasswd:
              list: |
                 root:terraform
              expire: False

            packages:
             - unzip
             - git
             - wget
             - apt-transport-https
             - software-properties-common
        EOT
    }

  # libvirt_domain.buster will be created
  + resource "libvirt_domain" "buster" {
      + arch        = (known after apply)
      + autostart   = (known after apply)
      + emulator    = (known after apply)
      + fw_cfg_name = "opt/com.coreos/config"
      + id          = (known after apply)
      + machine     = (known after apply)
      + memory      = 2048
      + name        = "buster"
      + qemu_agent  = false
      + running     = true
      + vcpu        = 2

      + console {
          + source_host    = "127.0.0.1"
          + source_service = "0"
          + target_port    = "0"
          + target_type    = "serial"
          + type           = "pty"
        }

      + cpu {
          + mode = "host-passthrough"
        }

      + disk {
          + scsi      = false
          + volume_id = (known after apply)
        }

      + graphics {
          + autoport       = true
          + listen_address = "127.0.0.1"
          + listen_type    = "address"
          + type           = "spice"
        }

      + network_interface {
          + addresses      = [
              + "10.0.241.113",
            ]
          + hostname       = "buster"
          + mac            = "b4:45:06:56:df:48"
          + network_id     = (known after apply)
          + network_name   = "default"
          + wait_for_lease = true
        }
    }

  # libvirt_volume.buster-qcow2 will be created
  + resource "libvirt_volume" "buster-qcow2" {
      + format = "qcow2"
      + id     = (known after apply)
      + name   = "buster.qcow2"
      + pool   = "default"
      + size   = (known after apply)
      + source = "http://cdimage.debian.org/images/cloud/OpenStack/10.13.16-20230701/debian-10.13.16-20230701-openstack-amd64.qcow2"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = "10.0.241.113"
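
If you want to apply exactly the plan you just reviewed, you can optionally save it to a file and feed that file to apply later:

❯ terraform plan -out=buster.tfplan
❯ terraform apply buster.tfplan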

Finish

Then run terraform apply if you think it's ready:

❯ terraform apply
data.template_file.user_data: Reading...
data.template_file.user_data: Read complete after 0s [id=4d2e330a4ab32c0d9a2f913145df48c17794872d130a049401dad95d4e976ce4]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

(...)

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip = "10.0.241.113"

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Type yes and wait for the results:

libvirt_cloudinit_disk.commoninit: Creating...
libvirt_volume.buster-qcow2: Creating...
libvirt_cloudinit_disk.commoninit: Still creating... [10s elapsed]
libvirt_volume.buster-qcow2: Still creating... [10s elapsed]
libvirt_volume.buster-qcow2: Creation complete after 14s [id=/var/lib/libvirt/images/buster.qcow2]
libvirt_cloudinit_disk.commoninit: Creation complete after 14s [id=/var/lib/libvirt/images/commoninit.iso;5eff7855-87f5-4772-828d-b933cd5156de]
libvirt_domain.buster: Creating...
libvirt_domain.buster: Creation complete after 7s [id=c2013280-b27d-404c-b26c-040a5e5c1c11]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

ip = "10.0.241.113"

Now, your machine is ready to work with.
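
You can reach it either over the serial console or, since the cloud-init file enables password authentication, over SSH (whether root login over SSH is permitted depends on the image defaults):

❯ virsh -c qemu:///system console buster
❯ ssh root@10.0.241.113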

Of course, this is a simple example, but it gives you an idea of how it can be done.
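
When you are done experimenting, the whole setup can be torn down just as easily:

❯ terraform destroy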

In the future, there will be more Terraform articles with more advanced setups.

Have fun!
