How to Provision Azure Infrastructure Using Terraform and GitHub CI/CD


Introduction
In today’s DevOps world, automation is everything. If you’re tired of clicking through the Azure portal to set up resources manually, this post is for you. We’ll walk through how to use Terraform and GitHub CI/CD pipelines to provision infrastructure on Microsoft Azure — all from your terminal and VS Code.
Whether you're a beginner or brushing up your Infrastructure as Code (IaC) skills, this guide takes you from zero to a fully provisioned Azure environment using clean, modular Terraform code and GitHub integration. Let’s get into it.
Step 1.
We will install Terraform on our local Windows host. You can download the Terraform executable and install it manually, but if you have Chocolatey installed, you can use the choco command instead; for the course of this project I will use Chocolatey. Open PowerShell as Administrator, run "choco install terraform", and wait for it to install successfully.
If you installed the executable manually instead, add it to your PATH: on your local host, search for Terraform in the search bar, right-click it, and click "Open file location". Right-click "terraform.exe" and choose "Copy as path". Then search for "Edit the system environment variables", and under the Advanced tab click "Environment Variables". Under System variables, select "Path", press "Edit", then click "New", paste the Terraform path you copied earlier into the new entry, and click "OK". You have successfully added Terraform to your PATH.
In PowerShell, run "terraform --version" to check the installed version.
Step 2.
We will repeat this installation step for our Linux environment; for the course of this project I will be using an Ubuntu environment in VirtualBox. To install Terraform on Ubuntu, open the terminal and run the following commands:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
After installing, check the version with "terraform --version".
Step 3.
We will open VS Code, search for Terraform in the Extensions Marketplace, and install the HashiCorp Terraform extension. Next, log in to GitHub and create a repository for this project. We will name it "Terraform-Project", make it public, add a README file, and click Create. Then head back to VS Code to clone the repository: click "Clone Repository", then "Clone from GitHub", and authorize VS Code to access your GitHub account by logging in and clicking Authorize. Once GitHub and VS Code are connected, the repository you want to clone appears at the top of VS Code. Create a new folder on your local machine to house the cloned repository; I will name my folder "Terraform Repo". Press Enter to finish cloning.
Step 4.
In the VS Code Explorer, create a file called "main.tf". We will be using Azure as our cloud provider for today's project. Head to your web browser and open this link to view the AzureRM provider: https://registry.terraform.io/providers/hashicorp/azurerm/latest then click "Use Provider" and copy the code shown. Back in VS Code, paste that code into the main.tf file we created earlier. After pasting, replace the "# Configuration options" comment inside the provider block with features {}, then on the next line add a subscription ID from your Azure account, close the block with a curly brace (}), and save with Ctrl+S.
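After those edits, the provider block in main.tf should look roughly like the sketch below. Treat it as a template: the version constraint depends on what the registry page shows when you copy it, and the subscription ID is a placeholder you must replace with your own.

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0" # example constraint; use the one the registry gives you
    }
  }
}

provider "azurerm" {
  features {} # required block, even when left empty

  subscription_id = "00000000-0000-0000-0000-000000000000" # placeholder; use your own
}
```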
Then open a terminal inside VS Code and run "terraform init"; you should get the message "Terraform has been successfully initialized!"
So we proceed. On line 14 we paste this code: resource "azurerm_resource_group" "test" {. This line defines a Resource Group resource in Terraform. The "test" part is not an actual Resource Group in Azure; it is simply a Terraform reference name (also called a local name or nickname). This name lets Terraform keep track of the resource within the code and reference it later when linking other resources to this Resource Group. The next line, name = "acctestrg", specifies the actual name of the Resource Group that will be created in Azure. This is the name you will see in the Azure portal.
On line 16: location = "West US 2" This tells Terraform to create the Azure Resource Group in the West US 2 region.
In summary: We are creating only one Resource Group in Azure, named acctestrg. The "test" part is just Terraform’s internal reference name and does not create an extra Resource Group. So despite seeing two different names ("test" and "acctestrg"), only one Resource Group is actually created in Azure.
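Putting those lines together, the Resource Group block in main.tf looks like this:

```hcl
resource "azurerm_resource_group" "test" {
  name     = "acctestrg"
  location = "West US 2"
}
```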
Then we move to line 19 of our code, which has resource "azurerm_virtual_network" "test" {. This line tells Terraform to create a Virtual Network in Azure. The "test" part is not a separate Virtual Network; it is simply a local name (nickname) that Terraform uses inside the code to keep track of this resource. It allows us to reference this Virtual Network elsewhere in the code, but it does not create a second VNet.
On line 20, we see name = "acctvn". This specifies the actual name of the Virtual Network that will be created in Azure, the name that will show up in the Azure portal once the infrastructure is deployed. We are creating only one Virtual Network, named acctvn; the "test" part is just a Terraform reference name used in our code and does not create a separate Virtual Network.
For line 21 we have address_space = ["10.0.0.0/16"]. This defines the IP address range for the Virtual Network (VNet): the CIDR block 10.0.0.0/16 covers addresses from 10.0.0.0 to 10.0.255.255.
Then line 22, location = azurerm_resource_group.test.location, tells Terraform: "Make sure this VNet is created in the same region as the Resource Group." Instead of hard-coding the region (like "West US 2" again), we dynamically reference the location of the Resource Group we already defined.
Finally, line 23, resource_group_name = azurerm_resource_group.test.name, links our VNet to the correct Resource Group. Again, instead of hardcoding "acctestrg", we dynamically pull the name of our Resource Group; in essence it says: place the VNet inside the Resource Group we created earlier (the one named acctestrg in Azure).
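Assembled, the complete Virtual Network block reads:

```hcl
resource "azurerm_virtual_network" "test" {
  name                = "acctvn"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
}
```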
Lines 26 to 31 contain this code:
resource "azurerm_subnet" "test" {
name = "acctsub"
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.test.name
address_prefixes = ["10.0.2.0/24"]
}
What's this? We are creating a Subnet inside our Virtual Network. As we said earlier, the "test" part is not a separate subnet; it is simply a local name (nickname) that Terraform uses inside the code to keep track of this resource. Then name = "acctsub" is the name we are giving the subnet in Azure.
As before, resource_group_name = azurerm_resource_group.test.name links our subnet to the correct Resource Group. Instead of hardcoding "acctestrg", we dynamically pull the name of our Resource Group; in essence it says: place the subnet inside the Resource Group we created earlier (the one named acctestrg in Azure).
The same goes for virtual_network_name = azurerm_virtual_network.test.name: place the subnet inside the Virtual Network we created earlier (the one named acctvn in Azure). Its address range is 10.0.2.0/24, a smaller network block inside the VNet.
Then for lines 33 to 38 we have this code:
resource "azurerm_public_ip" "test" {
name = "publicIPForLB"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
allocation_method = "Static"
}
What we are doing here is creating a Public IP address that can be used for things like a Load Balancer. Our Public IP is named "publicIPForLB"; it is created in the same region as the Resource Group and attached to that Resource Group. Its allocation method is Static, meaning the IP won't change after it's created.
Then for lines 40 to 49 we have this code:
resource "azurerm_lb" "test" {
name = "loadBalancer"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
frontend_ip_configuration {
name = "publicIPAddress"
public_ip_address_id = azurerm_public_ip.test.id
}
}
What is this? We are creating a Load Balancer named "loadBalancer". It is linked to the Public IP we created earlier, placed in the same region as the Resource Group, and placed inside that Resource Group.
Then for lines 51 to 54 we have this code:
resource "azurerm_lb_backend_address_pool" "test" {
loadbalancer_id = azurerm_lb.test.id
name = "BackEndAddressPool"
}
What this means is that it creates a Backend Pool where your VMs will sit behind the Load Balancer; its name is "BackEndAddressPool". Please note there is no location or resource_group_name here, because the pool is linked directly to the Load Balancer.
Then for lines 56 to 67 we have the code for the Network Interfaces (NICs):
resource "azurerm_network_interface" "test" {
count = 2
name = "acctni${count.index}"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
ip_configuration {
name = "testConfiguration"
subnet_id = azurerm_subnet.test.id
private_ip_address_allocation = "Dynamic"
}
}
What this does is create 2 NICs (network cards), named acctni0 and acctni1, inside Azure. location = azurerm_resource_group.test.location puts the NICs in the same region as the Resource Group, and resource_group_name = azurerm_resource_group.test.name places them inside the Resource Group. The IP configuration is named "testConfiguration", and the private IP address allocation is Dynamic.
Then for lines 69 to 77 of our code we have the following:
resource "azurerm_managed_disk" "test" {
count = 2
name = "datadisk_existing_${count.index}"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
storage_account_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = "1023"
}
What this means is that we create 2 empty data disks to attach to the VMs.
The name is name = "datadisk_existing_${count.index}", which yields datadisk_existing_0 and datadisk_existing_1.
location = azurerm_resource_group.test.location puts the data disks in the same region as the Resource Group, while resource_group_name = azurerm_resource_group.test.name places them inside the Resource Group. storage_account_type = "Standard_LRS" sets the storage type to Standard locally-redundant storage (LRS), create_option = "Empty" means the disks start out empty at creation, and disk_size_gb = "1023" sets each disk's size to 1023 GB.
So for lines 79 to 86 we have the following:
resource "azurerm_availability_set" "avset" {
name = "avset"
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
platform_fault_domain_count = 2
platform_update_domain_count = 2
managed = true
}
This means we are creating an Availability Set. In resource "azurerm_availability_set" "avset" {, the second "avset" is the name we give this block inside Terraform so we can reference it later (like a variable name), and name = "avset" means we are using "avset" as its name in Azure as well. location = azurerm_resource_group.test.location puts the availability set in the same region as the Resource Group, while resource_group_name = azurerm_resource_group.test.name places it inside the same Resource Group. platform_fault_domain_count = 2 makes sure our VMs are spread out across multiple racks, so if one rack fails (like a hardware issue), only the VMs on that rack go down and the other rack stays up. platform_update_domain_count = 2 spreads our VMs across 2 update domains.
Then managed = true requires that our VMs use managed disks, which is the modern and better way (Azure handles the storage behind the scenes, making it more reliable and scalable).
Now for lines 88 to 95 of our code, which is
resource "azurerm_virtual_machine" "test" {
count = 2
name = "acctvm${count.index}"
location = azurerm_resource_group.test.location
availability_set_id = azurerm_availability_set.avset.id
resource_group_name = azurerm_resource_group.test.name
network_interface_ids = [element(azurerm_network_interface.test.*.id, count.index)]
vm_size = "Standard_DS1_v2"
This means: resource "azurerm_virtual_machine" "test" { tells Terraform to create a Virtual Machine (VM) in Azure, where "test" is again just Terraform's nickname (internal reference name). count = 2 says to create 2 identical VMs, so Terraform will make acctvm0 and acctvm1 (using the count index). name = "acctvm${count.index}" sets the VM names: the first VM is acctvm0 and the second is acctvm1. Please NOTE: the ${count.index} part automatically appends a number (0, 1, etc.) to make each VM name unique. location = azurerm_resource_group.test.location places the VMs in the same region as the Resource Group, which avoids hardcoding "West US 2" again. availability_set_id = azurerm_availability_set.avset.id puts these VMs in the Availability Set (avset) we created earlier, and resource_group_name = azurerm_resource_group.test.name puts the VMs inside the Resource Group (acctestrg); again, we dynamically grab the Resource Group's name so we don't have to hardcode "acctestrg". Then network_interface_ids = [element(azurerm_network_interface.test.*.id, count.index)] attaches a Network Interface (NIC) to each VM so it can connect to the network; remember we already created 2 NICs earlier, "acctni0" and "acctni1", and element() with count.index matches each VM to its NIC. Finally, vm_size = "Standard_DS1_v2" sets the machine size (specs): Standard_DS1_v2 = 1 vCPU, 3.5 GB RAM.
Lines 97 to 101 are basically instructions, which state:
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
This means that if we uncomment these lines, Terraform will automatically delete the disks when we delete the VM. For now, they are commented out.
Then for Line 103-108 , which is
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
This tells Azure which OS to install: Ubuntu Server 16.04 LTS on these VMs.
Then from Line 110-115, which is
storage_os_disk {
name = "myosdisk${count.index}"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
What it means is that We’re setting up the OS Disk and Each VM will have its own OS disk (like the hard drive with the Ubuntu OS on it).
The ${count.index} makes each OS disk unique (e.g., myosdisk0, myosdisk1).
Then caching = "ReadWrite" sets the disk caching mode.
create_option = "FromImage" builds the OS disk from the Ubuntu image above, and the managed disk type is Standard_LRS.
Then from Line 117-124, which is
#Optional data Disk
storage_data_disk {
name = "datadisk_new_${count.index}"
managed_disk_type = "Standard_LRS"
create_option = "Empty"
lun = 0
disk_size_gb = "1023"
}
What's happening? We are adding a new empty data disk (extra storage space) to each VM.
The name = "datadisk_new_${count.index}" pattern works as discussed earlier: the names will be datadisk_new_0 and datadisk_new_1. Size: 1023 GB. lun = 0 is like saying "Disk 1".
Then from Line 126-132, which is
storage_data_disk {
name = element(azurerm_managed_disk.test.*.name, count.index)
managed_disk_id = element(azurerm_managed_disk.test.*.id, count.index)
create_option = "Attach"
lun = 1
disk_size_gb = element(azurerm_managed_disk.test.*.disk_size_gb, count.index)
}
What's happening? We are attaching the existing managed disks we created earlier (the 2 disks named datadisk_existing_0 and datadisk_existing_1). Terraform is smart here: it automatically links each VM to its matching disk using element() and count.index. lun = 1 is like saying "Disk 2".
For lines 134 to 138, which is
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
This sets the VM’s login info:
Username: testadmin
Password: Password1234!
Computer name: hostname
For lines 140 to 147, which is
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
What's happening? This keeps password login enabled. If you set disable_password_authentication to true, it would disable password login and expect SSH keys instead.
For the tags = { block: we are adding a tag to the VM to label it as part of the "staging" environment, which is good for organizing resources in Azure.
Step 5 .
After analyzing all the code, we press Ctrl+S to save it, then go back to the terminal and run "az login" so we can log in to the Azure account we want to deploy to.
The next thing is to run "terraform init" again; we should get the "Terraform has been successfully initialized!" response.
Next, run "terraform plan". This command only generates an execution plan, a preview of the changes Terraform would make, without actually executing them.
Then we run the last command, "terraform apply", to apply all changes and push our configuration to Azure. Terraform shows the plan and asks for confirmation; type "yes" to proceed.
After this we can head to Azure and click on the Resource Group we created to view all the resources we deployed.
Step 6.
Commit & Push Your Configuration to GitHub: navigate to "Source Control" in the left pane and enter the commit message "First Initial commit for Terraform", then click the "Commit" button.
Then click "Sync Changes" and click "OK".
Step 7:
Open GitHub, navigate to the repository we created earlier, named "Terraform-Project", and view the main.tf file and commit message.
Conclusion
That's it. We have now successfully provisioned Azure infrastructure using Terraform and pushed it to GitHub like a true DevOps engineer. We didn't just spin up resources manually; we wrote code that Azure understands and used automation to deploy everything efficiently.
By completing this project, we have learned how to:
Install and configure Terraform locally and on a Linux environment
Set up a full infrastructure using Terraform code: Resource Group, VNet, Subnet, Load Balancer, VMs, and more
Connect your local environment to Azure
Use GitHub to version-control your infrastructure as code
Thank you very much Guys.
See you on the Next one.
Written by Stillfreddie Techman