Step-by-Step Guide to Creating Azure Resources with Terraform and GitHub


Here’s a step-by-step guide to creating Azure resources using Terraform and GitHub, using GitHub Actions for automation.
Links:
Install | Terraform | HashiCorp Developer
hashicorp/azurerm | Terraform Registry
🚀 Step-by-Step Guide: Azure + Terraform + GitHub
🧰 Prerequisites
Azure Subscription
GitHub Account
Terraform installed locally (for testing/debugging)
Azure CLI installed (for initial setup)
Visual Studio Code (VS Code) installed on your local machine
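As a quick sanity check before starting, you can verify that the CLI tools from this list are on your PATH (a minimal sketch; `az`, `terraform`, and `git` are the standard command names):

```shell
# Check that each prerequisite CLI is available on PATH
for tool in az terraform git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done
```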
Here's my Terraform code tidied up for better readability, consistent formatting, and indentation:
```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.21.1"
    }
  }
}

provider "azurerm" {
  features {}

  subscription_id = "your subscriptionID"
}

resource "azurerm_resource_group" "test" {
  name     = "acctestrg"
  location = "West US 2"
}

resource "azurerm_virtual_network" "test" {
  name                = "acctvn"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
}

resource "azurerm_subnet" "test" {
  name                 = "acctsub"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.test.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "test" {
  name                = "publicIPForLB"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
  allocation_method   = "Static"
}

resource "azurerm_lb" "test" {
  name                = "loadBalancer"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name

  frontend_ip_configuration {
    name                 = "publicIPAddress"
    public_ip_address_id = azurerm_public_ip.test.id
  }
}

resource "azurerm_lb_backend_address_pool" "test" {
  name            = "BackEndAddressPool"
  loadbalancer_id = azurerm_lb.test.id
}

resource "azurerm_network_interface" "test" {
  count               = 2
  name                = "acctni${count.index}"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name

  ip_configuration {
    name                          = "testConfiguration"
    subnet_id                     = azurerm_subnet.test.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_managed_disk" "test" {
  count                = 2
  name                 = "datadisk_existing_${count.index}"
  location             = azurerm_resource_group.test.location
  resource_group_name  = azurerm_resource_group.test.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 1023
}

resource "azurerm_availability_set" "avset" {
  name                         = "avset"
  location                     = azurerm_resource_group.test.location
  resource_group_name          = azurerm_resource_group.test.name
  platform_fault_domain_count  = 2
  platform_update_domain_count = 2
  managed                      = true
}

resource "azurerm_virtual_machine" "test" {
  count                 = 2
  name                  = "acctvm${count.index}"
  location              = azurerm_resource_group.test.location
  availability_set_id   = azurerm_availability_set.avset.id
  resource_group_name   = azurerm_resource_group.test.name
  network_interface_ids = [element(azurerm_network_interface.test.*.id, count.index)]
  vm_size               = "Standard_DS1_v2"

  # Uncomment to delete disks on VM deletion
  # delete_os_disk_on_termination    = true
  # delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk${count.index}"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_data_disk {
    name              = "datadisk_new_${count.index}"
    managed_disk_type = "Standard_LRS"
    create_option     = "Empty"
    lun               = 0
    disk_size_gb      = 1023
  }

  storage_data_disk {
    name            = element(azurerm_managed_disk.test.*.name, count.index)
    managed_disk_id = element(azurerm_managed_disk.test.*.id, count.index)
    create_option   = "Attach"
    lun             = 1
    disk_size_gb    = element(azurerm_managed_disk.test.*.disk_size_gb, count.index)
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags = {
    environment = "staging"
  }
}
```
Open PowerShell as Administrator and install Terraform. If it is not installed yet, follow these steps.
If you see an error saying the choco command is not recognized, Chocolatey (a Windows package manager) is either not installed or not added to your system's PATH.
✅ Step 1: Install Chocolatey
Open PowerShell as Administrator.
Run the following command to install Chocolatey:
```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; `
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; `
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
```
This will download and install Chocolatey.
✅ Step 2: Verify Chocolatey Installation
After installation completes:
Close and reopen PowerShell.
Run:
```powershell
choco --version
```
If it returns a version number, Chocolatey is installed correctly.
✅ Step 3: Install Terraform
Now you can install Terraform with:
```powershell
choco install terraform
```
If you prefer not to use Chocolatey, you can install Terraform manually instead:
- Install Terraform On Windows
A. Go to the official Terraform website and, as shown in the image, select the Windows operating system to install Terraform.
B. Under Windows, click on the 386 version to download the binary.
Launch VS Code and install the Terraform extension.
Test that Terraform is fully installed.
Create a new file named main.tf and insert the default Terraform code from hashicorp/azurerm | Terraform Registry.
Go to the Azure portal, copy your subscription ID, and paste it into main.tf in VS Code.
Go to Terminal, open a new terminal, and run: terraform init
Copy and paste the remaining Terraform code displayed above into main.tf.
In the terminal, run: terraform plan -out tfplan (avoid -out main.tf, since -out names the file the binary plan is written to and would overwrite your configuration).
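Put together, the local cycle looks like the sketch below (guarded so it only runs when Terraform is actually installed; the plan file name tfplan is an arbitrary choice):

```shell
# Typical local Terraform cycle: init, validate, plan to a file, then apply that exact plan
if command -v terraform >/dev/null 2>&1; then
  terraform init               # download the azurerm provider
  terraform validate           # catch syntax and reference errors early
  terraform plan -out=tfplan   # save the plan so apply runs exactly what was reviewed
  terraform apply tfplan
else
  echo "terraform not found on PATH - install it first (see the steps above)"
fi
```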
Step 1: 🔐 Create an Azure Service Principal
This is used by Terraform in GitHub Actions to authenticate to Azure.
```bash
az ad sp create-for-rbac --name "terraform-sp" --role="Contributor" \
  --scopes="/subscriptions/<SUBSCRIPTION_ID>" \
  --sdk-auth
```
Replace <SUBSCRIPTION_ID> with your Azure subscription ID.
Copy the JSON output. You’ll save this in GitHub later as a secret.
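The --sdk-auth output is a JSON object along these lines (values here are placeholders; the real output also includes several Azure endpoint URLs):

```json
{
  "clientId": "<GUID>",
  "clientSecret": "<secret>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>"
}
```

These fields are what the azure/login action reads from the AZURE_CREDENTIALS secret you create later.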
Step 2: 📁 Prepare Your Terraform Configuration
Here's an example main.tf to deploy a basic Azure resource group (note that features is a block, not an argument, so it is written features {} with no equals sign):

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "East US"
}
```
Other common files:
- variables.tf: define variables.
- terraform.tfvars: set values for variables.
- outputs.tf: define output values.
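As a sketch of how those files fit together (the location variable and output name here are made up for illustration; each snippet normally lives in its own file):

```hcl
# variables.tf - declare inputs
variable "location" {
  type        = string
  description = "Azure region for all resources"
  default     = "East US"
}

# terraform.tfvars - set values
# location = "West US 2"

# outputs.tf - expose values after apply
output "resource_group_name" {
  value = azurerm_resource_group.example.name
}
```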
Step 3: 💾 Push Terraform Code to GitHub
Structure:
```
/my-terraform-repo
├── main.tf
├── variables.tf
├── terraform.tfvars
└── outputs.tf
```
Commit and push this to a GitHub repo.
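A minimal sketch of creating that layout and committing it (the remote URL is a placeholder you would replace with your own repository):

```shell
# Create the repository layout from the structure above
mkdir -p my-terraform-repo
cd my-terraform-repo
touch main.tf variables.tf terraform.tfvars outputs.tf
ls -1

# Then commit and push (placeholder remote URL):
# git init -b main
# git add . && git commit -m "Initial Terraform configuration"
# git remote add origin https://github.com/<your-username>/my-terraform-repo.git
# git push -u origin main
```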
Step 4: 🔐 Add GitHub Secrets
In your GitHub repo:
Go to Settings → Secrets and variables → Actions
Add a secret:
Name: AZURE_CREDENTIALS
Value: Paste the full JSON output from Step 1
Step 5: ⚙️ Create a GitHub Actions Workflow
Create .github/workflows/deploy.yml:
```yaml
name: Terraform Deploy

on:
  push:
    branches:
      - main

jobs:
  terraform:
    name: 'Terraform Plan and Apply'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan

      - name: Terraform Apply
        run: terraform apply -auto-approve
```
Note that the Azure Login step runs before the Terraform steps so that Terraform can authenticate through the Azure CLI session it establishes.
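Optionally, hashicorp/setup-terraform lets you pin the Terraform version with its terraform_version input (the version number below is just an example):

```yaml
- name: Set up Terraform
  uses: hashicorp/setup-terraform@v3
  with:
    terraform_version: "1.9.8"
```

Pinning keeps CI runs reproducible instead of silently picking up the latest release.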
Step 6: ✅ Commit & Push Workflow
Push the workflow file to GitHub. It will automatically run on every push to the main branch.
Step 7: 🎉 Monitor Workflow & Validate in Azure
Go to Actions tab in GitHub to monitor the deployment.
Go to the Azure portal and confirm the resource group (or other resources) was created.
Written by

Daniel Erebi
Daniel Erebi is a dynamic and forward-thinking professional currently making strides as an Intern in Cloud Computing Engineering, driven by a passion for innovation and a commitment to transitioning into the tech industry. With a robust foundation as an Automation Engineer in the oil and gas sector, Daniel brings a unique perspective to cloud computing, combining hands-on technical expertise with a strategic mindset honed in high-stakes industrial environments. A registered Electrical/Electronic Engineer holding a Master’s Degree, Daniel has consistently demonstrated excellence in problem-solving, systems optimization, and cross-disciplinary collaboration. His experience in automation engineering involved designing and implementing advanced control systems, fostering skills in programming, data analysis, and workflow automation, competencies now being leveraged to bridge the gap between industrial engineering and cutting-edge cloud solutions. Currently focused on mastering cloud infrastructure, Daniel excels at evaluating and recommending cloud service providers, tools, and architectures to optimize performance, security, and cost-efficiency. His proficiency in programming languages and DevOps practices positions him to design scalable, resilient cloud systems, while his analytical approach ensures alignment with organizational goals. Committed to becoming a top-notch Cloud Engineer, Daniel is actively expanding his knowledge in platforms like AWS, Azure, and Google Cloud, as well as technologies such as Kubernetes, Terraform, and CI/CD pipelines. His engineering background, paired with a relentless curiosity for emerging tech, equips him to tackle complex challenges in cloud migration, hybrid environments, and IoT integration. Daniel is driven by the belief that the fusion of technical skill, industry experience, and lifelong learning is key to shaping the future of cloud computing.
With a proven track record of adaptability and a vision for innovation, Daniel Erebi is poised to make meaningful contributions to the tech landscape. Thank you for your interest in Daniel Erebi's journey.