Setting Up E-commerce on Google Compute Engine: A Guide to Automation with Terraform and Ansible

bablu
8 min read

You've landed your dream job as a sysadmin in the tech world of the late 1990s. Your company, a leading electronics retailer, aims to sell electronics online, pioneering e-commerce when dial-up was common and "the cloud" referred to weather forecasts.

Alright, let's get into our '90s groove! We'll use Google Cloud's tools, especially Google Compute Engine (GCE), to have powerful, scalable servers without needing physical gear. This lets us use Infrastructure as Code (IaC) tools like Terraform and Ansible to easily set up and configure our virtual machines, giving us an edge in managing our e-commerce platform.

To start this deployment adventure, you'll need a Google Cloud account for access to virtual servers on Google Compute Engine and a domain name for your online store. For Infrastructure as Code tasks, we'll use Google Cloud Shell, a browser-based command-line environment with essential tools like Terraform. We'll install Ansible in Cloud Shell to script our e-commerce setup.

Setting Up Our Infrastructure with Terraform

Now, let's set up our e-commerce operation. No need for cables or server racks; we're using Terraform. Terraform acts as our architect, building everything on Google Compute Engine without physical machines. We'll specify the virtual servers we want, their connections, and the needed resources.

For our first step into online electronics sales, we'll set up a single Google Compute Engine virtual machine as our all-in-one e-commerce hub. This means our web server, application logic, and database will all be on one machine, making our launch easier and faster. In a future article, we'll explore expanding with dedicated servers as we grow.

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "6.35.0" 
    }
  }
}

provider "google" {
  project = "<provide projectID>" 
  region  = "us-central1"         
  zone    = "us-central1-a"       
}

This script sets up a Terraform configuration to use Google Cloud. The terraform { ... } block pins the Google provider, published by HashiCorp, to version 6.35.0. The provider "google" { ... } block points the configuration at a specific Google Cloud project, region, and zone, ensuring the virtual machine has a designated spot in Google's data centers.
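A small optional refinement: rather than pasting the project ID directly into the provider block, it can come from a variable. A minimal sketch (the variable name project_id is our own choice; this version of the provider block would replace the one above):

# Sketch: pull the project ID from a variable instead of hard-coding it.
variable "project_id" {
  description = "Your Google Cloud project ID"
  type        = string
}

provider "google" {
  project = var.project_id
  region  = "us-central1"
  zone    = "us-central1-a"
}

You would then supply the value at apply time, for example with terraform apply -var="project_id=my-project" or a terraform.tfvars file.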

resource "google_compute_network" "custom_vpc" {
  name                    = "custom-vpc-network"
  auto_create_subnetworks = false // We will create a custom subnet
  routing_mode            = "REGIONAL" 
}
resource "google_compute_subnetwork" "custom_subnet" {
  name          = "custom-subnet-us-central1"
  ip_cidr_range = "10.0.10.0/29" 
  network       = google_compute_network.custom_vpc.id
  region        = "us-central1"    
}

The google_compute_network resource block creates our own Virtual Private Cloud, named custom-vpc-network. Setting auto_create_subnetworks = false stops Google from generating default subnets in every region, since we'll define our own, and routing_mode = "REGIONAL" keeps routes scoped to a single region. The google_compute_subnetwork resource block then carves out a segment within that network. The name = "custom-subnet-us-central1" labels it as our sub-network in the us-central1 region. The ip_cidr_range = "10.0.10.0/29" assigns a small block of eight IP addresses (Google reserves four per subnet, leaving four usable, which is plenty for our single server), ensuring dedicated space without conflicts in Google's network.

The line network = google_compute_network.custom_vpc.id links the sub-network to the custom network we set up, telling Terraform to place custom_subnet inside custom_vpc. This ability to reference one resource from another is what makes the setup interconnected.

The region = "us-central1" ensures the sub-network is in the us-central1 area, like our main network.

By setting up custom_vpc and custom_subnet, we create a secure and organized environment for our e-commerce server. With our custom network and sub-network ready, the next step is managing access to our e-commerce server. This is where the firewall comes in, controlling what digital traffic can reach our server.

resource "google_compute_firewall" "allow_custom_traffic" {
  name    = "fw-allow-ssh-http-custom-tcp"
  network = google_compute_network.custom_vpc.id

  allow {
    protocol = "tcp"
    ports    = ["22", "80", "3306"] 
  }

  source_ranges = ["0.0.0.0/0"] 
}

This resource block, google_compute_firewall, sets up our network's entry rules.

The name = "fw-allow-ssh-http-custom-tcp" labels the firewall rules for allowed traffic.

The network = google_compute_network.custom_vpc.id connects these rules to our custom network. The allow { ... } block specifies allowed traffic using protocol = "tcp". For ports, we've listed ["22", "80", "3306"].

  • Port 22: Used for SSH (Secure Shell), allowing sysadmins to securely access the server for maintenance and software installation.

  • Port 80: Used for HTTP (Hypertext Transfer Protocol), enabling customers to browse and shop on our e-commerce site.

  • Port 3306: The default port for MySQL, ensuring our web application can access the database on the same server.

The source_ranges = ["0.0.0.0/0"] setting allows anyone on the internet to reach these ports. That may suit our late-'90s scenario, but leaving ports 22 and 3306 open to the world is a major security risk today. It's like leaving your doors wide open. In our next article, we'll improve security by restricting access to these sensitive ports, making our e-commerce store safer for both our business and customers.
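As a preview of that hardening, here is a hedged sketch of a tighter rule for the admin-facing ports, assuming a trusted admin network of 203.0.113.0/24 (a placeholder CIDR you would swap for your own office or VPN range):

# Sketch: restrict SSH and MySQL to a trusted admin range.
resource "google_compute_firewall" "allow_admin_traffic" {
  name    = "fw-allow-ssh-mysql-admin"
  network = google_compute_network.custom_vpc.id

  allow {
    protocol = "tcp"
    ports    = ["22", "3306"]
  }

  # Only the admin range may reach SSH and MySQL; swap in your own CIDR.
  source_ranges = ["203.0.113.0/24"]
}

Port 80 would keep its open rule so shoppers can still reach the site from anywhere.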

Fantastic! We've set up our digital space (VPC), created a special area (subnet), and added a basic security guard (firewall). Now it's time for the main attraction: the server itself, the virtual machine that will host our e-commerce operation. This is where we define the core of our online electronics store.

resource "google_compute_instance" "centos_vm" {
  name         = "my-centos-vm"
  machine_type = "n2-standard-2"
  zone         = "us-central1-a" 

  boot_disk {
    initialize_params {
      image = "centos-cloud/centos-stream-9" 
      size  = 20
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.custom_subnet.id
    access_config {
    }
  }

  scheduling {
    automatic_restart   = true
    on_host_maintenance = "MIGRATE"
  }
}

This resource block, google_compute_instance, asks Google Compute Engine to create a new virtual machine.

The name = "my-centos-vm" gives our server a friendly name.

machine_type = "n2-standard-2" selects a general-purpose machine type, offering balanced processing power and memory, suitable for managing our website and database on one machine.

The zone = "us-central1-a" pins our server to a specific zone within the central US region.

The boot_disk { ... } section handles the server's startup drive. The initialize_params set up this drive with image = "centos-cloud/centos-stream-9", installing CentOS Stream 9, a Linux distribution popular on servers. The size = 20 gives us a 20 GB boot disk, ample for our initial e-commerce needs.

The network_interface { ... } block connects our server to our custom network. The line subnetwork = google_compute_subnetwork.custom_subnet.id attaches the server to custom_subnet, keeping it inside our private network. The empty access_config { } block requests an ephemeral public IP address so the website is reachable from the internet.

The scheduling { ... } section manages server uptime. With automatic_restart = true, the server restarts automatically after a crash. on_host_maintenance = "MIGRATE" live-migrates the server to another physical host during Google's maintenance, keeping the service available.

With this google_compute_instance, we've set up the core of our e-commerce infrastructure, ready to go live and sell electronics globally!
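Since we'll need the VM's public address shortly for DNS, it's handy to have Terraform print it after an apply. A minimal sketch (the output name vm_external_ip is our own choice):

# Surface the VM's ephemeral public IP after apply.
output "vm_external_ip" {
  description = "Public IP address of the e-commerce VM"
  value       = google_compute_instance.centos_vm.network_interface[0].access_config[0].nat_ip
}

After terraform apply, run terraform output vm_external_ip to read it back at any time.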

Bravo! We've built our virtual server and network. But how do customers find our online electronics store using a friendly name like "exampleelectronics.com" instead of an IP address? This is where the Domain Name System (DNS) comes in, acting like the internet's phonebook. We need to ensure that when someone types our domain name, they are directed to our new server on Google Compute Engine.

resource "google_dns_managed_zone" "primary_zone" {
  name          = "primary-zone"
  dns_name      = "example.com."
  description   = "A primary managed zone for example.com"
  visibility    = "public"
  force_destroy = true
  labels = {
    "environment" = "production"
  }
}

# "A" Record for the root domain (example.com)
resource "google_dns_record_set" "root_a_record" {
  name         = google_dns_managed_zone.primary_zone.dns_name
  type         = "A"
  ttl          = 300
  managed_zone = google_dns_managed_zone.primary_zone.name
  rrdatas      = ["external IP"]
}

# "CNAME" Record for the "www" subdomain (www.example.com)
resource "google_dns_record_set" "www_cname_record" {
  name         = "www.${google_dns_managed_zone.primary_zone.dns_name}"
  type         = "CNAME"
  ttl          = 300
  managed_zone = google_dns_managed_zone.primary_zone.name
  rrdatas      = [google_dns_managed_zone.primary_zone.dns_name]
}

The resource "google_dns_managed_zone" "primary_zone" { ... } sets up a "managed zone," like adding an entry to the Internet's phonebook. Use dns_name = "example.com." and replace example.com with your own domain, keeping the trailing dot (for example, exampleelectronics.com.). This tells Google Cloud DNS to handle your domain's requests. visibility = "public" makes the zone resolvable on the public internet. force_destroy = true lets terraform destroy delete the zone even if it still contains records, which helps with cleanup, and labels keeps resources organized.

Next, we define the actual "phonebook entries" or records that tell people where your domain points.

The resource "google_dns_record_set" "root_a_record" { ... } creates an "A" record, which links your main domain (like exampleelectronics.com) to your server's IP address. The name refers to the dns_name in primary_zone. type = "A" specifies it's an A record. ttl = 300 tells resolvers to cache this answer for 300 seconds. managed_zone ties the record to the zone we just created. Replace "external IP" in rrdatas = ["external IP"] with your VM's public IP address.

Finally, the resource "google_dns_record_set" "www_cname_record" { ... } sets up a "CNAME" record for the "www" version of your domain, like www.exampleelectronics.com. Instead of linking directly to an IP address, it creates an alias, directing traffic to wherever exampleelectronics.com points. This is useful because if your server's IP changes, you only update the main domain's "A" record, and the "www" CNAME automatically follows.

After applying this Terraform code, log in to your domain registrar and update the "nameservers" to those provided by Google Cloud DNS. Google will give you a list of nameservers once your managed zone is created. This step lets the Internet know that Google Cloud DNS manages your domain. After a brief wait for DNS propagation, typing your domain name will direct you to your new e-commerce server!
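Rather than hunting through the console for that nameserver list, Terraform can surface it directly. A minimal sketch (the output name dns_name_servers is our own choice):

# Print the nameservers Google assigns to the managed zone.
output "dns_name_servers" {
  description = "Nameservers to configure at your domain registrar"
  value       = google_dns_managed_zone.primary_zone.name_servers
}

Run terraform output dns_name_servers after the apply and copy the values into your registrar's nameserver settings.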

What's Next: Equipping Our E-commerce Powerhouse with Ansible

Our e-commerce server is ready on Google Compute Engine, connected to our network, protected by a firewall, and accessible via DNS. However, it's currently just an empty setup without a web server, database, or e-commerce application.

This is where Ansible shines. Next, we'll use Ansible to transform our CentOS virtual machine into a complete online store. We'll create Ansible playbooks to install the web server Apache, set up a MariaDB database, and deploy the e-commerce application. This will automate the process, turning code into a live online store efficiently. Stay tuned as we harness the power of configuration management.
