Navigating Cloud Horizons
Stepping into the world of DevOps, I found myself at the forefront of a monumental journey: migrating our company's infrastructure from Amazon Web Services (AWS) to Google Cloud Platform (GCP). As a budding DevOps engineer who had recently moved from intern to junior level, I saw this migration as a thrilling opportunity for professional growth and hands-on experience.
During our tenure on AWS, we hosted our backend applications on Amazon ECS, a reliable and familiar framework. With the move to GCP, we chose Google Kubernetes Engine (GKE), Google's managed Kubernetes service, as our container orchestration platform.
Our migration journey comprises several key segments, each essential for a successful transition:
Database Migration: Shifting our databases from Amazon RDS to Cloud SQL using GCP's Database Migration Service.
Artifact Registry Setup: Creating a registry to store and version our container images.
Service Account Creation: Crafting a service account with the permissions needed to interact with the Artifact Registry and the Kubernetes cluster, which is particularly crucial for deployments from GitHub Actions.
Cluster Setup: Configuring the Kubernetes cluster that serves as the backbone of our infrastructure on GCP.
Secrets Management: Employing Secret Manager for secure handling of sensitive information.
Deployment and Service Files Creation: Developing deployment and service files lays the groundwork for application scalability and management.
GitHub Pipelines Setup: Integrating GitHub pipelines with the service account facilitates automated deployments, enhancing our development workflow.
Namespace Creation: Establishing separate namespaces for the development and production environments so that related resources are isolated and can communicate within the same namespace.
Ingress Controller Configuration: Implementing an Ingress controller for load-balancing purposes.
Ingress Resource Deployment: Creating and deploying ingress resources within appropriate namespaces enables effective communication with services and exposes applications via the ingress controller's external IP.
SSL Certificate Generation: Securing our applications with SSL certificates ensures encrypted communication and enhanced security for domain names used to access the apps.
These segments constitute the roadmap for our migration journey, guiding us through the intricate process of transitioning from AWS to GKE in GCP. Next, we'll explore each step in detail, understanding how to set them up and why they're important for our migration.
Please note: in GCP, before using a feature for the first time, you need to enable its API if it hasn't been enabled already. Keep this in mind as you proceed; a sketch of how this looks with gcloud follows.
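For instance, APIs can be enabled from the console or with a single gcloud command. A minimal sketch, assuming the services used later in this article (the exact list is an assumption; adjust it to your project):

# Enable the GCP APIs used in this migration (service names assumed; trim or extend as needed)
gcloud services enable \
  datamigration.googleapis.com \
  sqladmin.googleapis.com \
  artifactregistry.googleapis.com \
  container.googleapis.com \
  secretmanager.googleapis.com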
Database Migration
To migrate a database from AWS RDS to Google Cloud SQL, you navigate to the "Database Migration" section in GCP and create a migration job. Job creation consists of several phases:
Job Initialization:
Provide a name and ID for your migration job.
Define the source (AWS RDS) and destination (Google Cloud SQL) database engines.
Choose the migration type; continuous migration ensures ongoing synchronization until manual disconnection.
Connection Profile Setup:
Create or connect to a connection profile.
If creating a new profile, input details such as name, ID, hostname or IP address, port, username, and password for the source database.
Specify the connection profile region.
Destination Configuration:
Select an existing database instance or create a new one.
If you are creating a new instance, define instance properties and click "create and continue" to instantiate the new database.
Connectivity Method Definition:
Choose the connectivity method, such as "IP allowlist", to enable incoming connections from the Cloud SQL instance to your database.
Testing and Execution:
Test the connection to ensure everything is configured correctly. If the test is successful, initiate the migration job to begin transferring data from AWS RDS to Google Cloud SQL.
It's crucial to note that the migration won't complete successfully unless your source database meets specific parameters (a quick way to check them is sketched after this list):
max_replication_slots: It should be configured to at least match the number of databases intended for replication.
max_wal_senders: Ensure it's set to at least the number of databases intended for replication.
max_worker_processes: This setting should also match the number of databases intended for replication.
The database must have the pglogical extension installed.
The 'wal_level' configuration must be set to 'logical'.
Verify that the 'rds.logical_replication' parameter is set to 'on'; otherwise, it may currently be 'off' and could cause operational issues.
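A quick way to confirm these prerequisites is to query the source RDS PostgreSQL instance before starting the migration job. A minimal sketch using psql, where the connection details are placeholders:

# Check the replication prerequisites on the source database (connection values are placeholders)
psql "host=<rds-endpoint> user=<user> dbname=<db>" \
  -c "SHOW wal_level;" \
  -c "SHOW max_replication_slots;" \
  -c "SHOW max_wal_senders;" \
  -c "SHOW max_worker_processes;" \
  -c "SELECT extname FROM pg_extension WHERE extname = 'pglogical';"

If pglogical is missing, add it to the shared_preload_libraries setting in your RDS parameter group, reboot, and then run CREATE EXTENSION pglogical; on each database to be replicated.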
Artifact Registry Setup
The Artifact Registry is where you store the images of your applications before deploying them to your cluster. Here's how you set it up:
Creating Repositories:
Click on the plus (+) button to initiate the creation of a new repository.
Provide a name that reflects the purpose or content of the repository.
Choose the format of the images you'll be storing; typically, it's Docker for containerized applications.
Select the mode of access.
Specify the type of location and the region(s) where you want your repository to be stored.
Optionally, add a cleanup policy to automatically remove unused images, if necessary.
Once configured, proceed to create the repository (an equivalent gcloud command is sketched below).
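If you prefer the command line, the same repository can be created with gcloud. A rough sketch, where the repository name, location, and description are placeholders:

# Create a Docker-format repository in Artifact Registry (name and region are examples)
gcloud artifacts repositories create example-repo \
  --repository-format=docker \
  --location=us-west2 \
  --description="Container images for our backend applications"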
Service Account Creation
Service account creation, particularly in our context, is crucial for letting GitHub Actions interact effectively with Google Cloud Platform (GCP). Let's break down the process in simple terms.
Purpose of Service Account:
We create a service account primarily to give GitHub Actions the permissions it needs to access and modify resources within GCP. These permissions cover tasks like:
Retrieving secrets stored in the Secret Manager.
Pushing images to and making changes in the Artifact Registry.
Managing configurations within our Kubernetes cluster, among other functions.
Granting Permissions:
We ensure that the service account is granted all the essential permissions required for our deployment workflow to function smoothly.
Key Creation and Security:
To authenticate GitHub Actions with GCP, we generate a key for the service account. This key is stored securely in GitHub secrets to prevent unauthorized access.
However, it's important to note that keys can pose security risks if they fall into the wrong hands.
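To make this concrete, here's a rough gcloud sketch of creating the account, granting roles, and generating a key. The account name and role list are assumptions; grant only the roles your own workflow needs:

# Create the service account (the name is a placeholder)
gcloud iam service-accounts create github-deployer \
  --display-name="GitHub Actions deployer"

# Grant the roles the deployment workflow needs (adjust this list to your requirements)
for role in roles/artifactregistry.writer roles/container.developer roles/secretmanager.secretAccessor; do
  gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:github-deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="$role"
done

# Generate a key and store its JSON contents in a GitHub secret (e.g. GOOGLE_CREDENTIALS)
gcloud iam service-accounts keys create key.json \
  --iam-account="github-deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com"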
Exploring Alternative Methods:
In addition to using keys, alternative methods like "OpenID Connect" can be explored for integrating GitHub Actions with the service account. OpenID Connect operates without a long-lived key, reducing the risk of unauthorized access.
Cluster Setup (GKE)
Setting up a cluster in Google Cloud Platform (GCP) is a pivotal step in preparing your infrastructure for deployment. Let's explore the two types of clusters available:
Autopilot Cluster:
An Autopilot cluster is designed to simplify cluster management by automating resource provisioning and management. It automatically adjusts the size and configuration of nodes based on workload requirements, optimizing resource utilization. This type of cluster is ideal for users who prioritize simplicity and automation in managing their infrastructure.
Standard Cluster:
A Standard cluster provides more granular control over cluster configuration and resource allocation. Users have the flexibility to customize node configurations, including machine type, disk size, and node pool configurations. This type of cluster is suitable for users who require fine-tuned control over their infrastructure and specific performance requirements.
Key Difference:
The primary difference between Autopilot and Standard clusters lies in the level of automation and control they offer. Autopilot clusters prioritize simplicity and automation, while Standard clusters provide more customization and control options. Once you've considered the characteristics of each cluster type and determined which best fits your use case, you can proceed to select your preferred cluster type and configure it according to your requirements.
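Cluster creation can be done from the console or the command line. A minimal gcloud sketch of both options, with the cluster name, region/zone, and node settings as placeholders:

# Autopilot cluster (GKE provisions and manages the nodes for you)
gcloud container clusters create-auto example-cluster --region us-west2

# Standard cluster (you control machine type, node count, and other details)
gcloud container clusters create example-cluster \
  --zone us-west2-a \
  --num-nodes 3 \
  --machine-type e2-standard-4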
After creating your cluster, you can connect your local terminal to it and run 'kubectl' commands with the command below. Make sure you have kubectl installed first:
gcloud container clusters get-credentials [CLUSTER_NAME] --zone [ZONE] --project [PROJECT_ID]
Beforehand, ensure you have authenticated your local terminal with your GCP account using:
gcloud auth login
This straightforward process enables you to bridge your local development environment with your GCP cloud infrastructure, empowering you to manage and deploy applications with ease from your local terminal. With your cluster set up, you're now ready to proceed with deployments!
Secrets Management
When managing secrets in Google Kubernetes Engine (GKE), you have two main options: utilizing Secret Manager in GCP or leveraging "configmaps and secrets" within the cluster itself. Let's explore how each method works:
Secret Manager:
Secret Manager allows you to securely store and manage sensitive information such as API keys, passwords, and certificates. Here's how it operates:
Create a secret with a meaningful name and upload a file containing your environment variables or manually input them in the provided text box.
During your workflow's build phase, pull the created secret and include it as part of your image's packages.
For instance, in a GitHub Actions workflow, you might have a step like this to download secret files:
- name: Download Secret Files from Secret Manager
  run: |
    gcloud secrets versions access latest --secret="example_secret_name" > .env
This approach ensures that your application can access its variables from a .env file, which is provided to the image from the secrets during the build phase.
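Creating and updating the secret itself can also be done with gcloud. A small sketch, where the secret name and the .env file path are placeholders:

# Create the secret from a local .env file
gcloud secrets create example_secret_name --data-file=.env

# Add a new version whenever the variables change
gcloud secrets versions add example_secret_name --data-file=.env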
ConfigMaps and Secrets:
ConfigMaps and Secrets within the cluster offer a similar purpose but differ in implementation:
In this method, you create and store secrets directly within the cluster's "configmaps and secrets" section.
Then, in your deployment file, you specify and mount them as volumes to your pods.
For instance, here's a simplified example of how to mount a secret into pods within a deployment file. This lets the pods read variables from the specified secret stored in the cluster's "ConfigMaps and Secrets" section:
...
spec:
  containers:
    - name: example-container
      image: example-image
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret
...
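For completeness, the secret referenced above could be created from the same kind of .env file directly in the cluster. A small sketch, with the secret name and namespace as placeholders:

# Create a Kubernetes secret whose keys are the variables in the .env file
kubectl create secret generic my-secret --from-env-file=.env --namespace=example-ns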
It's important to choose between the two methods based on how your application is structured and where it expects its variables from. In our case, our application relies on reading variables from a .env file, making Secret Manager a suitable choice for us.
Deployment and Service Files
Creating deployment and service files involves practical steps in defining how your application is deployed and utilizes resources within the cluster.
The files below are typical examples of service and deployment files, respectively, with comments that explain the crucial parts of the files. Let's break down the essential details.
Service File Explanation:
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # Name of the service
  namespace: example-ns      # Namespace where the service belongs
spec:
  selector:
    app: example-select      # Label selector for matching pods (used to communicate with the pods)
  ports:
    - protocol: TCP
      port: 80               # Port for incoming traffic
  type: ClusterIP            # Type of service; internal communication only
Note: While you have the option to use a load balancer to directly expose your app, we opted against this because we require ingress to specify paths. In our scenario, we rely on the ingress controller's load balancer to expose our application. This is why we used "ClusterIP" to expose it internally and then later used ingress resources to expose it externally. We'll discuss this further later in the article.
Deployment File Explanation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deploy          # Name of the deployment
  namespace: example-ns         # Namespace where the deployment belongs
  labels:
    app: example-select         # Labels for identifying the deployment
spec:
  selector:
    matchLabels:
      app: example-select       # Selector for matching pods
  template:
    metadata:
      labels:
        app: example-select     # Labels for identifying pods created by this deployment
    spec:
      containers:
        - name: example-container              # Name of the container
          image: example-artifact_repo_image   # Image to be used for the container
          ports:
            - containerPort: 80                # Port exposed by the container
          resources:
            limits:
              cpu: "1"                         # Maximum CPU usage allowed
              memory: "2Gi"                    # Maximum memory usage allowed
            requests:
              cpu: "0.5"                       # Minimum CPU required
              memory: "1Gi"                    # Minimum memory required
          imagePullPolicy: Always              # Policy for pulling the container image
      serviceAccountName: default              # Service account associated with the pod
We can use "kubectl" for deploying the above files. You can use the following command to apply your deployment and service files:
kubectl apply -f /path/to/name_of_file
Note: GitHub Actions is primarily used for CI/CD (Continuous Integration/Continuous Deployment); however, you can still use your command line to set up everything in this step.
GitHub Pipelines Setup
Setting up GitHub pipelines involves creating a YAML file known as a "GitHub Actions workflow" within the .github/workflows folder. This file defines instructions that GitHub executes whenever changes are pushed to the remote branch. Let's explore an example workflow and break down each step. Below is a GitHub pipelines setup with explanations as comments within the file:
# Define the name of the workflow
name: GCP-Deploy

# Define when the workflow should run
on:
  push:
    branches: [example-branch]   # Run when changes are pushed to 'example-branch'

# Define the jobs to be executed
jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/example-branch'   # Run only for 'example-branch'; otherwise the job is skipped

    # Environment variables avoid repetition and simplify configuration
    env:
      PROJECT_ID: your_project_ID
      GKE_CLUSTER: your_cluster
      GKE_ZONE: your_hosted_zone
      GCLOUD_VERSION: your_version
      REPOSITORY_URL: us-west2-docker.pkg.dev/$PROJECT_ID/your_artifact_repo_name
      DEPLOYMENT_NAME: your_deployment_name
      IMAGE: your_image_name
      REPO_NAME: your_artifact_repo_name

    # Specify each action to be taken
    steps:
      - uses: actions/checkout@v4

      # Authenticate to Google Cloud using the service account key.
      # Store the key in GitHub secrets under "GOOGLE_CREDENTIALS" (or your preferred name)
      # and reference it here; this step authenticates the workflow to GCP as the service account.
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        id: 'auth'
        with:
          credentials_json: ${{ secrets.GOOGLE_CREDENTIALS }}

      # Set up Cloud SDK for GCP operations
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v1
        with:
          project_id: ${{ env.PROJECT_ID }}

      # As discussed earlier, fetch your secrets from Secret Manager and save them to a
      # .env file so they are bundled with the application image during packaging
      - name: Download Secret Files from Secret Manager
        run: |
          gcloud secrets versions access latest --secret="example_secret_name" > .env

      # Allow Docker to authenticate with Artifact Registry without user interaction.
      # "--quiet" prevents interactive prompts from interrupting the workflow.
      - name: 'Docker auth'
        run: |-
          gcloud auth configure-docker us-west2-docker.pkg.dev --quiet

      # Build the Docker image and tag it with the GitHub SHA for uniqueness
      - name: Build
        id: build
        run: |-
          docker build \
            --tag "${{ env.REPOSITORY_URL }}/$IMAGE:$GITHUB_SHA" \
            --build-arg GITHUB_SHA="$GITHUB_SHA" \
            --build-arg GITHUB_REF="$GITHUB_REF" \
            --build-arg app=your_app_name \
            .
          # Expose the full image name as a step output so the Publish and Update steps can reference it
          echo "image_name=${{ env.REPOSITORY_URL }}/$IMAGE:$GITHUB_SHA" >> "$GITHUB_OUTPUT"

      # Push the built image to the artifact repository
      - name: Publish
        run: |-
          docker push "${{ steps.build.outputs.image_name }}"

      # Substitute the newly generated image name from the build step into the container
      # section of the deployment file, replacing the placeholder so the deployment file
      # accurately reflects the correct image and enables seamless deployment
      - name: Update Deployment File
        run: |
          sed -i "s~image: .*~image: '${{ steps.build.outputs.image_name }}'~" ./path/to/your_deployment.yaml

      # Install the plugins necessary for GKE operations
      - name: Install gke-gcloud-auth-plugin
        run: |
          gcloud components install gke-gcloud-auth-plugin
          gcloud components install kubectl

      # Check kubectl version (optional)
      - name: Check kubectl version
        run: kubectl version --client

      # Configure kubectl to work with the specified GKE cluster
      - name: Configure kubectl
        run: |
          gcloud container clusters get-credentials $GKE_CLUSTER --zone $GKE_ZONE --project $PROJECT_ID

      # Apply the deployment YAML file and check deployment status
      - name: Deploy
        run: |-
          kubectl apply -f ./path/to/your_deployment.yaml --namespace=example-ns
          kubectl rollout status deployment/your_deployment_name --namespace=example-ns
          kubectl get services -o wide --namespace=example-ns
Note: A namespace is attached to the deployments; we'll discuss its purpose in the next step.
Namespace Creation
Understanding namespaces is fundamental in managing resources within your cluster. Think of namespaces as organizational units that help group related resources together. By creating a namespace, such as "dev" for your development environment, you establish a boundary where resources can interact and communicate exclusively within that namespace.
For instance, consider a scenario where you have various components like Deployments, Services, Secrets, and Ingress resources that collaborate within a specific environment. Placing them within the same namespace ensures effective communication and coordination among these components. Resources in different namespaces don't reference each other by their short names by default; communicating across namespaces requires addressing a service by its fully qualified name (e.g. service-name.other-namespace.svc.cluster.local).
The primary purpose of namespaces is to organize and isolate resources within distinct environments within the cluster. This segregation enhances manageability and reduces the risk of conflicts or unintended interactions between resources.
To create a namespace, simply execute the following command within your cluster:
kubectl create ns desired_namespace_name
And that's all there is to it!
Ingress Controller Configuration
In our setup, we use ingress resources to define paths for our applications. However, these resources alone cannot expose our applications to the outside world. To achieve external exposure, we require a load balancer with an external IP address. This is where the ingress controller plays a crucial role.
When deployed within the cluster, the ingress controller acts as a traffic manager. It interacts with all the ingress resources in the cluster to serve as a load balancer, directing traffic based on the configurations specified in each ingress resource. Upon deployment, the ingress controller is assigned its own external IP address, which the ingress resources connect to for routing traffic to specified services.
It's recommended to deploy ingress controllers within their dedicated namespace, where they can communicate seamlessly across all namespaces within the cluster. However, if desired, you can also deploy them within custom namespaces.
To deploy an ingress controller to your cluster, follow the link provided below:
NGINX Ingress Controller Installation Guide
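As a rough sketch of the Helm-based installation from the ingress-nginx project (the namespace below is the conventional one), deploying the controller and watching for its external IP looks something like this:

# Install the NGINX ingress controller into its own namespace
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Watch the controller's service until it receives an external IP
kubectl get service ingress-nginx-controller --namespace ingress-nginx --watch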
Once the deployment is successful and the ingress controller obtains an external IP address, you can proceed with creating your ingress resources. These resources will utilize the ingress controller to manage external traffic routing to your services.
Ingress Resource Deployment
In previous discussions about ingress controllers, we've covered a lot about routing traffic within Kubernetes clusters. Now, let's delve into the actual implementation of an Ingress resource, which plays a crucial role in directing traffic to various services within the cluster. Below is a detailed example of an Ingress resource that effectively routes traffic to two services named example-svc-1 and example-svc-2:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:                                   # Information about the Ingress resource, such as its name and annotations
  name: example-ingress
  namespace: example-ns
  annotations:                              # Key-value pairs that configure specific behaviors of the Ingress controller
    kubernetes.io/ingress.class: "nginx"            # Ingress class to use for routing traffic
    nginx.ingress.kubernetes.io/rewrite-target: /   # Rewrites matched requests to the root path ("/")
spec:                                       # Defines the rules for routing traffic
  rules:                                    # How incoming requests are routed based on the specified host and path
    - host: example.com                     # Host header that the Ingress will match against
      http:                                 # The rule applies to HTTP traffic
        paths:                              # Different paths and their corresponding backends
          - path: /service1                 # Routes requests with the path prefix "/service1" to example-svc-1
            pathType: Prefix                # Type of path matching
            backend:
              service:                      # Backend service to route traffic to
                name: example-svc-1         # Name of the Kubernetes service to route traffic to
                port:
                  number: 80                # Port number of the backend service to send traffic to
          # Routes requests with the path prefix "/service2" to example-svc-2
          - path: /service2
            pathType: Prefix
            backend:
              service:
                name: example-svc-2
                port:
                  number: 80
The Ingress resource above is a standard configuration deployed in the "example-ns" namespace, alongside the services it routes to. The ingress controller watches ingress resources in every namespace, so the resource keeps working regardless of which namespace it lives in; it relies on the controller's external IP address to route traffic from the specified domain to the defined paths.
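Deploying and verifying the resource is a kubectl apply away. A small sketch, with the file path and namespace as placeholders:

kubectl apply -f ./path/to/example-ingress.yaml --namespace=example-ns

# The ADDRESS column should show the ingress controller's external IP once routing is active
kubectl get ingress example-ingress --namespace=example-ns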
SSL Certificate Generation
Ensuring your domain's security is crucial after deployment to prevent users from encountering the "not secure" warning when accessing your URL from a browser. SSL certificates play a vital role in achieving this security. If you're hosting your domain on Google Cloud Platform (GCP), you have the option to generate a Google-issued SSL certificate easily. However, if your domain is hosted elsewhere, you'll need an alternative validation method.
In our case, we opted for Let's Encrypt, a popular choice known for its simplicity and reliability. There are two primary methods I encountered for generating SSL certificates with Let's Encrypt.
Using cert-manager:
Cert-manager is a Kubernetes-native certificate management controller. To set up cert-manager, you deploy it to your Kubernetes cluster, preferably in its own namespace as recommended in the documentation. Once cert-manager is deployed and running correctly, you can proceed to deploy your Ingress resources while referencing cert-manager for SSL certificate management. For detailed installation instructions, refer to the Cert-manager documentation.
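To give a feel for what this looks like, here is a minimal sketch of a Let's Encrypt ClusterIssuer and the corresponding additions to an Ingress resource. The issuer name, email, and secret names are placeholders, and the cert-manager documentation remains the authoritative reference:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # Contact email for expiry notices (placeholder)
    privateKeySecretRef:
      name: letsencrypt-account-key   # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # Solve HTTP-01 challenges through the NGINX ingress controller

The Ingress resource then references the issuer and adds a TLS section, so cert-manager can request the certificate and store it in the named secret:

metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls     # cert-manager stores the issued certificate here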
Using kcert:
Kcert is another method for generating SSL certificates within a Kubernetes environment. Similar to cert-manager, you deploy kcert to your Kubernetes cluster, typically in its own namespace. Once kcert is successfully deployed and operational, you integrate it with your Ingress resources for SSL certificate management. Detailed setup instructions for kcert can be found on the Kcert GitHub repository.
Conclusion
In my grand adventure of migrating from AWS to GCP, I've journeyed through the realms of databases, secret management, deployment setups, and even SSL certificates! Each step, from setting up Artifact Registry to configuring Ingress resources, has been a thrilling quest toward a cloud-native future. Now armed with knowledge and a touch of magic, I eagerly anticipate encountering even more thrilling challenges to conquer and add to our ever-expanding cloud experience. With each deployment, we continue to grow, evolve, and embrace the excitement of our cloud journey!