MERN on GKE: An End-to-End DevOps Journey with Terraform, Jenkins, GitOps & Prometheus


Deploying a modern full-stack application like one built on the MERN stack into the dynamic environment of Kubernetes can often feel like assembling a complex puzzle. In this post, I will walk through how I implemented an end-to-end DevOps pipeline to deploy a MERN application onto Google Kubernetes Engine (GKE). Rather than being a step-by-step tutorial, this post is a comprehensive documentation of how I DevOpsified a MERN application from scratch, following industry-standard practices.
1. Project Overview
Here’s the project architecture diagram, which gives a high-level overview of the DevOps pipeline for the MERN application.
The project is split into three repositories, i.e.
i. Infrastructure (Terraform) Repository,
ii. Application Code Repository, and
iii. GitOps Repository
Separating the repositories makes management easier and provides isolation for different teams. Although the application itself is a simple To-Do web application, the main goal is to implement the DevOps pipeline around the application, not to focus on the functionality of the application itself.
Tech Stack
Infrastructure:
Terraform (IaC tool for provisioning cloud resources)
Google Cloud Platform (Cloud Provider)
MongoDB Atlas (Database Service)
Google Kubernetes Engine (Managed Kubernetes Service)
Application:
React (Frontend)
Express/Node (Backend)
MongoDB (Database)
CI/CD:
Jenkins (CI Server)
ArgoCD (GitOps Controller)
Observability:
Prometheus (Metrics collection and monitoring tool)
Grafana (Visualization tool)
Loki (Log Aggregation tool)
Promtail (Log Collecting Agent)
AlertManager (Alert routing tool)
2. Infrastructure Provisioning with Terraform
When deploying any modern application, the first step is to set up the underlying infrastructure where the application will run. Infrastructure could range from virtual machines and serverless functions to container platforms or even local environments. In my case, since the MERN application is containerized and Kubernetes-native, I chose to deploy it on Google Kubernetes Engine (GKE), Google Cloud’s managed Kubernetes service.
To provision GKE and the other supporting resources, I used Terraform, the most popular Infrastructure as Code (IaC) tool. With Terraform, I was able to automate the entire infrastructure setup process, ensuring consistency, reproducibility, and ease of management. Manually creating resources via the GCP console would have been tedious, error-prone, and not scalable in production-like environments, which makes IaC an essential part of the DevOps landscape.
Resources Provisioned:
GKE Cluster
I provisioned a GKE cluster with a public API endpoint and private nodes in a custom VPC. The API endpoint was kept public just for testing from my local machine; later on I moved to a private endpoint that is securely accessed via the bastion host. I also set up a NAT gateway to enable outbound internet access for the private GKE nodes when necessary.
Additionally:
An IAM Service Account was created for the GKE nodes with the required roles, such as permission to pull images from Artifact Registry.
To secure the public API endpoint, I configured firewall rules allowing access only from my specific IP address. A simplified Terraform sketch of this setup is shown below.
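For a concrete picture, here is a minimal Terraform sketch of what such a cluster definition can look like. It is not the exact code from my module; the resource names, CIDR ranges, machine type, and the variables var.region and var.my_ip_cidr are placeholders.

```hcl
# Minimal sketch -- not the exact module code; names, CIDRs and variables are placeholders.
resource "google_compute_network" "vpc" {
  name                    = "mern-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "private" {
  name                     = "mern-private-subnet"
  region                   = var.region
  network                  = google_compute_network.vpc.id
  ip_cidr_range            = "10.0.0.0/20"
  private_ip_google_access = true
}

# Node service account (IAM roles such as artifactregistry.reader are bound elsewhere)
resource "google_service_account" "gke_nodes" {
  account_id   = "gke-node-sa"
  display_name = "GKE node service account"
}

resource "google_container_cluster" "primary" {
  name       = "mern-gke-dev"
  location   = var.region
  network    = google_compute_network.vpc.id
  subnetwork = google_compute_subnetwork.private.id

  remove_default_node_pool = true
  initial_node_count       = 1

  ip_allocation_policy {}  # VPC-native (alias IP) networking, required for private nodes

  # Private nodes with a (temporarily) public API endpoint
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  # Only my IP may reach the public endpoint
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = var.my_ip_cidr
      display_name = "local-machine"
    }
  }

  # Needed later for the GKE ingress controller
  addons_config {
    http_load_balancing {
      disabled = false
    }
  }
}

resource "google_container_node_pool" "primary_nodes" {
  name       = "default-pool"
  location   = var.region
  cluster    = google_container_cluster.primary.name
  node_count = 2

  node_config {
    machine_type    = "e2-medium"
    service_account = google_service_account.gke_nodes.email
    oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}
```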
Jenkins CI Server
I provisioned a Jenkins server on a dedicated VM using Terraform, to ensure automation and reproducibility of the build environment as well. Let's go for full automation! I also created an IAM service account with the necessary roles for Jenkins and configured firewall rules to allow only controlled access.
Bastion Host
The bastion host is the machine from which we can access our private resources in the cloud, and it is the only machine allowed to reach them. It is created in the same VPC as the GKE cluster because it needs to communicate with the cluster's API endpoint once the cluster is made private later on. I created a service account for the bastion host and configured firewall rules to allow SSH access to it only from my local machine, for security.
I built a reusable GKE module, so the same code can easily be adapted for multiple environments (dev, staging, prod) by simply changing the configuration variables in the respective *.tfvars files. To ensure environment isolation, each environment has its own separate directory. For state management:
A remote backend on Google Cloud Storage (GCS) was configured to store Terraform state files.
State locking (automatic with GCS) prevents concurrent modifications to the same state file, avoiding race conditions during simultaneous Terraform operations. A minimal backend configuration is sketched below.
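Here is a minimal sketch of what the backend block looks like for one environment; the bucket name and prefix are placeholders, not the actual values from my configuration.

```hcl
# Remote state in a pre-created GCS bucket (bucket name is a placeholder).
# GCS locks the state automatically during operations, so no extra locking setup is needed.
terraform {
  backend "gcs" {
    bucket = "mern-terraform-state"   # shared state bucket
    prefix = "gke/dev"                # one prefix per environment
  }
}
```

Each environment directory points at its own prefix and is applied with its own variable file, e.g. terraform apply -var-file=dev.tfvars.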
You can find all the Terraform code in this repository; check it out to see the detailed implementation of what I explained above.
3. Continuous Integration with Jenkins
Once the infrastructure was ready, the next step was to set up the CI/CD pipeline to automate the building, testing, and deployment of the application. While there wasn't any strict requirement for choosing Jenkins as the CI server for this project, I preferred Jenkins because of its flexibility and extensive plugin ecosystem. Although running pipelines on the CI server itself is not the preferred approach, I didn't configure any worker nodes in order to save costs in GCP. Here, Jenkins runs two separate pipelines: one for the Terraform code and another for the application code.
Terraform Pipeline
The first step is to set up the credentials that Terraform will use to provision the resources in GCP. I used the same service account as the Jenkins VM and configured the credentials in Jenkins for use in the pipeline.
One of the interesting challenges here was to make sure the pipeline did not blindly apply or destroy infrastructure without manual approval. To solve this, I used Jenkins' built-in input directive, which pauses the pipeline and requires explicit manual confirmation before proceeding to the terraform apply or terraform destroy stages. A parameterized build is configured to select which environment [dev, staging or prod] and which operation [apply or destroy] to perform. I have also set up a webhook on GitHub to automatically trigger the pipeline whenever a code change occurs in the repository. Here's an example of the pipeline waiting for user input before applying infrastructure changes:
This mechanism helps prevent accidental or unintended infrastructure provisioning or destruction.
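To make this concrete, here is a simplified sketch of such a Jenkinsfile. It is not my exact pipeline: the stage layout, directory structure, and the credential ID are illustrative assumptions.

```groovy
// Sketch of the Terraform pipeline (directory layout and credential ID are placeholders)
pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT', choices: ['dev', 'staging', 'prod'], description: 'Target environment')
        choice(name: 'ACTION', choices: ['apply', 'destroy'], description: 'Terraform operation')
    }
    environment {
        // Secret-file credential holding the GCP service account key
        GOOGLE_APPLICATION_CREDENTIALS = credentials('gcp-service-account-key')
    }
    stages {
        stage('Plan') {
            steps {
                dir("environments/${params.ENVIRONMENT}") {
                    sh 'terraform init -input=false'
                    sh "terraform plan -var-file=${params.ENVIRONMENT}.tfvars -out=tfplan"
                }
            }
        }
        stage('Approval') {
            steps {
                // Pause until a human explicitly confirms the change
                input message: "Run terraform ${params.ACTION} on ${params.ENVIRONMENT}?"
            }
        }
        stage('Apply or Destroy') {
            steps {
                dir("environments/${params.ENVIRONMENT}") {
                    script {
                        if (params.ACTION == 'apply') {
                            sh 'terraform apply -input=false tfplan'
                        } else {
                            sh "terraform destroy -auto-approve -var-file=${params.ENVIRONMENT}.tfvars"
                        }
                    }
                }
            }
        }
    }
}
```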
Application Pipeline
The application pipeline automates the build and deployment process of the MERN application, i.e. it will:
Build Docker images for both the React frontend and Node.js backend.
Push the built images to Google Artifact Registry.
Update the image tags in the values.yaml file in the GitOps repository, so that ArgoCD can detect and deploy the new version to GKE.
To keep things simple, I intentionally did not include DevSecOps practices such as:
Trivy (image scanning),
SonarQube (code quality analysis),
OWASP dependency checks, etc.
In this pipeline, the same service account used in the Terraform pipeline is used for pushing images to Artifact Registry. A multi-stage Docker build is implemented for the application to reduce the image size significantly. A simplified sketch of the pipeline is shown below.
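Here is a hedged sketch of what the application pipeline can look like. The Artifact Registry path, GitOps repository URL, values.yaml location, and the sed-based tag update are illustrative placeholders, not the exact implementation from my repository.

```groovy
// Illustrative sketch of the application pipeline (registry path, repo URL and paths are placeholders)
pipeline {
    agent any
    environment {
        REGISTRY = 'us-central1-docker.pkg.dev/my-project/mern-repo' // placeholder
        TAG      = "${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build & Push Images') {
            steps {
                sh '''
                  gcloud auth configure-docker us-central1-docker.pkg.dev --quiet
                  docker build -t ${REGISTRY}/frontend:${TAG} ./frontend
                  docker build -t ${REGISTRY}/backend:${TAG}  ./backend
                  docker push ${REGISTRY}/frontend:${TAG}
                  docker push ${REGISTRY}/backend:${TAG}
                '''
            }
        }
        stage('Update GitOps Repo') {
            steps {
                // In the real pipeline, push credentials are injected via Jenkins credentials
                sh '''
                  git clone https://github.com/<user>/mern-gitops.git
                  cd mern-gitops
                  git config user.email "jenkins@ci" && git config user.name "jenkins"
                  # bump the image tags that ArgoCD watches (both images share the build number)
                  sed -i "s/^  tag: .*/  tag: ${TAG}/" mern-app/values.yaml
                  git commit -am "ci: update image tag to ${TAG}"
                  git push origin main
                '''
            }
        }
    }
}
```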
Here’s a glimpse of the application CI/CD pipeline in action:
4. GitOps with ArgoCD
The GitOps repository can be updated in either of two ways:
When the Application CI pipeline runs, it will update the GitOps repository with the newly built Docker images.
The DevOps team can directly modify the deployment manifests when adjusting resource configurations or performing other such tasks.
I have used ArgoCD (a popular GitOps controller), which automatically syncs the actual cluster state with the declarative manifests present in Git. ArgoCD is deployed in the same cluster where the MERN application is going to be deployed, so it continuously monitors the repository and performs reconciliation. All the setup instructions to follow after the GKE cluster has been provisioned are documented in this file. Here's a glimpse of the ArgoCD deployment:
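For context, an ArgoCD Application tracking the GitOps repository typically looks something like the sketch below; the repository URL, path, and namespaces are placeholders rather than my actual values.

```yaml
# Hedged sketch of an ArgoCD Application pointing at the GitOps repo (placeholder values)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mern-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/mern-gitops.git
    targetRevision: main
    path: mern-app              # directory holding the manifests / Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: mern
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```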
I have set up the MongoDB database outside the cluster in MongoDB Atlas, following industry-standard practice. It is also necessary to manage the DB secrets properly, since we can't push them directly to GitHub. So, I've used the SealedSecrets controller, which encrypts the actual Kubernetes Secrets and generates a new encrypted resource that can be safely pushed to GitHub. The encrypted resource, called a SealedSecret, can only be decrypted by the SealedSecrets controller running inside the same cluster, which converts it back into the actual Kubernetes Secret.
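A typical SealedSecrets workflow looks roughly like this; the secret name, namespace, and connection string are placeholders, and the controller is assumed to run in its default namespace.

```bash
# 1. Create the plain Kubernetes Secret locally (never committed to Git)
kubectl create secret generic mongodb-credentials \
  --namespace mern \
  --from-literal=MONGODB_URI='mongodb+srv://<user>:<password>@cluster.mongodb.net/todos' \
  --dry-run=client -o yaml > mongodb-secret.yaml

# 2. Encrypt it with the public key of the SealedSecrets controller running in the cluster
kubeseal --format yaml < mongodb-secret.yaml > mongodb-sealedsecret.yaml

# 3. Commit only the SealedSecret; the controller decrypts it back into a Secret in-cluster
git add mongodb-sealedsecret.yaml
```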
The GKE ingress controller is automatically deployed and managed by GKE (because I had enabled http_load_balancing during cluster creation), and it routes traffic based on the rules defined in the Ingress resource. The GKE ingress controller also provisions an HTTP(S) load balancer endpoint where external users access the application. Additionally, I have configured my custom domain to serve the application instead of using the load balancer's public IP.
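Here is a rough sketch of such an Ingress resource as the GKE ingress controller would consume it; the host name, service names, ports, and paths are assumptions, not my exact manifest.

```yaml
# Sketch of the Ingress handled by the GKE ingress controller (placeholder host/services)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mern-ingress
  namespace: mern
  annotations:
    kubernetes.io/ingress.class: "gce"   # external HTTP(S) load balancer on GKE
spec:
  rules:
    - host: todo.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 5000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
```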
In this way, ArgoCD ensures seamless, automated deployment of the latest stable version of the MERN application to the GKE cluster without any manual intervention.
5. Monitoring with Prometheus, Grafana and Loki
Now the MERN application is fully functional and working, but that might not always be the case. So, to monitor the application and react quickly to any unforeseen situations, I configured an observability stack with:
Prometheus: Metrics collection from nodes and Kubernetes components.
Loki: Log aggregation system.
Promtail: Log shipping agent to collect logs from pods.
Grafana: Visualization of both metrics and logs for easy analysis.
AlertManager: Routing of alerts to email (my Gmail account in this case) based on defined alerting rules.
The main drawback of the application was that it didn't expose a /metrics endpoint, so I couldn't collect any application-specific metrics such as request rate, average response time, and so on. Right now, only infrastructure-level metrics are scraped by Prometheus using node-exporter and kube-state-metrics, deployed through the kube-prometheus-stack Helm chart from prometheus-community. Similarly, Loki and Promtail are also deployed using their respective Helm charts from grafana. Loki is deployed in SingleBinary mode, which is sufficient for a small application like this one.
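The installation can be summarized roughly as follows; the release names, namespace, and the extra flags are illustrative, and in practice most of the configuration lives in values files rather than --set flags.

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Prometheus, Grafana, AlertManager, node-exporter and kube-state-metrics in one chart
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Loki in single-binary mode (storage and replica settings usually come from a values file)
helm install loki grafana/loki \
  --namespace monitoring \
  --set deploymentMode=SingleBinary \
  --set loki.commonConfig.replication_factor=1

# Promtail as the log-shipping agent on every node
helm install promtail grafana/promtail --namespace monitoring
```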
I have configured variables in Grafana for dynamic dashboards and easy monitoring of different pods, nodes, etc. Here's a glimpse of the Grafana dashboard:
And since we can't expose Grafana or Prometheus publicly to end users [although Grafana has an authentication mechanism], I have set up SSH port forwarding through the bastion host to access Grafana from my local machine.
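The access path looks roughly like this; the service name, bastion instance name, and zone are placeholders.

```bash
# On the bastion host: forward the in-cluster Grafana service to a local port
kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80

# On my local machine: tunnel that port over SSH via the bastion
gcloud compute ssh bastion-host --zone us-central1-a -- -L 3000:localhost:3000 -N
# Grafana is now reachable at http://localhost:3000
```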
Finally, alerting rules are configured in Prometheus that fire alerts to AlertManager, and AlertManager routes the alerts to the appropriate notification channel based on the configured routing rules. Everything related to this configuration can be found in this file. I configured minimal alerts without using any custom template, which can be seen below:
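For illustration only (not the exact rules from my configuration file), a minimal alerting rule deployed as a PrometheusRule might look like this:

```yaml
# Fires when a node's root filesystem is almost full (placeholder names and thresholds)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-alerts
  namespace: monitoring
  labels:
    release: monitoring        # so kube-prometheus-stack picks the rule up
spec:
  groups:
    - name: node.rules
      rules:
        - alert: NodeDiskAlmostFull
          expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) < 0.10
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} has less than 10% disk space left"
```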
Final Thoughts
This project was a complete implementation of a DevOps pipeline from scratch: Terraform for resource provisioning, CI/CD pipelines for automated builds and tests, GitOps with ArgoCD for declarative CD, and an observability stack to monitor application performance, enabling quick incident response and disaster recovery. Every stage contributed to building an automated, reliable, and resilient system. There are still some practices I intentionally left out because the application's functionality is very simple, such as DevSecOps practices (image scanning, file scanning, vulnerability testing, etc.), enhanced secrets management (using HashiCorp Vault), and an advanced logging stack (ELK, EFK). Hopefully, all of these will be future additions to this project or another one.
I hope this post provided a good understanding of how a DevOps pipeline is implemented for an application right from scratch. If you want to try it out, feel free to check out the project repositories linked above. With the architecture diagram above and the code, you should be able to set up the project yourself. Thanks!
