From Code to Cart: Deploying an E-Commerce Application on Kubernetes

Every great application begins small. In our last article, we learned how to deploy locally, using Docker Compose to manage our e-commerce stack. However, a successful business can't operate from a laptop. Now, we're ready for the real challenge: launching our e-commerce platform live.
Instead of using a managed Kubernetes service like GKE, we'll use kOps. This open-source tool lets us build and manage the Kubernetes cluster ourselves, giving us more control and a deeper understanding of what's running. kOps simplifies setting up a production-grade cluster on Google Cloud by automating the creation of virtual machines, networking, and firewall rules.
Using kOps is a great learning opportunity. It helps us understand how to build a Kubernetes cluster from scratch, which is valuable for future projects.
Let's start deploying a production-ready e-commerce store with kOps on Google Cloud.
Creating Our Cluster Using kOps
kOps is the main command-line tool we'll use in this guide, so it's important to have it installed and working correctly. While gcloud and kubectl are already set up, kOps needs a quick download.
Please follow the official kOps installation instructions to prepare the tool in your Cloud Shell environment. You can find them at this link:
https://kops.sigs.k8s.io/install/
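For reference, at the time of writing the Linux installation from that page boils down to downloading the latest release binary and putting it on the PATH; double-check the link above in case the steps have changed:
$ curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x kops
$ sudo mv kops /usr/local/bin/kops
$ kops version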
Before creating our cluster, we need to ensure our tools are properly authenticated. Although Cloud Shell is pre-configured for gcloud and kubectl, it's good practice to explicitly set up Application Default Credentials. This ensures that kOps and other tools have the permissions needed to create and manage resources for us.
$ gcloud auth application-default login
This command will set up the credentials that kOps will use to interact with the Google Cloud APIs. Once you see the "Credentials saved to file" confirmation, you're ready to continue.
With kOps authenticated, the first important step is to create a state store. This is a Google Cloud Storage bucket that kOps will use to store the configuration and state of our Kubernetes cluster. Think of it as a central source of truth and a simple database for our infrastructure.
To create this bucket, we'll use the gsutil command, a command-line tool for Google Cloud Storage that is already installed in Cloud Shell.
First, let's set a few environment variables to make our commands cleaner and less error-prone. Remember to choose a globally unique bucket name.
$ export KOPS_STATE_STORE=gs://<your-unique-kops-state-bucket>
$ export CLUSTER_NAME=e-commerce.k8s.local
$ gsutil mb $KOPS_STATE_STORE
After the bucket is created, we'll configure kOps to use it by default. This saves us from having to type the --state flag for every command.
$ echo "kops_state_store: ${KOPS_STATE_STORE}" >> ~/.kops.yaml
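As an optional sanity check, kops should now be able to reach the state store without any extra flags; at this point it simply reports that no clusters exist yet (the exact wording may differ):
$ kops get clusters
No clusters found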
Defining the Kubernetes Cluster
Now we reach the heart of the process. The kops create cluster command sets up the blueprint for our Kubernetes cluster using our chosen name and parameters. It creates a configuration file saved in our Google Cloud Storage bucket.
This command is declarative: it specifies what we want but does not create any resources immediately. This allows us to review and adjust the configuration before provisioning the infrastructure.
To keep it simple and get our e-commerce app running quickly, we'll create a basic two-node cluster: one node for the control plane and one worker node.
$ kops create cluster \
    --name=${CLUSTER_NAME} \
    --zones=us-central1-a \
    --node-count=1 \
    --node-size=e2-medium \
    --master-size=e2-medium \
    --cloud=gce \
    --kubernetes-version=1.32.4 \
    --ssh-public-key=~/.ssh/id_rsa.pub
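Because nothing has been provisioned yet, we can review the generated configuration and tweak it before committing. The instance group name below is the one kOps generates for our zone; if in doubt, list the groups first:
$ kops get instancegroups --name ${CLUSTER_NAME}
$ kops get cluster --name ${CLUSTER_NAME} -o yaml # full cluster spec
$ kops edit ig --name ${CLUSTER_NAME} nodes-us-central1-a # optionally adjust the workers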
Provisioning the Infrastructure
With our cluster configuration saved, we'll use the kops update cluster command to provision the resources on Google Cloud. This creates the virtual machines, configures the network, sets up firewalls, and starts the Kubernetes components. The --yes flag confirms the changes; without it, kops only shows a preview of what it would do.
$ kops update cluster --name ${CLUSTER_NAME} --yes
Once the kops update cluster command is done, our infrastructure is set up, but it takes a few more minutes for the Kubernetes control plane and nodes to stabilize. This is when we verify everything.
$ kops validate cluster --name ${CLUSTER_NAME} --wait 10m
The --wait 10m flag allows kOps to keep checking for up to 10 minutes, giving the cluster time to become ready. We'll see a success message like "The cluster is ready."
After that, we use kubectl to verify that our nodes are running, which also confirms that our local kubectl is correctly set up and can connect to the cluster.
$ kubectl get nodes
We might encounter an error here; it's a common issue right after setting up a cluster. When running kubectl get nodes, the cluster's API might seem unreachable. This happens because our local kubectl isn't yet set up to authenticate with the cluster.
To resolve this, use kops export kubecfg with the --admin flag to generate the correct configuration file for full administrative access.
$ kops export kubecfg --name ${CLUSTER_NAME} --admin
This command updates our local ~/.kube/config file with the necessary credentials, using Google Cloud authentication to securely connect to the Kubernetes API server for cluster management.
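A quick way to confirm the export worked is to check which context kubectl is now pointing at; it should match our cluster name:
$ kubectl config current-context
e-commerce.k8s.local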
$ kubectl get nodes
NAME                               STATUS   ROLES           AGE   VERSION
control-plane-us-central1-a-xhkq   Ready    control-plane   9d    v1.32.4
nodes-us-central1-a-vl30           Ready    node            9d    v1.32.4
Bringing Our Application to Life
With our Kubernetes cluster now validated and ready, we have a fully operational environment. It's time to take the next important step and bring our application to life by deploying the necessary Kubernetes objects for our sample e-commerce store.
To launch our e-commerce application on Kubernetes, we'll define the necessary objects. We'll start with two Services: a LoadBalancer Service that exposes the application to the outside world, and a headless Service for the database. The headless Service and StatefulSet together give each database instance a stable network identity and storage, which is crucial for data integrity and replication.
For configuration, we'll create two ConfigMaps: one holding the database initialization script for the StatefulSet, and another holding application settings for the Deployment. Sensitive data will be handled by two Secrets, one for the database StatefulSet and another for the application Deployment. The main workloads will be managed by an application Deployment and a database StatefulSet, as shown in the manifests below.
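Each manifest below can be saved to a file and applied with kubectl apply. For example, assuming we collect them all in a local k8s/ directory (a naming choice for this walkthrough, not a requirement):
$ kubectl apply -f k8s/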
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  type: LoadBalancer
A Service named app-service with type: LoadBalancer makes the application accessible externally. It sets up a Google Cloud Load Balancer with a public IP. The selector matches the app: app label to connect to the application's Pods. Incoming TCP traffic on port 8080 is directed to port 80 of the Pods, ensuring public access to our e-commerce front-end.
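Once Google Cloud assigns the external IP (this can take a minute or two), we can grab it and hit the store from anywhere; a quick sketch:
$ EXTERNAL_IP=$(kubectl get svc app-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl -I http://${EXTERNAL_IP}:8080 # should return the storefront's HTTP response headers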
$ kubectl get svc,secrets,configmaps
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
service/app-service   LoadBalancer   100.67.75.141   34.31.226.55   8080:31822/TCP   5h27m
service/db            ClusterIP      None            <none>         3306/TCP         5h27m
service/kubernetes    ClusterIP      100.64.0.1      <none>         443/TCP          9d

NAME                TYPE     DATA   AGE
secret/app-secret   Opaque   2      5h27m
secret/db-secret    Opaque   3      5h27m
secret/regcred      Opaque   1      5h28m

NAME                         DATA   AGE
configmap/app-config         2      5h27m
configmap/db-init-script     1      5h27m
configmap/kube-root-ca.crt   1      9d
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db # headless Service name for the database
  DB_NAME: ecomdb
A ConfigMap named app-config stores non-confidential configuration data as key-value pairs, separating it from the application's code and Pod definition. It includes DB_HOST set to db and DB_NAME set to ecomdb. These values become environment variables in the application's Pod, helping it connect to the database easily.
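Once the Deployment defined further below is running, we can confirm these values really do arrive as environment variables inside the Pod:
$ kubectl exec deploy/app-deployment -- printenv DB_HOST DB_NAME
db
ecomdb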
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_USER: ZWNvbXVzZXI= # Base64 encoded "ecomuser"
  DB_PASSWORD: ZWNvbXBhc3N3b3Jk # Base64 encoded "ecompassword"
A Secret named app-secret holds sensitive data for the application, such as passwords and API keys, stored as Base64-encoded values. It contains DB_USER and DB_PASSWORD for the database, allowing the application to access its credentials without embedding them in the code. Kubernetes decodes these values automatically when injecting them into the application's Pod. Keep in mind that Base64 is an encoding, not encryption.
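The encoded strings above are plain Base64; they can be produced with the base64 utility (the -n flag matters, otherwise a trailing newline gets encoded too):
$ echo -n 'ecomuser' | base64
ZWNvbXVzZXI=
$ echo -n 'ecompassword' | base64
ZWNvbXBhc3N3b3Jk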
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: app
          image: ghcr.io/kibablu/learning-app-ecommerce:master
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: DB_HOST
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: DB_USER
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secret
                  key: DB_PASSWORD
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: DB_NAME
The application's Deployment is set up to maintain one running replica using an image from a private registry. We could also use a publicly available image from a registry like Docker Hub, which would remove the need for the imagePullSecrets section. It uses the regcred secret for image access and listens on port 80. Database connection settings are fetched from the app-config ConfigMap and the app-secret Secret, providing the necessary details for database connectivity.
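The regcred secret referenced above is a standard Docker registry secret. For an image hosted on GHCR it can be created along these lines, with the placeholders filled in with your own GitHub username and a personal access token:
$ kubectl create secret docker-registry regcred \
    --docker-server=ghcr.io \
    --docker-username=<your-github-username> \
    --docker-password=<your-personal-access-token>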
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
app-deployment-f7c654c6-rh6ps   1/1     Running   0          5h26m
db-statefulset-0                1/1     Running   0          5h26m
apiVersion: v1
kind: Service
metadata:
  name: db # used by the StatefulSet and the app's DB_HOST
spec:
  selector:
    app: db
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  clusterIP: None
A Service is configured for the database as a headless Service by setting clusterIP: None. This allows direct connections to individual Pods instead of using a single load-balanced IP. The application uses the Service name db as the database hostname. The selector connects the Service to the database Pods with the app: db label, ensuring the application can reliably communicate with the correct database instance.
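To see the headless Service in action, we can resolve the db name from inside the cluster; because there is no cluster IP, DNS answers with the database Pod's address directly (busybox here is just a throwaway test image):
$ kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup db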
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: dmVyeXNlY3JldHJvb3RwYXNzd29yZA== # Base64 encoded "verysecretrootpassword"
  MYSQL_USER: ZWNvbXVzZXI= # Base64 encoded "ecomuser"
  MYSQL_PASSWORD: ZWNvbXBhc3N3b3Jk # Base64 encoded "ecompassword"
A Secret named db-secret stores the database credentials, including the root password and the e-commerce application's username and password, all encoded in Base64. This Secret is used by the database StatefulSet to initialize the database container with these credentials at startup.
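It's worth remembering how thin this protection is: anyone with read access to the Secret can decode the values, which is why we flag Base64 as "not fully secure" at the end of this article:
$ kubectl get secret db-secret -o jsonpath='{.data.MYSQL_USER}' | base64 -d
ecomuser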
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-init-script
data:
  init.sql: | # content of our db-load-script.sql
    USE ecomdb;
    CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment, Name varchar(255) default NULL, Price varchar(255) default NULL, ImageUrl varchar(255) default NULL, PRIMARY KEY (id)) AUTO_INCREMENT=1;
    INSERT INTO products (Name, Price, ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");
A ConfigMap named db-init-script stores the database initialization script. The data field contains SQL code that creates a products table and inserts sample data. This ConfigMap is mounted into the database StatefulSet, so the script runs automatically at startup, preparing the database with a schema and sample data.
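Once the StatefulSet below is running, we can check that the init script actually seeded the table, using the mariadb client that ships in the image (credentials are the ones from db-secret):
$ kubectl exec db-statefulset-0 -- \
    mariadb -uecomuser -pecompassword ecomdb -e 'SELECT Name, Price FROM products LIMIT 3;'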
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-statefulset
spec:
  serviceName: db # Must match the Headless Service name
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      volumes:
        - name: db-init-volume
          configMap:
            name: db-init-script
      containers:
        - name: db
          image: mariadb:10.6
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_DATABASE
              value: ecomdb
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: MYSQL_USER
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: MYSQL_PASSWORD
          volumeMounts:
            - name: db-data # matches the volumeClaimTemplates name
              mountPath: /var/lib/mysql
            - name: db-init-volume # Mount the ConfigMap volume
              mountPath: /docker-entrypoint-initdb.d/ # Crucial path for MariaDB init scripts
              readOnly: true
  volumeClaimTemplates: # handles persistent storage for the database
    - metadata:
        name: db-data # corresponds to the volumeMounts name for persistent data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "balanced-csi"
        resources:
          requests:
            storage: 10Gi
A StatefulSet manages our database with a stable identity and persistent storage. It uses the db headless Service for network identity, gets credentials from the db-secret Secret, and mounts the db-init-script ConfigMap for database setup. The volumeClaimTemplates section creates a PersistentVolumeClaim for the Pod, and the balanced-csi storageClassName dynamically provisions a persistent disk to back it, keeping data safe even if the Pod restarts.
$ kubectl get storageclass
NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
balanced-csi (default)   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   9d
ssd-csi                  pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   9d
standard                 kubernetes.io/gce-pd    Delete          Immediate              false                  9d
standard-csi             pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   9d
$ kubectl get pvc
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
db-data-db-statefulset-0   Bound    pvc-a128846e-caaa-4761-b7de-5ec3d7fc5e93   10Gi       RWO            balanced-csi   <unset>                 3d4h
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-a128846e-caaa-4761-b7de-5ec3d7fc5e93   10Gi       RWO            Delete           Bound    default/db-data-db-statefulset-0   balanced-csi   <unset>                          3d4h
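To convince ourselves the storage really is persistent, we can delete the database Pod and watch the StatefulSet recreate it under the same name, reattached to the same db-data-db-statefulset-0 claim with the data intact:
$ kubectl delete pod db-statefulset-0
$ kubectl get pods -w # db-statefulset-0 comes back and returns to Running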
Congratulations! We've successfully deployed an e-commerce app on a self-managed Kubernetes cluster using kops, transitioning from a local docker-compose setup to a production environment. However, our secrets aren't truly secure with Base64 encoding alone. In the next article, we'll enhance security and explore advanced topics like using a database Operator and adopting a GitOps workflow for automation. Thank you for following along; we'll continue building a more robust pipeline there.