How I Chained MySQL and Node.js with systemd and Pulumi — A Real-World AWS Deployment Story


When you’re deploying applications across multiple servers in production, timing is everything. You don’t want your Node.js app to go live before its database is ready. You don’t want to SSH into boxes to start services manually. And most of all, you don’t want brittle infrastructure that breaks when rebooted.
I recently faced this challenge while building a simple multi-server setup on AWS: a MySQL database on one EC2 instance and a Node.js application on another. The catch? The app can’t start unless the database is up and reachable. I wanted this to be fully automated, resilient, and maintainable.
Enter: systemd, Pulumi, and a little shell scripting magic.
The Problem: Ordering Matters
Imagine you’ve got:
A MySQL server in a private subnet.
A Node.js app server in a public subnet.
The app connects to MySQL at startup.
If MySQL isn’t ready, the app crashes. Worse, systemd doesn’t know that the database is external or that it needs to wait.
I needed a way to:
Make Node.js wait for MySQL.
Set this up automatically on new servers.
Keep everything cloud-native and infrastructure-as-code friendly.
Flow Diagram
The Tools: AWS + Pulumi + systemd
Here’s what I used:
AWS EC2: Two instances in separate subnets (private for DB, public for app).
Pulumi: My weapon of choice for Infrastructure as Code (Python flavor).
systemd: To manage services, dependencies, and boot-time automation.
A shell script: To check if MySQL is up before Node.js starts.
Let’s break it down.
Building the Infrastructure with Pulumi
I created a Pulumi project that provisions:
A VPC with public and private subnets.
An internet gateway, NAT gateway, and routing.
Security groups: one for the MySQL instance, one for the Node.js server.
Two EC2 instances: one for the DB, one for the app.
Pulumi handles outputting the IPs and even auto-generates my SSH config to tunnel into the private DB instance via the public Node.js host. Sweet.
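The full program is a bit longer, but here is a condensed sketch of the core resources, assuming the pulumi_aws provider. The CIDR blocks, AMI ID, and resource names are placeholders rather than my exact values, and the route tables and NAT gateway are omitted for brevity:

import pulumi
import pulumi_aws as aws

# Network: one VPC, a public subnet for the app, a private subnet for the DB
vpc = aws.ec2.Vpc("app-vpc", cidr_block="10.0.0.0/16", enable_dns_hostnames=True)
public_subnet = aws.ec2.Subnet("public", vpc_id=vpc.id, cidr_block="10.0.1.0/24",
                               map_public_ip_on_launch=True)
private_subnet = aws.ec2.Subnet("private", vpc_id=vpc.id, cidr_block="10.0.2.0/24")

igw = aws.ec2.InternetGateway("igw", vpc_id=vpc.id)
# (Route tables and the NAT gateway for the private subnet go here.)

# Security groups: the app is reachable from outside, MySQL only from the app's SG
app_sg = aws.ec2.SecurityGroup("app-sg", vpc_id=vpc.id,
    ingress=[{"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]},
             {"protocol": "tcp", "from_port": 80, "to_port": 80, "cidr_blocks": ["0.0.0.0/0"]}],
    egress=[{"protocol": "-1", "from_port": 0, "to_port": 0, "cidr_blocks": ["0.0.0.0/0"]}])

db_sg = aws.ec2.SecurityGroup("db-sg", vpc_id=vpc.id,
    ingress=[{"protocol": "tcp", "from_port": 3306, "to_port": 3306, "security_groups": [app_sg.id]}],
    egress=[{"protocol": "-1", "from_port": 0, "to_port": 0, "cidr_blocks": ["0.0.0.0/0"]}])

# Two instances; their user_data scripts install MySQL / Node.js and the systemd units
db_instance = aws.ec2.Instance("mysql-db", ami="ami-xxxxxxxx", instance_type="t3.micro",
                               subnet_id=private_subnet.id, vpc_security_group_ids=[db_sg.id])
app_instance = aws.ec2.Instance("node-app", ami="ami-xxxxxxxx", instance_type="t3.micro",
                                subnet_id=public_subnet.id, vpc_security_group_ids=[app_sg.id])

pulumi.export("app_public_ip", app_instance.public_ip)
pulumi.export("db_private_ip", db_instance.private_ip)

Locking the MySQL security group's ingress to the app's security group, rather than to a CIDR range, is what keeps the database reachable only from the Node.js host.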
The MySQL Health Check Script
Here’s the magic trick: before starting the Node.js app, I run a shell script that checks if the MySQL port (3306) is reachable.
#!/bin/bash
# Wait until the MySQL port accepts TCP connections before letting the app start.
DB_HOST="$DB_PRIVATE_IP"   # injected as an environment variable on the instance
DB_PORT=3306
MAX_RETRIES=30
RETRY_INTERVAL=10

check_mysql() {
    # nc -z only tests that the port is reachable; it sends no data
    nc -z "$DB_HOST" "$DB_PORT"
}

retry_count=0
while [ "$retry_count" -lt "$MAX_RETRIES" ]; do
    if check_mysql; then
        echo "Successfully connected to MySQL at $DB_HOST:$DB_PORT"
        exit 0
    fi
    echo "Retry $((retry_count + 1))/$MAX_RETRIES..."
    sleep "$RETRY_INTERVAL"
    retry_count=$((retry_count + 1))
done

echo "Failed to connect to MySQL after $MAX_RETRIES attempts"
exit 1
This script is packaged into a mysql-check.service unit that runs on boot, thanks to systemd.
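The exact unit file isn't shown here, but a minimal version of mysql-check.service might look like this (the script path and EnvironmentFile location are assumptions):

[Unit]
Description=Wait for MySQL to become reachable
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Assumed locations; DB_PRIVATE_IP is read from the environment file
EnvironmentFile=/etc/app/app.env
ExecStart=/usr/local/bin/mysql-check.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Type=oneshot matters here: systemd considers the unit started only when the script has exited, so anything ordered after it genuinely waits for the check to pass, and RemainAfterExit=yes keeps it marked active afterwards.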
Chaining Services with systemd
Here's the best part: I chained nodejs-app.service to wait for mysql-check.service.
[Unit]
Description=Node.js Application
After=mysql-check.service
Requires=mysql-check.service
This ensures that the Node.js app won’t even attempt to start until the MySQL check passes.
No more race conditions. No more silent failures.
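For completeness, here is a sketch of what the full nodejs-app.service could look like once the [Service] and [Install] sections are filled in; the user, working directory, and paths are assumptions:

[Unit]
Description=Node.js Application
After=mysql-check.service
Requires=mysql-check.service

[Service]
# Assumed user and paths; adjust to your layout
User=nodeapp
WorkingDirectory=/opt/app
EnvironmentFile=/etc/app/app.env
ExecStart=/usr/bin/node /opt/app/server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target

Because of Requires=, the app unit isn't started at all if the health check fails; After= adds the ordering so it also waits for the check to finish.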
The Node.js App
The app is super simple — an Express server that connects to MySQL using a connection pool:
// Assumes the mysql2 driver; the host placeholder is filled in from the instance's environment
const mysql = require('mysql2');

const pool = mysql.createPool({
  host: '<DB_PRIVATE_IP>',
  user: 'app_user',
  password: 'your_secure_password',
  database: 'app_db'
});
On a GET request to /, it checks whether it can execute a basic query. If yes, it returns a success message.
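A minimal sketch of that route, assuming Express and the mysql2 pool shown above (the real handler may differ in detail):

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  // SELECT 1 is a cheap query that proves the pooled connection works
  pool.query('SELECT 1', (err) => {
    if (err) {
      return res.status(500).send('Database connection failed');
    }
    res.send('Connected to MySQL successfully!');
  });
});

app.listen(3000, () => console.log('App listening on port 3000'));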
Deploying It All
The entire process looks like this:
Run pulumi up.
Infrastructure is created.
EC2 instances are initialized with user data (Node.js install, script copying, env vars).
The MySQL service is set up and configured to allow remote access.
The Node.js instance runs the MySQL check, waits, then starts the app.
I even set up system users, directory permissions, and journal logging for everything — so it’s secure and observable.
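To give a flavor of the user-data step, here is a trimmed-down sketch of what the app instance's script might do. The package names, paths, and env-file location are assumptions, and the real script also handles the system user, permissions, and unit files:

#!/bin/bash
# Install Node.js and netcat (Amazon Linux 2023 package names assumed)
dnf install -y nodejs nmap-ncat

# Expose the DB address to the health-check script and the app
mkdir -p /etc/app
echo "DB_PRIVATE_IP=<DB_PRIVATE_IP>" > /etc/app/app.env  # Pulumi interpolates the real IP

# Install the health-check script and start everything in dependency order
install -m 755 /tmp/mysql-check.sh /usr/local/bin/mysql-check.sh
systemctl daemon-reload
systemctl enable --now mysql-check.service nodejs-app.service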
Why This Pattern Rocks
Idempotent: Reboots don’t break anything.
Automated: Everything spins up from code.
Resilient: Node.js only starts when the DB is ready.
Transparent: systemd logs make debugging easy.
This pattern is perfect for any microservice or distributed architecture where service dependencies matter — especially if those services are on different machines.
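In practice, debugging comes down to the standard systemd tooling, using the unit names defined above:

# Did the health check pass, and when?
systemctl status mysql-check.service
journalctl -u mysql-check.service -b

# Follow the app's logs in real time
journalctl -u nodejs-app.service -f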
Summary
If you’ve ever felt the pain of managing service dependencies in cloud environments, give this pattern a try. It’s reliable, clean, and fully automatable. Whether you're scaling to production or just learning, chaining services with systemd and checking health with Bash is an incredibly powerful combo.
Got questions? Want the full Pulumi code? I’m happy to share.