# How to Build and Run a Simple Node.js API on Rocky Linux: A Starter Guide for DevOps

Table of contents
- Why Node.js and Rocky Linux? (The Dynamic Duo)
- Step 1: Laying the Foundation (Project Structure)
- Step 2: Crafting the API (The Code)
- Step 3: Gathering the Tools (Environment and Dependencies)
- Step 4: Starting the Engine (Running the API)
- Step 5: Making it a Tireless Worker (Running as a Service with systemd)
- Step 6: Leveling Up Your DevOps Skills (Next Steps)
- Conclusion: Your API is Now a Reliable Worker!

Are you a budding DevOps engineer or someone just dipping your toes into the world of API management and Linux deployment? This guide will walk you through building, structuring, and running a basic yet illustrative Node.js REST API on the sturdy foundation of Rocky Linux. We'll even go a step further and configure it to run as a service, the unsung hero of reliable background processes.
## Why Node.js and Rocky Linux? (The Dynamic Duo)
Think of Node.js as a nimble messenger for your web applications. It's fast, efficient, and uses JavaScript, a language familiar to many. Rocky Linux, on the other hand, is like a dependable fortress – stable, secure, and built for serious work, making it an ideal training ground for DevOps practices and real-world deployments.
## Step 1: Laying the Foundation (Project Structure)
Before we build our API, let's organize our tools. Imagine you're setting up a workshop: you wouldn't just throw everything in a pile! A well-organized project structure keeps things tidy and manageable:
```
api_mgmt/
├── server.js             # The main control panel
├── package.json          # List of tools and how to start
├── .env                  # Secret blueprints (environment variables)
├── .gitignore            # Things to keep private from the blueprint tracker
├── api_mgmt.service      # Instructions for our tireless worker (service)
├── controllers/          # The skilled workers handling specific tasks
│   └── healthController.js  # A worker checking the system's pulse
├── routes/               # The delivery routes for requests
│   └── items.js          # Routes for managing our inventory
├── models/               # Blueprints for our data (not used in this simple example)
└── README.md             # The workshop manual
```
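If you'd like to scaffold this layout from the terminal, something like the following works (just a convenience sketch; create the files however you prefer):

```bash
mkdir -p api_mgmt/controllers api_mgmt/routes api_mgmt/models
cd api_mgmt
touch server.js package.json .env .gitignore api_mgmt.service README.md
touch controllers/healthController.js routes/items.js
```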
## Step 2: Crafting the API (The Code)
Now, let's build the core components of our API. Think of it as assembling different parts of a machine.
### `server.js` (The Control Panel)
This is the brain of our operation, setting up the basic communication lines (Express), logging activities (Morgan), allowing cross-origin chatter (CORS), understanding incoming messages (JSON), and directing traffic (routes).
```js
const express = require('express');
const morgan = require('morgan');
const cors = require('cors');
const healthController = require('./controllers/healthController');
const itemsRouter = require('./routes/items');

const app = express();
const PORT = process.env.PORT || 3000; // The default communication channel

app.use(morgan('dev'));    // Log every request for debugging
app.use(cors());           // Allow requests from different web addresses
app.use(express.json());   // Understand incoming data in JSON format

app.get('/api/health', healthController.health); // A route to check if the API is alive
app.use('/api/items', itemsRouter);              // Routes for managing our items

app.listen(PORT, () => {
  console.log(`API server running on http://localhost:${PORT}`);
});
```
### `controllers/healthController.js` (The System Checker)
This is a simple worker whose only job is to report if the API is healthy.
```js
exports.health = (req, res) => {
  res.json({ status: 'ok', message: 'API is running' });
};
```
### `routes/items.js` (The Inventory Manager)
These are the delivery routes for managing our "items" – creating, reading, updating, and deleting them (CRUD).
```js
const express = require('express');
const router = express.Router();

let items = []; // Our in-memory inventory (for simplicity; it resets on every restart)

router.get('/', (req, res) => res.json(items)); // Get all items

router.post('/', (req, res) => { // Add a new item (the client supplies the id)
  const item = req.body;
  items.push(item);
  res.status(201).json(item); // Report successful creation
});

router.put('/:id', (req, res) => { // Update an existing item
  const id = parseInt(req.params.id);
  const index = items.findIndex((item) => item.id === id);
  if (index === -1) return res.status(404).json({ error: 'Item not found' });
  items[index] = { ...req.body, id }; // Keep the id stable even if the body omits it
  res.json(items[index]);
});

router.delete('/:id', (req, res) => { // Remove an item
  const id = parseInt(req.params.id);
  items = items.filter((item) => item.id !== id);
  res.status(204).send(); // Report successful deletion (no content)
});

module.exports = router;
```
## Step 3: Gathering the Tools (Environment and Dependencies)

Our API needs some tools to run. We'll define them in `package.json` and configure some settings in `.env`.
### `.env` (The Secret Blueprints)
This file holds configuration that might change depending on where our API is running (e.g., development, production).
```
# The port our API will listen on
PORT=3000
```
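One caveat worth knowing: plain `node server.js` does not read `.env` on its own. Node.js 20.6+ can load it with the built-in `--env-file` flag, or you can wire in the `dotenv` package; a quick sketch of both options:

```bash
# Option 1: built-in flag (Node.js 20.6 or newer)
node --env-file=.env server.js

# Option 2: the dotenv package
npm install dotenv
# then add this as the first line of server.js:
#   require('dotenv').config();
```

When we run the API under systemd later, the `EnvironmentFile=` directive takes care of this for us.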
### `package.json` (The Toolbox Inventory)
This file lists the external libraries our API needs (like Express, Morgan, and CORS) and defines how to start our application.
"name": "api_mgmt",
"version": "1.0.0",
"description": "Simple Node.js API",
"main": "server.js",
"scripts": {
"start": "node server.js" // The command to start our API
},
"author": "Your Name",
"license": "ISC",
"dependencies": {
"express": "^5.1.0",
"morgan": "^1.10.0",
"cors": "^2.8.5"
}
}
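Before installing dependencies, make sure Node.js and npm are available on your Rocky Linux machine. The AppStream module repository is one straightforward route (a sketch; the available streams vary by release, and `dnf module list nodejs` shows your options):

```bash
# Enable a Node.js module stream and install it (Node.js 20 shown as an example)
sudo dnf module enable -y nodejs:20
sudo dnf install -y nodejs
node --version && npm --version
```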
Now, let's gather these tools using npm (Node Package Manager):
```bash
npm install
```
## Step 4: Starting the Engine (Running the API)
Let's fire up our API manually to see if it works:
```bash
npm start
```
You should see a message like `API server running on http://localhost:3000`. Now, let's test if our "health check" worker is responding:
```bash
curl http://localhost:3000/api/health
```
You should get a JSON response: `{"status":"ok","message":"API is running"}`.
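The items routes can be exercised the same way. Here's a quick sketch (the `name` field is just an illustrative attribute; this simple API accepts any JSON body):

```bash
# Create an item (the client supplies the id in this simple API)
curl -X POST http://localhost:3000/api/items \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "name": "widget"}'

# List all items
curl http://localhost:3000/api/items

# Update item 1
curl -X PUT http://localhost:3000/api/items/1 \
  -H "Content-Type: application/json" \
  -d '{"name": "widget v2"}'

# Delete item 1
curl -X DELETE http://localhost:3000/api/items/1
```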
## Step 5: Making it a Tireless Worker (Running as a Service with systemd)

Running our API with `npm start` is fine for development, but for a more robust and production-like setup, we want it to run as a service. Think of a service as a dedicated, always-on worker managed by the system itself. On Linux, `systemd` is the foreman that manages these workers.
**1. Create the Service Definition (`api_mgmt.service`):**

This file contains the instructions for `systemd` on how to manage our API. Create it in your project directory (we'll copy it into `/etc/systemd/system/` shortly):

```bash
nano api_mgmt.service
```
Paste the following, adjusting `User`, `WorkingDirectory`, `ExecStart`, and `EnvironmentFile` to match your setup:
```ini
[Unit]
Description=Node.js API Management Server
After=network.target

[Service]
Type=simple
# 🔍 Replace with your actual user on the system
User=mrdevopsguy
# 🔍 Replace with the actual path to your project
WorkingDirectory=/home/mrdevopsguy/DevOpsProjects/Projects/api_mgmt
# 🔍 Replace if your Node.js executable is in a different location
ExecStart=/usr/bin/node server.js
Restart=on-failure
# 🔍 We'll set this up next for environment variables
EnvironmentFile=/etc/api_mgmt.env

[Install]
WantedBy=multi-user.target
```
**2. Prepare the Environment File:**

We'll copy our `.env` file to `/etc/` and set appropriate permissions so our service can read the port configuration:

```bash
sudo cp .env /etc/api_mgmt.env
sudo chmod 644 /etc/api_mgmt.env
```
**3. Tell systemd about our new worker and start it:**

```bash
sudo cp api_mgmt.service /etc/systemd/system/
sudo systemctl daemon-reload           # Reload systemd to recognize the new service file
sudo systemctl enable --now api_mgmt   # Enable the service to start on boot and start it now
sudo systemctl status api_mgmt         # Check if our worker started successfully
```
If all goes well, the output of the last command should show the service as `active (running)`.
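To keep an eye on the worker, `journalctl` shows the service's logs, and a quick `curl` confirms it's serving traffic (assuming the default port 3000):

```bash
# Follow the service's logs in real time (Ctrl+C to stop)
sudo journalctl -u api_mgmt -f

# Confirm the API responds now that systemd manages it
curl http://localhost:3000/api/health
```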
## Step 6: Leveling Up Your DevOps Skills (Next Steps)
Running a basic API as a service is just the beginning! To further enhance your DevOps practice, consider exploring these areas:
- **Security:** Implement authentication (such as JWT or API keys) to protect your API.
- **Monitoring:** Integrate logging and monitoring tools (like Prometheus and Grafana) to track your API's performance and health.
- **Documentation:** Add OpenAPI (Swagger) documentation to make your API easy for others to use.
- **Containerization:** Package your API in a Docker container for consistent deployments across different environments (see the sketch after this list).
- **Automation:** Set up CI/CD (Continuous Integration/Continuous Delivery) pipelines to automate the build, test, and deployment process.
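As a first taste of the containerization step, here is a minimal Dockerfile sketch for this project (assumptions: the `node:20-alpine` base image and a production-only dependency install; adjust to taste):

```dockerfile
# Minimal container image for the api_mgmt project
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```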
## Conclusion: Your API is Now a Reliable Worker!

Congratulations! You've not only built a simple Node.js API on Rocky Linux but also learned how to run it as a reliable background service managed by `systemd`. This is a fundamental skill in the DevOps world, allowing you to build and deploy applications that are robust and easily managed. Keep experimenting and building – the world of APIs and DevOps is vast and exciting!