Cloud Metrics Mastery: Deploying Prometheus, Grafana, and Alertmanager with Endpoints on an Ubuntu VPS for Real-Time Monitoring

Introduction

In this project, we’ll set up a powerful monitoring stack using Prometheus, Grafana, and Alertmanager with Endpoints on an Ubuntu VPS. This setup is perfect for DevOps engineers looking to monitor servers, applications, and system health in real time.

Unlike most tutorials that run on a local VM, we’re stepping it up by hosting everything on a VPS (Virtual Private Server) — making our monitoring system accessible over the internet and closer to a real-world deployment scenario. No VS Code is required. Everything is handled via SSH, giving us full control without overloading our system.

Prerequisites

i. A VPS running Ubuntu 20.04+

ii. SSH access and optional XRDP GUI access (see my previous blog post on setting up an XRDP GUI for an Ubuntu server)

iii. Basic familiarity with terminal commands

iv. Open ports: 9090 (Prometheus), 3000 (Grafana), 9093 (Alertmanager)

v. Internet connection to download tarballs

vi. wget, tar, and systemctl already available on your VPS

Why Are We Binding to 0.0.0.0 Instead of localhost?

By default, services like Prometheus and Grafana bind to localhost, meaning they can only be accessed from the machine itself (i.e., http://localhost:9090).

But since we're using a VPS (a remote server), we want to access the dashboards from our local browser. That's where 0.0.0.0 comes in: it tells the service to listen on all available network interfaces, making it accessible via the public IP address of the VPS, like http://your-vps-public-ip:9090

This is essential for remote access and dashboard visibility. Always ensure your firewall and security groups are configured to allow access to these ports.

Let's get started.

STEP 1: Update VPS & Create a Working Directory

SSH into your VPS server with the command "ssh your-user@your-vps-ip"

Then update and prepare your server with the command "sudo apt update && sudo apt upgrade -y". Keeping your VPS updated is important for security and stability.

Next, install the essential tools: "sudo apt install wget curl unzip -y"

Finally, create a folder to house all your monitoring tools and cd into it so we can work inside it, with the command "mkdir -p ~/monitoring-stack && cd ~/monitoring-stack". For the course of this project our directory will be named "monitoring-stack".

Step 2. Download Prometheus

Run the command "wget https://github.com/prometheus/prometheus/releases/download/v3.4.0/prometheus-3.4.0.linux-amd64.tar.gz"

Why are we running this command? It grabs a stable release of Prometheus from GitHub; we're using version 3.4.0 here. You can always check the official site prometheus.io/download to verify the latest version.

Next, extract and rename the folder we just downloaded. First run "tar -xvf prometheus-3.4.0.linux-amd64.tar.gz" to extract the downloaded file, then run "mv prometheus-3.4.0.linux-amd64 Prometheus" to rename the long folder name to just "Prometheus" so it is easier to work with.
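If you like, the same download can be parameterised on the version number, which makes future upgrades a one-line change. A small sketch (the echoed URL is the same one used above; the commented line is what you would actually run on the VPS):

```shell
# Pin the Prometheus version once; reuse it for the tarball name and URL.
PROM_VERSION="3.4.0"
TARBALL="prometheus-${PROM_VERSION}.linux-amd64.tar.gz"
URL="https://github.com/prometheus/prometheus/releases/download/v${PROM_VERSION}/${TARBALL}"
echo "$URL"
# On the VPS: wget "$URL" && tar -xvf "$TARBALL" && mv "prometheus-${PROM_VERSION}.linux-amd64" Prometheus
```

Bumping PROM_VERSION is then all it takes to fetch a newer release.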

Step 3. Move Prometheus & Promtool Binaries to Global Path.

We will run the command "sudo cp Prometheus/prometheus /usr/local/bin/" and also "sudo cp Prometheus/promtool /usr/local/bin/" to copy the Prometheus and Promtool binaries.

Now by default, these binaries are inside the Prometheus folder, and to run them you will need to type something like ./Prometheus/prometheus every time. So Instead we move them to /usr/local/bin/, which is a system directory that stores global executables. That way, you can just type prometheus from anywhere and it’ll run.

Then we can run the command "prometheus --version" to check that we actually have the version we downloaded earlier.

Then we will set permissions for both prometheus and promtool. These permissions ensure the system's root user owns the binaries, which is standard practice for tools installed system-wide and improves security and consistency. So we run the commands "sudo chown root:root /usr/local/bin/prometheus" and "sudo chown root:root /usr/local/bin/promtool"

Then we make sure they are executable (usually they already are, but just to be sure) with the commands "sudo chmod +x /usr/local/bin/prometheus" and "sudo chmod +x /usr/local/bin/promtool"
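A quick sanity loop (safe to run anywhere, since it only reports and never modifies anything) confirms both binaries resolve from PATH after the copy:

```shell
# Check that prometheus and promtool are reachable from any directory.
for bin in prometheus promtool; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin -> $(command -v "$bin")"
  else
    echo "$bin not on PATH - repeat the cp step above" >&2
  fi
done
```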

Step 4. Create and Edit the Prometheus Configuration File to Start with 0.0.0.0

Prometheus needs a config file called prometheus.yml to know what metrics to scrape, from which targets, and how often. We'll create this file inside the Prometheus folder where Prometheus lives, and update the configuration so our Prometheus is reachable from any host, not just from inside the VPS. We'll run the command "sudo nano ~/monitoring-stack/Prometheus/prometheus.yml", which opens the Prometheus config file in the nano editor.

Then enter the following configuration:

```yaml
# Global config
global:
  scrape_interval: 15s
  evaluation_interval: 15s

# Alertmanager config
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules files
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# Scrape config
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["0.0.0.0:9090"]
        labels:
          app: "prometheus"
```

Breakdown of the prometheus.yml Config File.

  1. alerting: Section

```yaml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
```

What this means: Prometheus can send alerts to Alertmanager, a tool that handles alert notifications (like sending an email, Slack message, etc.). Right now the target is commented out with a #, meaning Alertmanager is not yet active. Later, when we install and configure Alertmanager, we'll uncomment this and set the correct IP and port.

  2. rule_files: Section

```yaml
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
```

What this means: This section is for defining alert rules (e.g., "Alert me when CPU usage is > 80%"). Right now it's just placeholders, also commented out. We can later create .yml files with custom alert rules and include them here.

  3. scrape_configs: Section

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
        labels:
          app: "prometheus"
```

What's happening here:

job_name: This is just a name — Prometheus uses it as a label for organizing metrics.

targets: These are the endpoints Prometheus will collect metrics from.

localhost:9090 means it's scraping metrics from itself.

labels: Extra tags to help organize the data, here we’re tagging this target with app=prometheus

Upgrade needed: change localhost to 0.0.0.0 under the scrape_configs section. Since you're running Prometheus on a VPS, update targets: ["localhost:9090"] to targets: ["0.0.0.0:9090"].

Why?

Both localhost:9090 and 0.0.0.0:9090 resolve to the local machine here, so Prometheus still scrapes itself either way; the scrape target only tells Prometheus where to pull metrics from. What actually makes the dashboard reachable from outside is binding the web server to all interfaces with --web.listen-address=0.0.0.0:9090, which we set in the systemd service in Step 5. Once that is in place, you can open Prometheus externally at:

http://your-vps-ip:9090.

We’ll add more targets (endpoints) later. For now, save and exit with Ctrl + O to save, Enter to confirm, and Ctrl + X to exit nano.

Please Note : Prometheus works with a handy built-in tool called promtool to validate configuration files.

So we have to test the config file we updated to be sure everything is okay. Run the command "promtool check config ~/monitoring-stack/Prometheus/prometheus.yml". We should get a SUCCESS output showing that the configuration is valid.
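The edit-then-validate loop can also be scripted. The sketch below writes a minimal config to a temporary directory (a stand-in for ~/monitoring-stack/Prometheus, so it is safe to dry-run anywhere) and greps it as a cheap pre-check; on the VPS you would point promtool at the real file instead:

```shell
# Write a minimal prometheus.yml into a throwaway dir and sanity-check it.
CONFIG_DIR="$(mktemp -d)"
cat > "${CONFIG_DIR}/prometheus.yml" <<'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["0.0.0.0:9090"]
EOF
grep -q 'job_name: "prometheus"' "${CONFIG_DIR}/prometheus.yml" && echo "config written"
# On the VPS: promtool check config "${CONFIG_DIR}/prometheus.yml"
```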

Step 5. Create a Systemd Service File.

This lets Prometheus start like a proper daemon, and restart on reboots. We’ll create a prometheus.service file.

Run the command "sudo nano /etc/systemd/system/prometheus.service". It opens a nano file, and we will paste the following inside it.

```ini
[Unit]
Description=Prometheus Monitoring
Wants=network-online.target
After=network-online.target

[Service]
User=root
ExecStart=/usr/local/bin/prometheus \
  --config.file=/root/monitoring-stack/Prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus/ \
  --web.console.templates=/root/monitoring-stack/Prometheus/consoles \
  --web.console.libraries=/root/monitoring-stack/Prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090
Restart=always

[Install]
WantedBy=multi-user.target
```

Explanation (Line by Line):

[Unit]: Defines metadata for systemd. We want the network to be online before starting.

[Service]: Tells systemd what to run.

User=root: Run as root (safe here on a personal VPS).

ExecStart: Where Prometheus is installed and what config to use:

--config.file: Your Prometheus config file path.

--storage.tsdb.path: Where to store time series data.

--web.listen-address=0.0.0.0:9090: Opens it to external access — so we can reach it from a browser.

--web.console.*: These are default template paths included in Prometheus.

Restart=always: Ensures Prometheus restarts if it crashes.

[Install]: Makes sure the service can run during boot.

Then save and exit with Ctrl + O to save, Enter to confirm, and Ctrl + X to exit nano.
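If you prefer not to edit interactively, the same unit file can be generated with a heredoc. A sketch, writing to a temporary path so it is safe to dry-run (the console-template flags are omitted for brevity; on the VPS you would target /etc/systemd/system/prometheus.service):

```shell
# Generate the Prometheus unit file; UNIT points at a temp file for a dry run.
UNIT="$(mktemp)"
cat > "$UNIT" <<'EOF'
[Unit]
Description=Prometheus Monitoring
Wants=network-online.target
After=network-online.target

[Service]
User=root
ExecStart=/usr/local/bin/prometheus \
  --config.file=/root/monitoring-stack/Prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus/ \
  --web.listen-address=0.0.0.0:9090
Restart=always

[Install]
WantedBy=multi-user.target
EOF
grep -c '^\[' "$UNIT"   # prints 3: [Unit], [Service], [Install]
```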

Step 6. Reload system, start Prometheus, and enable it on boot.

For this step, all we have to do is run these commands one by one.

" sudo systemctl daemon-reexec "

" sudo systemctl daemon-reload "

" sudo systemctl start prometheus "

" sudo systemctl enable prometheus "

Explanation:

  1. sudo systemctl daemon-reexec : This ensures systemd itself is re-executed completely. Use this when you create new services.

  2. sudo systemctl daemon-reload : This reloads the systemd manager configuration to detect the new Prometheus service.

  3. sudo systemctl start prometheus : Starts Prometheus right now.

  4. sudo systemctl enable prometheus : Makes Prometheus start automatically on system boot.

Once done, check the status to confirm Prometheus is running smoothly by running the command "sudo systemctl status prometheus"

You should see green active (running) text.

Finally, since Prometheus is bound to 0.0.0.0:9090 (which means it listens on all IP addresses), open your browser and go to: http://your-vps-public-ip:9090

You should see the Prometheus dashboard open. This is your metrics playground where you can run queries, see targets, and later add Grafana + Alertmanager too.
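As a first query, try the built-in up metric, which reports 1 for every target Prometheus can scrape. The sketch below only builds the URL for Prometheus's HTTP query API; the commented curl is what you would run on the VPS itself:

```shell
# Build a query URL for Prometheus's HTTP API.
QUERY="up"
API_URL="http://localhost:9090/api/v1/query?query=${QUERY}"
echo "$API_URL"
# On the VPS: curl -s "$API_URL"   # JSON result with one sample per healthy target
```

The same up query typed into the dashboard's expression box gives the equivalent result in the browser.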

Step 7. Create a system user for Alertmanager

Before we start running commands, we have to get out of the directory that holds the Prometheus configurations, named monitoring-stack, with the command "cd ..". Then run "sudo useradd --no-create-home --shell /bin/false alertmanager"

Why are we doing this?

We’re creating a dedicated Linux system user called alertmanager to securely run the Alertmanager service. This is a best practice in DevOps to isolate services and limit their permissions, improving system security.

Here’s what each flag means:

sudo: Gives us administrative privileges.

useradd: Command to create a new user.

--no-create-home: This tells the system not to create a home directory for the user because Alertmanager doesn't need one.

--shell /bin/false: This sets the shell to /bin/false so the user can’t log in interactively — again, for security.

alertmanager: The name of the new system user we’re creating.

This user will later be assigned ownership of the Alertmanager binaries and configuration files so that only it can manage or run them

Next, create a directory for Alertmanager with the command "sudo mkdir /etc/alertmanager". Then run the command "cd /tmp/" to switch into the temporary folder where we'll download and unpack Alertmanager.

The next thing we have to do is download Alertmanager; run this command: "wget https://github.com/prometheus/alertmanager/releases/download/v0.28.1/alertmanager-0.28.1.linux-amd64.tar.gz"

This command uses wget to download the tarball (compressed file) of Alertmanager directly from the official Prometheus GitHub release page

Extract the file with command " tar -xvf alertmanager-0.28.1.linux-amd64.tar.gz "

The command above unpacks the contents of the .tar.gz file, giving us access to the actual Alertmanager binary (alertmanager) and its tool (amtool), along with default config files.

Then run the command "cd alertmanager-0.28.1.linux-amd64/"

After we cd into that directory, we move the Alertmanager binaries to /usr/local/bin by running these commands: "sudo mv alertmanager /usr/local/bin/" and, immediately after, "sudo mv amtool /usr/local/bin/"

Then we set ownership and restrict permissions with command " sudo chown alertmanager:alertmanager /usr/local/bin/alertmanager " and " sudo chown alertmanager:alertmanager /usr/local/bin/amtool "

Please note that with the commands above, we're assigning ownership of both binaries to the alertmanager system user we created earlier. This improves security by limiting who can execute or modify these binaries.

Then move the configuration file into the /etc/alertmanager directory by running the command "sudo mv alertmanager.yml /etc/alertmanager/"

Set the ownership of the /etc/alertmanager directory: Run command " sudo chown -R alertmanager:alertmanager /etc/alertmanager/ "

This makes sure only the alertmanager service user has control over its own configs, a key part of hardening your server setup.
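For reference, the default alertmanager.yml we just moved looks roughly like the fragment below. These are the stock defaults shipped in the tarball (the exact receivers and timings may differ slightly between versions); we are not changing them in this project, but this is the file you would later edit to route alerts to email, Slack, and so on:

```yaml
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: 'web.hook'
receivers:
  - name: 'web.hook'
    webhook_configs:
      - url: 'http://127.0.0.1:5001/'
```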

Step 8. Create a system Service File for Alertmanager

We're setting up Alertmanager to run as a background service (daemon) on your system. This makes it easy to start, stop, enable on boot, and manage like other system services. We will run the command "sudo nano /etc/systemd/system/alertmanager.service". Once the nano file opens, paste the following configs into it.

```ini
[Unit]
Description=Alertmanager Service
Wants=network-online.target
After=network-online.target

[Service]
User=alertmanager
Group=alertmanager
Type=simple
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/var/lib/alertmanager \
  --web.listen-address=0.0.0.0:9093

[Install]
WantedBy=multi-user.target
```

Configuration Explanation:

[Unit] Defines when this service should start (after network is up).

[Service] - Runs the alertmanager binary as the alertmanager user.

  • Loads the config file from /etc/alertmanager/alertmanager.yml

  • Stores data in /var/lib/alertmanager.

  • --web.listen-address=0.0.0.0:9093 to allow external access from our browser

[Install] Ensures the service starts automatically on boot under multi-user mode

Since we're deploying on a VPS and accessing it from outside, we use an updated version of the service file. It is a slightly upgraded Alertmanager service file from the one shared in class: it includes --storage.path to prevent startup errors and --web.listen-address=0.0.0.0:9093 to allow external access from our browser. So now, to get Alertmanager fully up and running, we will create the data directory with the command "sudo mkdir -p /var/lib/alertmanager", then change ownership with the command "sudo chown -R alertmanager:alertmanager /var/lib/alertmanager"

After pasting, press: CTRL + O to save, Enter to confirm, CTRL + X to exit

Step 9: Stop Prometheus

After we have updated the Alertmanager config, we have to stop Prometheus with the command "sudo systemctl stop prometheus"

Then we navigate back to our original directory that housed Prometheus with the command "cd ~/monitoring-stack"

Then we update the Prometheus config to include Alertmanager. Open the Prometheus config file with the command "sudo nano Prometheus/prometheus.yml"

Edit the alerting section (remove the commented-out one if it's still there) and make sure this is included under alerting:

```yaml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - "0.0.0.0:9093"
```

Why use 0.0.0.0:9093 instead of localhost:9093? This target tells Prometheus where to reach Alertmanager, and since both run on the same VPS, either address resolves locally. What actually lets you open Alertmanager from outside the VPS (e.g., in your browser) is the --web.listen-address=0.0.0.0:9093 flag in the service file, so binding to 0.0.0.0:9093 is the approach we use throughout this project.

Step 10. Reload systemd so it picks up the new service

Now that the Alertmanager service file is ready, we need to reload systemd, start the Alertmanager service, enable it on boot, and check its status.

Firstly, we'll run the command "sudo systemctl daemon-reexec". This tells systemd to restart itself (not the services, just the manager). Why this is important: sometimes after upgrading system components or making low-level changes, you need systemd to "re-execute" itself so it can operate with the latest binary and environment. For our case this step is optional but safe; it just ensures the system manager itself is clean and up to date.

Then run the command "sudo systemctl daemon-reload". This reloads all the service unit files (like our alertmanager.service) so systemd can pick up the latest changes.

Why it's important: every time you edit a service file (like we just did to add --storage.path and --web.listen-address), systemd doesn't automatically know you made a change. Running this command tells systemd to re-read all those .service files from scratch; without it, your edits won't take effect.

Then we run this command " sudo systemctl restart alertmanager " This command stops the running Alertmanager service (if it's already running) and then starts it again.

We will also check that the service is running correctly with the command "sudo systemctl status alertmanager". It should show active (running).

Then Finally we have to open Port 9093 on UFW to access our alertmanager on a browser outside our VPS.

Run command " sudo ufw allow 9093 and " sudo ufw reload "

Since ufw was inactive earlier and you activated it, remember to also open SSH (port 22) so you don’t lock yourself out: command is " sudo ufw allow ssh " and " sudo ufw enable "
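The full set of ports this stack needs is easy to lose track of. A small loop can print every ufw rule (port 22 included so SSH stays reachable) for you to review before running them with sudo; this sketch only echoes the commands, it does not execute them:

```shell
# Print (don't run) the ufw rules this stack needs; review, then run them yourself.
for port in 22 9090 9093 3000; do
  echo "sudo ufw allow ${port}"
done
```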

Step 11. Installing Grafana Enterprise v12.0.1 on Ubuntu VPS

Run the command " sudo apt-get install -y adduser libfontconfig1 musl " This installs essential libraries (libfontconfig1 for font rendering, musl for compatibility) and adduser for creating the Grafana service user. These are required for Grafana to run smoothly.

Next, move into the temporary directory with the command "cd /tmp/". This is where we'll download and install Grafana from, but we won't be storing it here permanently.

Then download Grafana with the command "wget https://dl.grafana.com/enterprise/release/grafana-enterprise_12.0.1_amd64.deb"

Then install Grafana with the command "sudo dpkg -i grafana-enterprise_12.0.1_amd64.deb"

Start Grafana with the command "sudo systemctl start grafana-server"

Enable Grafana to start on boot with "sudo systemctl enable grafana-server"

Check the status of Grafana with the command "sudo systemctl status grafana-server"

Now, since our main aim from the beginning of this project is to access all our resources from outside the VPS server, we have to configure Grafana to listen on 0.0.0.0:3000, like we did for Prometheus and Alertmanager earlier.

So we run this command to open up Grafana's configuration: "sudo nano /etc/grafana/grafana.ini". Then search (Ctrl+W) for the line ;http_addr = and change it to http_addr = 0.0.0.0 (removing the leading semicolon uncomments the setting).

Why? 0.0.0.0 allows Grafana to accept traffic from any IP address — which is necessary if you're accessing it via the public internet.
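A non-interactive alternative to the nano edit is a one-line sed substitution. The sketch below rehearses it on a throwaway copy (the two printf lines mimic the relevant part of the stock /etc/grafana/grafana.ini); on the VPS, back up the real file first and point sed at it instead:

```shell
# Rehearse the http_addr edit on a temp copy of grafana.ini.
GRAFANA_INI="$(mktemp)"
printf ';http_addr =\nhttp_port = 3000\n' > "$GRAFANA_INI"
sed -i 's/^;http_addr =.*/http_addr = 0.0.0.0/' "$GRAFANA_INI"
grep '^http_addr' "$GRAFANA_INI"   # prints: http_addr = 0.0.0.0
```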

Then Restart Grafana to apply the config with command " sudo systemctl restart grafana-server "

Then Open Port 3000 in UFW (Uncomplicated Firewall), If UFW is enabled on your VPS, you need to allow port 3000 command is " sudo ufw allow 3000 "

Verify access in browser Now, open your browser on your local device and go to http://VPS_PUBLIC_IP:3000

You should see the Grafana login screen.

Login with:

Username: admin

Password: admin (you’ll be prompted to reset it).

Step 12. Add Prometheus as a Data Source in Grafana

Since we are already on the Grafana web page, we'll log in with these credentials: Username: admin, Password: admin (you will be prompted to change it).

Then Add Prometheus as a Data Source

  1. From the left sidebar, click Connections, then under Connections click Data sources.

  2. Click the blue "Add data source" button.

  3. Select Prometheus.

  4. Under HTTP > URL, enter: http://your_vps_ip:9090

  5. Leave everything else as default and scroll down.

  6. Click "Save & Test".

You should see the message: "Successfully queried the Prometheus API."

Step 13. Add Endpoints to Prometheus

What are we doing? We are telling Prometheus to also monitor metrics from:

Alertmanager (running on port 9093)

Grafana (running on port 3000)

Now we will go back and open the Prometheus config file; on the terminal, run the command "sudo nano ~/monitoring-stack/Prometheus/prometheus.yml" to open the Prometheus configuration.

N.B: "Since we installed Prometheus manually and not from the official package repo, the config file isn’t at the default system path (/etc/prometheus/prometheus.yml). Instead, we’re using a custom path inside our monitoring-stack directory where we extracted Prometheus.

Now paste these new job configs at the bottom of the scrape_configs section (the Alertmanager job follows the same pattern as Grafana's, pointing at port 9093):

```yaml
  - job_name: "alertmanager"
    static_configs:
      - targets: ["YOUR_VPS_PUBLIC_IP:9093"]

  - job_name: "grafana"
    static_configs:
      - targets: ["YOUR_VPS_PUBLIC_IP:3000"]
```

Once done, save and restart Prometheus with "sudo systemctl restart prometheus". Then run "sudo systemctl status prometheus" to check that it's active and running.

What's Next:

  1. Visit Prometheus Dashboard go to: http://your_vps_public_ip:9090

  2. Click Status > Targets

  3. You should see:

    prometheus target ✔

    alertmanager target ✔

    grafana target ✔
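To double-check from the VPS shell rather than the browser, each component exposes a simple health endpoint (Prometheus and Alertmanager at /-/healthy, Grafana at /api/health). This sketch only prints the URLs to probe; uncomment the curl line when running it on the server itself:

```shell
# Health endpoints worth probing once everything is up.
for url in \
  "http://localhost:9090/-/healthy" \
  "http://localhost:9093/-/healthy" \
  "http://localhost:3000/api/health"; do
  echo "check: $url"
  # curl -fsS "$url" && echo "  OK"
done
```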

Conclusion

In this project, we set up a full-stack monitoring solution using Prometheus, Alertmanager, and Grafana on an Ubuntu-based VPS. We followed a hands-on approach aligned with my tutor’s instructions, made key adjustments to bind services to 0.0.0.0 for external accessibility, and overcame real-world configuration challenges along the way.

From installing Prometheus and integrating Alertmanager for alerts, to setting up Grafana for rich visualization — this setup provides powerful observability into system metrics. It’s more than just a tutorial; it’s a real-world DevOps scenario, and I’m glad to have documented each phase for the community.

Whether you’re a DevOps enthusiast, beginner, or cloud learner, I hope this blog empowers you to deploy your own monitoring stack with confidence.

Let’s keep building, keep shipping, and keep documenting every step of the journey

Thank you guys, see you in the next one.

Tech Man.

Written by Stillfreddie Techman