How to Build a Web Application Using Rust, Axum, Prometheus and Grafana

James Kessler

Axum is an async web application framework for Rust. To see how our app behaves in production, we need to set up monitoring and alerting.

Prometheus is an open-source systems monitoring and alerting toolkit. It is designed to collect metrics from configured targets at given intervals, evaluate rule expressions, display the results, and trigger alerts if certain conditions are observed. Prometheus is particularly well-suited for monitoring dynamic cloud environments and microservices architectures due to its ability to handle high-dimensional data and its powerful query language, PromQL.

Grafana is an open-source platform for monitoring and observability. It provides a rich set of features for visualizing time-series data, which makes it an excellent tool for creating dashboards and graphs to display metrics collected by Prometheus. Grafana supports a wide range of data sources, including Prometheus, and allows users to create dynamic and interactive dashboards.

Axum is not set up to work with Prometheus out of the box, but with a few steps we can wire it up and test everything locally using Docker containers for Prometheus and Grafana.

Let’s see how we can set up a basic Axum web server in Rust with Prometheus metrics and a Grafana dashboard.

Axum

We’ll begin by setting up an Axum server based on the hello-world example in the Axum repository, but if you already have an Axum server set up, feel free to skip this part.

Create a Cargo.toml file at the root of your project

[package]
name = "rust-prometheus-grafana"
version = "0.1.0"
edition = "2024"
publish = false

[dependencies]
axum = "0.8.3"
tokio = { version = "1.44.2", features = ["full"] }

And add a main.rs file. I’ll be creating mine at /src/main.rs

use axum::{response::Html, routing::get, Router};

#[tokio::main]
async fn main() {
    // build our application with a route
    let app = Router::new().route("/", get(handler));

    // run it
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .unwrap();
    println!("listening on {}", listener.local_addr().unwrap());
    axum::serve(listener, app).await.unwrap();
}

async fn handler() -> Html<&'static str> {
    Html("<h1>Hello, World!</h1>")
}

Verify this works by running cargo run and visiting http://localhost:3000. We should see our very simple message!

Set Up Prometheus Handler

Now we will add a route to serve Prometheus metrics. By default, Prometheus expects metrics to be served from the /metrics endpoint.

Create a new file /src/metrics_router.rs with the following content:

use axum::{Router, routing::get};
use metrics_exporter_prometheus::{Matcher, PrometheusBuilder, PrometheusHandle};
use std::future::ready;

/// Sets up and configures a Prometheus metrics recorder.
///
/// This function initializes a Prometheus metrics recorder with custom bucket
/// configurations for the `http_request_duration_seconds` metric. The buckets
/// are defined as exponential intervals in seconds.
///
/// # Returns
/// A `PrometheusHandle` that can be used to render the metrics in a Prometheus-compatible format.
fn setup_metrics_recorder() -> PrometheusHandle {
    const EXPONENTIAL_SECONDS: &[f64] = &[0.005, 0.01, 0.025, 0.05, 0.1];

    PrometheusBuilder::new()
        .set_buckets_for_metric(
            Matcher::Full("http_requests_duration_seconds".to_string()),
            EXPONENTIAL_SECONDS,
        )
        .unwrap()
        .install_recorder()
        .unwrap()
}

/// Creates an Axum router with a `/metrics` endpoint.
///
/// The `/metrics` endpoint serves Prometheus metrics in a text format. The
/// metrics are collected using the Prometheus recorder configured by
/// `setup_metrics_recorder`.
///
/// # Returns
/// An `axum::Router` instance with the `/metrics` route configured.
pub fn create_router() -> Router {
    let recorder_handle = setup_metrics_recorder();
    Router::new().route("/metrics", get(move || ready(recorder_handle.render())))
}
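
Note that install_recorder() installs the recorder as the global default for the metrics crate, so it returns an error (and the unwrap() panics) if a recorder has already been installed. Call create_router() once at startup and reuse the router it returns.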

Now we’ve got a router ready to serve metrics, but we still need a middleware to record them. Create another new file /src/metrics_middleware.rs as below:

use axum::body::Body;
use axum::{extract::MatchedPath, http::Request, middleware::Next, response::IntoResponse};
use metrics::{counter, histogram};
use std::time::Instant;

/// Middleware to track HTTP request metrics.
///
/// This middleware records metrics for each HTTP request, including:
/// - Total number of requests (`http_requests_total`)
/// - Request duration in seconds (`http_request_duration_seconds`)
///
/// The metrics are labeled with the HTTP method, request path, and response status code.
///
/// # Arguments
/// * `req` - The incoming HTTP request.
/// * `next` - The next middleware or handler in the chain.
///
/// # Returns
/// The HTTP response after processing the request.
pub async fn track_metrics(req: Request<Body>, next: Next) -> impl IntoResponse {
    // Record the start time of the request.
    let start = Instant::now();

    // Extract the matched path from the request extensions, or fall back to the URI path.
    let path = if let Some(matched_path) = req.extensions().get::<MatchedPath>() {
        matched_path.as_str().to_owned()
    } else {
        req.uri().path().to_owned()
    };

    // Clone the HTTP method for labeling.
    let method = req.method().clone();

    // Pass the request to the next middleware or handler and await the response.
    let response = next.run(req).await;

    // Calculate the request latency in seconds.
    let latency = start.elapsed().as_secs_f64();

    // Get the response status code as a string.
    let status = response.status().as_u16().to_string();

    // Define labels for the metrics.
    let labels = [
        ("method", method.to_string()),
        ("path", path),
        ("status", status),
    ];

    // Increment the counter for total HTTP requests.
    counter!("http_requests_total", &labels).increment(1);

    // Record the request duration in the histogram.
    histogram!("http_request_duration_seconds", &labels).record(latency);

    // Return the response.
    response
}
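
Because the counter! and histogram! macros come from the metrics facade crate, the same pattern works for application-specific metrics anywhere in the app, not just in middleware. As a minimal sketch (the greet handler and the hello_world_greetings_total metric name are purely illustrative, not part of the example project), a handler could record its own counter and it would show up on /metrics alongside the HTTP metrics:

use axum::response::Html;
use metrics::counter;

// Hypothetical handler that records a custom counter each time it runs.
// Anything recorded through the `metrics` macros is picked up by the
// Prometheus recorder installed in setup_metrics_recorder().
async fn greet() -> Html<&'static str> {
    counter!("hello_world_greetings_total", "lang" => "en").increment(1);
    Html("<h1>Hello, World!</h1>")
}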

Now we need to return to main.rs to add the new /metrics route and middleware. I also like to update the console logs so I can jump straight to the endpoints from my shell.

mod metrics_middleware;
mod metrics_router;

use axum::{middleware, response::Html, routing::get, Router};

#[tokio::main]
async fn main() {
    // build our application with a route
    let app = Router::new()
        .route("/", get(handler))
        // apply the metrics middleware before merging the metrics router; layers
        // only wrap the routes added before them, so the /metrics endpoint itself
        // stays out of the recorded metrics
        .route_layer(middleware::from_fn(metrics_middleware::track_metrics))
        // merge the new router with the main router, introducing the /metrics endpoint
        .merge(metrics_router::create_router());

    // run it
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .unwrap();
    println!("listening on http://{}", listener.local_addr().unwrap());
    println!("prometheus metrics at http://{}/metrics", listener.local_addr().unwrap());
    axum::serve(listener, app).await.unwrap();
}

async fn handler() -> Html<&'static str> {
    Html("<h1>Hello, World!</h1>")
}

Finally, update Cargo.toml with the new dependencies

[package]
name = "rust-prometheus-grafana"
version = "0.1.0"
edition = "2024"
publish = false

[dependencies]
axum = "0.8.3"
metrics = "0.24.2"
metrics-exporter-prometheus = "0.17.0"
tokio = { version = "1.44.2", features = ["full"] }

Now test what we’ve got with cargo run. Visiting /metrics will result in a blank page until the main hello-world route is visited. After visiting the home page, our metrics should be visible at /metrics:

# TYPE http_requests_total counter
http_requests_total{method="GET",path="/",status="200"} 1

# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="0.005"} 1
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="0.01"} 1
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="0.025"} 1
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="0.05"} 1
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="0.1"} 1
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="+Inf"} 1
http_request_duration_seconds_sum{method="GET",path="/",status="200"} 0.000207209
http_request_duration_seconds_count{method="GET",path="/",status="200"} 1

Prometheus + Grafana Containers

Both Prometheus and Grafana offer a range of hosting options, but for this tutorial we’ll use a Docker Compose file to run them both locally.

Create a Prometheus configuration file called prometheus.yml at the root of your project and add the following to it:

global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

scrape_configs:
  # Monitor the Axum server
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: hello-world
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: [ "host.docker.internal:3000" ]

  # Configure Prometheus to monitor itself
  - job_name: "prometheus"
    static_configs:
      - targets: [ "localhost:9090" ]

Here we’ve set up Prometheus to scrape the Axum server. Note that host.docker.internal resolves to the host machine on Docker Desktop for macOS and Windows; on Linux you may need to add extra_hosts: ["host.docker.internal:host-gateway"] to the prometheus service in the compose file for it to resolve.

Now create a docker-compose.yaml file at the root to run Prometheus and Grafana. We’ll mount the data folders in each container to Docker volumes that will be created if they do not already exist. We also need to map the Grafana port 3000 to port 3001 on the host, since the Axum hello-world app is already using port 3000.

services:
  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
    volumes:
      - grafana-data:/var/lib/grafana
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
volumes:
  grafana-data:
    external: false
  prometheus-data:
    external: false

Start the containers with docker compose up.

Prometheus

Visit http://localhost:9090/targets. The hello-world and prometheus targets should both be up and green.

On the query tab, we can query different metrics coming from the hello world target. Try a basic query like http_requests_total{job="hello-world"} which will tell us the total requests made by endpoint and method.
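
A couple of other queries worth trying: rate(http_requests_total{job="hello-world"}[5m]) shows the per-second request rate, and since we configured buckets for the duration histogram, histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket{job="hello-world"}[5m]))) gives an approximate 95th-percentile latency.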

Grafana

Next, visit http://localhost:3001. This should bring up Grafana’s login page. The default username and password are admin/admin for a fresh container, but they should be changed if this Grafana instance is hosted anywhere that is publicly accessible.

After logging in, navigate to http://localhost:3001/connections/datasources and add a Prometheus data source with the server URL set to http://prometheus:9090. The host is prometheus because that is the service name within the Docker Compose network. This tutorial ignores authentication, but consider authenticating metrics in both Prometheus and Grafana for any publicly accessible hosted instance.

Now create a dashboard by navigating to the Dashboards page and clicking the + Create Dashboard button.

Click + Add visualization

Choose prometheus as the datasource

Switch the query editor to Code mode, enter http_requests_total, and hit Run queries

We can see the requests made to the hello world service!

By integrating Rust with Axum, Prometheus, and Grafana, you can create a robust web application that not only performs efficiently but is also well-monitored and observable. This setup allows you to gain valuable insights into your application's performance and behavior in real-time, enabling you to make informed decisions and quickly address any issues that arise. With the power of Prometheus for metrics collection and Grafana for visualization, you can ensure your application remains reliable and scalable, providing a seamless experience for users. Whether you're deploying in a cloud environment or managing microservices, this combination of tools offers a comprehensive solution for building and maintaining high-quality web applications.

For the full GitHub repo, visit https://github.com/vicero/rust-prometheus-grafana

Next time we’ll set up an alert monitor in Prometheus!
