Run BGPKIT on Cloudflare Containers

Mingwei Zhang

For the longest time, I’ve been using Cloudflare exclusively for web/API hosting and relatively lightweight tasks. For most of my work on BGP, there is really not much I can accomplish with just JavaScript/TypeScript (except maybe working with the RIS Live WebSocket). The computationally intensive nature of most BGP data processing doesn't naturally fit within the typical Cloudflare developer platform.

This changes with the recent announcement of Cloudflare Containers. In short, it allows developers to build and run custom containers on Cloudflare’s platform, so heavy workloads can sit alongside the other platform primitives under a single, unified deployment.

In this blog post, I will show you how to build a BGP data search API with BGPKIT and deploy it on Cloudflare Containers. The source code is available on GitHub.

BGP Data Search API

For this example, I will show you how to build a very straightforward HTTP API that accepts search parameters, lets BGPKIT fetch and parse BGP archives, and returns the parsed messages.

The API accepts four parameters: collector, prefix, ts_start, and ts_end, to filter and parse BGP archives efficiently.

  • collector: the BGP route collector ID to use (e.g. rrc00 or route-views2). We want to limit the search to a single collector.

  • prefix: the IP prefix the BGP updates should be relevant to. An open-ended search will burn through resources quickly, but you can opt out of this requirement.

  • ts_start and ts_end: the starting and ending timestamps. The goal is to limit the search to a very small number of MRT files (giving them the same value will do). We should probably leave large-scale data crunching to an environment with more CPU power.

The parameters are defined in a struct so they can be passed into an axum “GET” route:

use serde::{Deserialize, Serialize};

// Query parameters accepted by the /search endpoint
#[derive(Deserialize, Serialize)]
struct Params {
    collector: String,
    prefix: String,
    ts_start: String,
    ts_end: String,
}
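
The error-handling branches in the snippets below return a small JSON envelope with error, data, and meta fields. Its exact definition is in the full source; the sketch here is my own rough assumption of its shape (the Meta fields in particular are made up for illustration):

use bgpkit_parser::BgpElem;
use serde::Serialize;

// Sketch of the JSON response wrapper referenced as `Result { .. }` below.
// Note: this shadows std::result::Result inside the module; the real struct
// and its `meta` contents are defined in the linked main.rs.
#[derive(Serialize)]
struct Result {
    error: Option<String>, // set when the broker query or MRT parsing fails
    data: Vec<BgpElem>,    // parsed BGP elements matching the filters
    meta: Option<Meta>,    // optional metadata about the search (assumed shape)
}

#[derive(Serialize)]
struct Meta {
    files_parsed: usize, // hypothetical field: number of MRT files parsed
    duration_ms: u64,    // hypothetical field: wall-clock processing time
}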

With the given parameters, we first find the relevant MRT files by setting the timestamp and collector filters on a BgpkitBroker instance:

        let files = match bgpkit_broker::BgpkitBroker::new()
            .ts_end(ts_end.clone())
            .ts_start(ts_start.clone())
            .collector_id(collector.clone())
            .query(){
            Ok(items) => items,
            Err(e) => {
                return Json(Result {
                    error: Some(e.to_string()),
                    data: vec![],
                    meta: None,
                });
            }
        };
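
One optional refinement, not in the snippet above: the broker query can also be narrowed to updates files only, so that any overlapping (and much larger) RIB dumps are not pulled in. A sketch, assuming the broker's data_type filter with the value "updates":

let broker = bgpkit_broker::BgpkitBroker::new()
    .ts_start(ts_start.clone())
    .ts_end(ts_end.clone())
    .collector_id(collector.clone())
    // skip RIB dumps; only fetch updates files for this time range
    .data_type("updates");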

For each file, we parse the whole MRT file and collect the BGP updates that are relevant to the target prefix:

        for file in files {
            let mut parser = match bgpkit_parser::BgpkitParser::new(file.url.as_str()){
                Ok(parser) => parser,
                Err(e) => {
                    return Json(Result {
                        error: Some(e.to_string()),
                        data: vec![],
                        meta: None,
                    });
                }
            };

            parser = match parser.add_filter("prefix", prefix.as_str()){
                Ok(parser) => parser,
                Err(e) => {
                    return Json(Result {
                        error: Some(e.to_string()),
                        data: vec![],
                        meta: None,
                    });
                }
            };
            items.extend(parser.into_elem_iter());
        }
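
Since a broad prefix or a wide time range can still produce a very large result set, one small safeguard worth considering (my own addition, not part of the original code) is to cap how many elements get collected:

// cap the response size; 100,000 is an arbitrary example limit
const MAX_ELEMS: usize = 100_000;
items.extend(
    parser
        .into_elem_iter()
        .take(MAX_ELEMS.saturating_sub(items.len())),
);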

Because the BGPKIT parser and broker are synchronous (blocking) code, we need to wrap the code above in a blocking task in order to use it from an async web framework like axum.

let result = tokio::task::spawn_blocking(move || {
   // THE BLOCKING CODE PIECES
}).await.unwrap();
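
Putting it together, the handler and server setup might look roughly like this (a sketch assuming a recent axum; run_search is a hypothetical helper standing in for the blocking broker and parser code above, and Params and Result are the structs shown earlier):

use axum::{extract::Query, routing::get, Json, Router};

// Sketch only: `run_search(params) -> Result` is a hypothetical helper that
// contains the blocking broker query and MRT parsing shown above.
async fn search(Query(params): Query<Params>) -> Json<Result> {
    let result = tokio::task::spawn_blocking(move || run_search(params))
        .await
        .expect("blocking search task panicked");
    Json(result)
}

#[tokio::main]
async fn main() {
    // Listen on port 3000, matching EXPOSE 3000 in the Dockerfile below and
    // `defaultPort = 3000` in the Worker-side Container class.
    let app = Router::new().route("/search", get(search));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}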

Please see the full source code here for more details:

https://github.com/bgpkit/bgpkit-cf-containers/blob/main/container-src/src/main.rs

Cloudflare Containers Deployment

Now that we have the Rust-based API working, we need to (1) containerize the code and (2) put a Cloudflare Containers wrapper around it for deployment.

The container definition is a typical two-stage build, with a builder stage to compile the binary and a minimal runtime stage to run the executable. The two-stage build is almost necessary, as Cloudflare Containers has limits on the size of each image and the total image storage per account; the smaller the image, the better.

# ---- Build Stage ----
FROM rust:1.86 AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y pkg-config libssl-dev

# Build application
COPY container-src/Cargo.lock container-src/Cargo.toml ./
COPY container-src/src ./src
RUN cargo build --release

# ---- Runtime Stage ----
FROM debian:bookworm-slim

# Install minimal runtime dependencies
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy the compiled binary from the builder
COPY --from=builder /app/target/release/bgpkit-cf-container /app/bgpkit-cf-container

EXPOSE 3000

CMD ["/app/bgpkit-cf-container"]

The rest of the task is to build an API app for Workers and configure Containers. The following config is pretty much all it takes to have the Workers script build and push the container image, and to create a Durable Object that coordinates and runs the Containers.

    "containers": [
        {
            "class_name": "BgpkitContainer",
            "image": "./Dockerfile",
            "max_instances": 5
        }
    ],
    "durable_objects": {
        "bindings": [
            {
                "class_name": "BgpkitContainer",
                "name": "BGPKIT_CONTAINER"
            }
        ]
    },
    "migrations": [
        {
            "new_sqlite_classes": [
                "BgpkitContainer"
            ],
            "tag": "v1"
        }
    ]

The main Workers script is only 22 lines long:

import { Container, getContainer } from '@cloudflare/containers';
import { Hono } from "hono";

export class BgpkitContainer extends Container {
    defaultPort = 3000;
    sleepAfter = '5m';
}

// Create Hono app with proper typing for Cloudflare Workers
const app = new Hono<{
    Bindings: { BGPKIT_CONTAINER: DurableObjectNamespace<BgpkitContainer> };
}>();

app.get("/search", async (c) => {
    if (!c.req.query('collector') || !c.req.query('prefix') || !c.req.query('ts_start') || !c.req.query('ts_end')) {
        return c.json({ error: "Missing required query parameters: collector, prefix, ts_start, ts_end" }, 400);
    }
    const container = getContainer(c.env.BGPKIT_CONTAINER);
    return await container.fetch(c.req.raw);
});

export default app;

The important pieces are:

  • the class BgpkitContainer extends Container block defines the port to use and configures how long the container should keep running after the last request. In this example, containers will be shut down after 5 minutes of inactivity. It is crucial to realize that Cloudflare Containers are not a drop-in replacement for other container deployment platforms like fly.io or Railway: the workloads for Containers are intended to be short-lived (ping me if this changes) and to scale horizontally with the amount of requests.

  • the getContainer function here tries to reach a container. If the intended container is overloaded, it may create a new container on demand. You may choose to use the getRandom function to round-robin containers. See the docs for more.

  • the container.fetch(c.req.raw) forwards the query to the container, including the query parameters, which will then be handled by the running container.

Example Queries

The following example will reach the Workers script (handled by Hono), which then reaches the container to run the actual BGP data crunching task. (This URL won’t actually work, as we don’t have the budget to provide such a service openly. Feel free to deploy it on your own account to try it out.)
https://EXAMPLE.bgpkit.workers.dev/search?collector=rrc00&prefix=1.1.1.0/24&ts_start=1751231488&ts_end=1751231488
