From Duplication to Unification: How Cloudflare Workers Helped Us Centralize Shared Logic

Sushant Gulati
6 min read

Problem

In our codebase, we had a very specific parser responsible for converting meta JSON created on the front-end into a format ingestible by our backend. Until recently, this parser was used exclusively by the frontend. While building our AI suite, however, a new use case emerged: the backend needed the same utility. Since the parser undergoes frequent updates, maintaining separate copies of the same logic in different languages would be a maintenance nightmare - error-prone, and a source of unnecessary dependencies between teams.

At Certa, we believe in building scalable processes - inside & out. So, we decided to build a centralized solution.

The Ideal Solution

The ideal solution to this problem would have the following characteristics:

  • Seamless Integration: It had to integrate smoothly with our existing codebases, minimizing the need for extensive restructuring or managing multiple repositories.

  • Maintainability: Easy to maintain and automate, allowing for straightforward updates and integration with deployment pipelines.

  • Scalability: The solution had to scale effortlessly to accommodate fluctuations in demand, ensuring consistent performance under varying loads.

  • Low Latency: Every millisecond counts in delivering a seamless UX, so low latency was paramount - both for optimal performance and for maintaining our high standards of customer satisfaction.

  • Cost-Effectiveness: The solution needed to be cost-effective, providing a balance between performance and expense.

With these characteristics identified and a bit of research, we arrived at three potential solutions:

  1. Shared Library

    Initially, publishing a library on NPM seemed like a straightforward solution. However, our backend is driven by Django, which presented significant hurdles. While workarounds such as wrappers or transpilers exist, they introduce unnecessary complexity. Although we could automate some of this through a CI/CD pipeline, the compatibility issues remained a concern.

  2. WASM Module

    To address the language compatibility issue, we considered rewriting the parser in AssemblyScript. A CI/CD pipeline could then compile it into a WASM module and upload it to a CDN. But this solution had challenges of its own, the most notable being the latency introduced when passing large payloads from JavaScript to WASM (and vice versa) - noticeably worse than plain function calls to the same parser within JavaScript (see the sketch after this list).

  3. Serverless Functions

    Of the three, serverless functions such as AWS Lambda seemed the best fit for our use case. The code could live in our frontend repository and be redeployed whenever it changed and was pushed. Serverless functions provided several benefits:

    • Automatic Scaling based on demand

    • Cost-efficient operation, especially for tasks that are not constantly running

    • An option to deploy at the edge for ultra-low latency in all regions around the world

However, there's still the issue of cold starts. A “cold start” refers to the duration required to initialize and execute a fresh instance of a serverless function. The delay caused by cold starts, especially in scenarios with infrequent invocations, could impact user experience.
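
To make the JS-to-WASM boundary cost from option 2 concrete, here's a minimal sketch of what calling a CDN-hosted WASM build of the parser could look like (the URL and the module's exports are hypothetical):

const metaJson = { /* large meta JSON payload */ };

// Fetch and instantiate the hypothetical WASM build of the parser
const wasmBytes = await fetch("https://cdn.example.com/parser.wasm")
  .then((res) => res.arrayBuffer());
const { instance } = await WebAssembly.instantiate(wasmBytes, {});
const { memory, alloc, parse } = instance.exports as {
  memory: WebAssembly.Memory;
  alloc: (len: number) => number; // hypothetical allocator export
  parse: (ptr: number, len: number) => number;
};

// Every call pays to serialize the payload and copy it into WASM linear
// memory (and to copy the result back out); with multi-megabyte payloads,
// this copying is the dominant extra cost.
const input = new TextEncoder().encode(JSON.stringify(metaJson)); // copy #1
const ptr = alloc(input.length);
new Uint8Array(memory.buffer, ptr, input.length).set(input); // copy #2
parse(ptr, input.length); // the result is copied back out the same way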

Solution: Cloudflare Workers

While exploring our serverless options, Cloudflare Workers emerged as the ideal solution for our use case, meeting all our criteria. With its 0ms cold starts, this edge service lets our backend microservices access the transformation code almost instantaneously. It's inspiring to read how they've used V8 isolates instead of containers to eliminate cold starts entirely.

How did we use it?

Initial Setup

  1. We started off by adding a new package to our front-end monorepo, as this code clearly needed to be independent of the rest of our front-end logic.

  2. Within this package, we initialized a Cloudflare Workers project and moved the parser's logic into it, exposing an endpoint accessible to external services.

  3. To keep the code clean and maintainable, we used the itty-router library, which offered a more elegant solution than nested if-else statements in the Worker's fetch event handler.

  4. Deployment was done using the wrangler deploy command - the wrangler CLI was installed when we initialized the Cloudflare Workers project.

    It looked something like this:

     import type { IRequest } from "itty-router";
     import { Router, json, error, createCors } from "itty-router";
    
     const router = Router({ base: "/api" });
    
     const { preflight, corsify } = createCors({
       origins: ["*"],
       methods: ["POST", "OPTIONS"]
     });
    
     router
       // embedding preflight upstream to handle all OPTIONS requests
       .all("*", preflight)
       // yourTransformController runs the shared parser (sketched below)
       .post("/transform/", yourTransformController)
       .all("*", () => error(404, "Invalid endpoint"));
    
     export default {
       fetch: (req: IRequest) =>
         router
           .handle(req)
           .then(json)
           .catch(error)
           // corsify all Responses (including errors)
           .then(corsify)
     };
    
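    For reference, yourTransformController above is just a handler that runs the shared parser on the request body. A minimal sketch, assuming a hypothetical parseMetaJson function in place of our real parser:

     import type { IRequest } from "itty-router";

     // Placeholder for the real shared parser (the actual module and name differ)
     const parseMetaJson = (input: unknown) => ({ transformed: input });

     const yourTransformController = async (req: IRequest) => {
       const metaJson = await req.json(); // meta JSON posted by the caller
       // the router's .then(json) wraps this return value in a JSON Response
       return parseMetaJson(metaJson);
     };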

Benchmarks

After completing the initial setup and deployment of the parser on Cloudflare Workers, we conducted testing to verify its functionality. Our tests focused on assessing the round-trip time (RTT).

These tests were performed on a MacBook M1 Pro with a high-speed internet connection over WiFi.

Payload size | Local function invocation | Processing time (remote) | Remote round-trip time
100KB        | 3.04ms                    | 3.09ms                   | 207ms
1.5MB        | 22.06ms                   | 21.98ms                  | 1511ms
9MB          | 113.75ms                  | 115.11ms                 | 5217ms
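
An RTT measurement like the ones above can be captured with a simple timer around fetch; a minimal sketch (the Worker URL is a placeholder):

const payload = JSON.stringify({ /* meta JSON of the target size */ });

const start = performance.now();
const res = await fetch("https://parser.example.workers.dev/api/transform/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: payload,
});
await res.json(); // count response download and parsing toward the RTT
console.log(`RTT: ${(performance.now() - start).toFixed(2)}ms`);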

Results

Our test results showed that the parser was performing well within acceptable limits. Notably, the benchmarks involved larger-than-usual payload sizes of 1.5MB and 9MB. For typical payloads (less than 100KB), the round-trip time (RTT) was consistently around 200ms.

Given these outcomes, we decided to proceed with this solution.

Finally, setting up CI/CD

The next step was to set up a CI/CD pipeline to ensure that any changes made to the parser would be automatically deployed on Cloudflare Workers. Deploying our project was straightforward using Cloudflare's GitHub Action, cloudflare/wrangler-action@v3.

To optimize the deployment process, we made a key modification. We configured the GitHub Action to trigger only when changes were made to the specific folder containing our parser. This targeted approach prevents unnecessary deployments and ensures that the action only runs when there are relevant updates.

on:
  push:
    branches:
      - development
    paths:
      # Only run the workflow when changes are made to our code
      - "src/parser/*"

..
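
The elided deploy job itself follows the standard usage from the wrangler-action README; a minimal sketch, assuming the Worker lives in src/parser and the API token is stored as a repository secret:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy parser to Cloudflare Workers
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          workingDirectory: "src/parser"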

Conclusion: How did we benefit from this?

  • Minimal Latency: For typical use cases, transforming and returning data takes around 200ms round trip on a decent network, complemented by 0ms cold starts - giving consistently fast response times for all requests.

  • Reduced Maintenance: By utilizing a single codebase, we've significantly reduced maintenance effort. Maintaining two copies of the parser wouldn't just increase complexity - it would add coordination overhead between teams every time the parser changed.

  • Scalability: The system seamlessly handles traffic spikes without any degradation in performance, and deployment across 200+ cities ensures that we can provide consistent service to users globally.

  • Cost Savings: Unlike most serverless options, Cloudflare Workers charges based on CPU time rather than total execution duration (which would also bill for idle time spent waiting on I/O). For our expected usage of 20 million requests and 100 million CPU milliseconds per month, the expected monthly cost is only $9.40 - significantly lower than the alternatives.
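
For reference, that figure lines up with the Workers Paid plan pricing at the time of writing: a $5/month base that includes 10 million requests and 30 million CPU-ms, plus $0.30 per additional million requests and $0.02 per additional million CPU-ms. That works out to $5 + (10 × $0.30) + (70 × $0.02) = $9.40 per month.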

We are hiring!

If solving challenging problems at scale in a fully remote team interests you, head to our careers page and apply for the position of your liking!
