Til 2025-03-28

  • Finally, my real-time notification system works, thanks to Hasura. It took me only 30 minutes or so for all the backend setup.

  • Uber finds the nearest driver not only based on latitude and longitude; they have an indexed grid of locations. Each grid cell is a hexagon, because that shape makes it more efficient to calculate the distance between 2 cells. This is brilliant 😃 (see the H3 sketch after this list). Reference: https://www.uber.com/en-VN/blog/h3/

  • Hashnode doesn’t support real-time sync across devices, but Notion does. Notion doesn’t provide a GraphQL API though, and that sucks.

  • Hono is a web framework that Cloudflare uses; I found it while reading through an example of authentication and authorization with GitHub. It’s not only about the performance (see the benchmark); the most important part is zero dependencies, using only the Web Standards. Isolation is the top crucial design principle to me. And it runs on all environments including Cloudflare Workers, and on different JavaScript runtimes including Deno and Bun. Hono ships several routers; the fastest one is RegExpRouter. Thanks to that, I now know that Express uses a library called path-to-regexp which turns a path string such as /user/:name into a regular expression! I was a bit confused about what to use for my backend routers at first, but a few seconds later it was clear. As long as we don’t use REST, there’s none of that pain with tons of endpoints and an awful structure. All I have in my backend are webhooks, nothing else (see the sketch after this list). More interestingly, Hono’s initial design was NOT for Node.js. It also supports Zod validation. I like the Zod validation concept, but it has been painful to work with elsewhere, and here it’s just as straightforward as it should be.

  • There is an AI that creates videos; see this incredible one.

  • Infrastructure engineers use Bash for everything. First principles.

    This is funny :D but true indeed

  • The maximum table size allowed in PostgreSQL is 32TB. There is an open-source Postgres extension called Citus for scaling big Postgres deployments. Good to know.

  • One interesting moment I had today: AI sucks at Rails migrations when I switch between git branches. It destroyed all the data in the database while trying to resolve the inconsistent state between the current database and the Rails migration status. Great!

  • DeepSeek R1 in Trae works so well! But Sonnet 3.7 is better.
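
A quick sketch of the hexagonal grid idea from the Uber bullet, using the h3-js library (the function names are from h3-js v4; the coordinates and resolution are illustrative):

// npm install h3-js
import { latLngToCell, gridDisk, gridDistance } from "h3-js";

// Index a rider's location into a hexagonal cell (resolution 9 is roughly city-block sized).
const riderCell = latLngToCell(10.7769, 106.7009, 9);

// Candidate drivers: everyone indexed in the rider's cell plus 2 "rings" of neighbor cells.
const candidateCells = gridDisk(riderCell, 2);

// Cell-to-cell distance is a cheap integer hop count, no haversine math needed.
const driverCell = latLngToCell(10.7805, 106.699, 9);
console.log(candidateCells.length, gridDistance(riderCell, driverCell));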
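
And here is the kind of webhook-only Hono backend I mean, with Zod validation wired in (a minimal sketch; the route path and payload shape are placeholders):

// npm install hono zod @hono/zod-validator
import { Hono } from "hono";
import { z } from "zod";
import { zValidator } from "@hono/zod-validator";

const app = new Hono();

// Hypothetical webhook payload; adjust the schema to the real event.
const videoReadySchema = z.object({
  event: z.literal("video-ready"),
  entityId: z.string().uuid(),
});

// One endpoint, validated before the handler ever runs.
app.post("/webhooks/video", zValidator("json", videoReadySchema), (c) => {
  const payload = c.req.valid("json"); // fully typed at this point
  return c.json({ received: payload.entityId });
});

export default app; // the same app runs on Cloudflare Workers, Deno, Bun, or Node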


Cloudflare MCP servers

Cloudflare now supports MCP servers on Cloudflare Workers. Until today, from what I knew, to interact with MCP servers we had to “install” them (download or whatever we call it) on our local machine and tell the MCP clients (Claude Desktop, IDEs like Cursor, Windsurf, Trae, etc.) to talk to them via a specific configuration according to the MCP spec.

Remote is always better! Thanks to the streamable HTTP transport, instead of installing software like it’s the 1990s, we now have a cloud version of MCP servers.

What’s the difference, and WHY is remote better?

Okay, from the beginning of MCP’s development, it used stdio (standard input and output). In simple words, it uses our local machine’s terminal and goes through shells to execute commands to talk with MCP servers, and it’s fucking safe because it’s our local machine. But in the internet world there are more risks, and traditional attack mechanisms like man-in-the-middle are still out there. There is no safer place than 127.0.0.1. Still, I don’t want to install anything on any of my devices, and I don’t understand why I should have to. WHY do we need to install something just to listen to music or watch videos, what the fuck? This is 2025; Steve Jobs said “the Internet in your pocket” more than 10 years ago.
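
To make the contrast concrete, here is roughly what the two setups look like in an MCP client config (a sketch in the Claude Desktop style; the server names and URL are placeholders, and mcp-remote is the small proxy that Cloudflare’s examples use for clients that only speak stdio):

{
  "mcpServers": {
    "local-server": {
      "command": "npx",
      "args": ["-y", "some-local-mcp-server"] // spawned on MY machine, talks over stdio
    },
    "remote-server": {
      "command": "npx",
      "args": ["mcp-remote", "https://my-mcp.example.workers.dev/sse"] // nothing to install but a URL
    }
  }
}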

Authentication and authorization

Authentication and authorization have never been an easy problem. Many tools and platforms abstract the most complicated parts for us. Along with this remote MCP server support, Cloudflare also offers a solution for authentication and authorization on top of the OAuth protocol. The entire process was pretty straightforward and familiar to me, just like what I did with Auth0 and my Hasura. Cloudflare introduced a TypeScript library that abstracts this shit into a simple API, so we can work with it and stay spec-compliant, which is the most important thing in authentication and authorization.

That library’s API was clear, since the minimal stuff I need from an authentication and authorization library is just the endpoints of the authentication server and a “handler”. When a library is designed in this plug-and-play style, we get the advantage of no vendor lock-in and are free to choose our own authentication provider.
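
As a sketch of that plug-and-play shape (this follows Cloudflare’s workers-oauth-provider examples; treat the exact option names as my assumption, and MyMcpHandler / GitHubAuthHandler are placeholders for your own handlers):

import OAuthProvider from "@cloudflare/workers-oauth-provider";
import { MyMcpHandler } from "./mcp";       // placeholder: the actual MCP API
import { GitHubAuthHandler } from "./auth"; // placeholder: swap in any provider here

// The Worker entry point: the library owns the OAuth endpoints and the
// spec compliance; I only plug in the API route and my two handlers.
export default new OAuthProvider({
  apiRoute: "/sse",                  // where MCP clients connect
  apiHandler: MyMcpHandler,
  defaultHandler: GitHubAuthHandler, // no vendor lock-in on the provider side
  authorizeEndpoint: "/authorize",
  tokenEndpoint: "/token",
  clientRegistrationEndpoint: "/register",
});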

There is an interesting design here: the MCP server issues its own token rather than just passing through the token from the identity provider. This is important, and I hadn’t thought about it before. Take an example like this: you sign in with Google, chat through an MCP client, and ask for some information from your Gmail account. If the MCP server just passed along the OAuth token from the Google identity provider, then when that token is compromised, the attacker gets all the power of that token, with no restriction to what the MCP server actually needs. That’s why the MCP server issues its own token with very limited permissions; if that token is compromised, the attacker can only get the information that token has permission to access.
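
A tiny illustration of that idea (the helper and the storage here are entirely hypothetical, the real library handles this for you): the upstream Google token stays on the server, and the client only ever sees a narrow token minted by the MCP server.

// Hypothetical sketch of the token-minting idea, not a real API.
declare const tokenStore: { put(key: string, value: unknown): Promise<void> }; // e.g. a KV namespace

async function mintMcpToken(googleAccessToken: string): Promise<string> {
  const mcpToken = crypto.randomUUID(); // the only token the client ever sees

  // Keep the mapping server-side, with a narrow scope and a short TTL,
  // so a leaked mcpToken never exposes the Google token itself.
  await tokenStore.put(mcpToken, {
    scope: ["gmail.read"],               // far less than what Google granted
    upstream: googleAccessToken,         // never leaves the server
    expiresAt: Date.now() + 15 * 60_000, // 15 minutes
  });

  return mcpToken;
}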

Cloudflare Durable Objects

I was immediately impressed by Cloudflare Durable Objects because of this:

Each Durable Object has a globally-unique name

This is incredible, and because it’s from Cloudflare, it’s global; everything in Cloudflare is global. This deserves its own section, because uniqueness is the most difficult problem in computer science to me. Every problem in computer science comes down to this question: how do you make an entity unique?

The fact is, a Durable Object is a Cloudflare Worker! In simple words, it’s a server. Cloudflare uses it to support server-sent events (SSE), keeping a long-lived connection with the MCP clients for real-time features. This is a stateful server.
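
A minimal sketch of that addressing model (this is the standard Durable Objects API; the class and binding names are mine, and the types come ambiently from @cloudflare/workers-types):

export interface Env {
  MCP_SESSION: DurableObjectNamespace; // bound to the class below via wrangler config
}

// The Worker: every request for the same name reaches the same object, worldwide.
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const id = env.MCP_SESSION.idFromName("user-123-session"); // the globally-unique name
    const stub = env.MCP_SESSION.get(id);
    return stub.fetch(request); // handled by the one stateful instance
  },
};

// The Durable Object itself: in-memory state survives across requests.
export class McpSession {
  private requestCount = 0;

  constructor(private state: DurableObjectState) {}

  async fetch(_request: Request): Promise<Response> {
    this.requestCount++; // same instance every time, so this simply accumulates
    return new Response(`request #${this.requestCount} for this session`);
  }
}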


WunderGraph and its story

WunderGraph has raised $7.5 million in a Series A round of funding! I have always believed GraphQL federation is the future of APIs. And this job is interesting.

The interesting part is from this blog.

Before that, I learned a cool new thing: dbmate. It’s a framework-agnostic database migration tool! This is a killer for traditional, obsolete backend frameworks like PHP or even RoR. I wish I had known it sooner.
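
The whole dbmate workflow is just a few commands (these are real dbmate subcommands; the migration name is an example):

dbmate new add_notifications_table  # generates db/migrations/<timestamp>_add_notifications_table.sql
dbmate up                           # apply pending migrations (creates the database if needed)
dbmate rollback                     # roll back the most recent migration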

Back to the GraphQL usage monitoring: we want an architecture that can answer questions like which clients are using which queries, and which parts of them. This need comes from the flexibility of GraphQL.

They built a service to analyze the request schema and send it to ClickHouse. At first it was simple, just pushing everything into ClickHouse, until one day performance became a problem and they brought in a queue. Thanks to that, I know that even the fastest databases like ClickHouse still suffer under many small write operations :)))

Their solution for that is ClickPipes, which is awesome.

The average latency from ingestion to availability in ClickHouse is now under 10 milliseconds.

This is incredibly fast.


Testing with Tinybird

I learned about Tinybird recently. It introduces some very useful tools for working with data, bigger data, and very big data from the command line, and a better way to mock data for testing.

In testing, the biggest difficulty is randomness, and we always try to make mock data resemble production data as much as possible. At some point I’m not even sure which environment has more dirty data, development or production. But reliable testing, to me, should cover as much as possible, even some unrealistic scenarios. I don’t think that’s unnecessary. When some part of the system goes down (this is not a fairy tale, it’s a true story), the data becomes more unreliable and inconsistent. I don’t know a good way to monitor this yet, at least for now, so there is something I can learn from here. Good thing!

With Tinybird, mocking test data takes just these commands:

tb mock # Mock data
tb test # Run the test

And surely we want to run this on a CI server, not just on our local machines. The good point is that I don’t have to spend time writing scripts to generate data that makes sense for testing.


Hasura relationships

It’s awesome to learn that we can create multiple relationships, which gives a table something similar to Rails’s polymorphic concept. For example, with this notifications table I have 2 fields, entity_id and entity_type. Then I can add object relationships to the videos table and the posts table, and the result looks like this:

{
  "data": {
    "notifications": [
      {
        "id": "25e716ec-4396-4419-aead-77fd61583b28",
        "entityId": "0cc1aa5b-9a0e-4a3e-93a3-b8e26ad75dd1",
        "entityType": "video",
        "type": "video-ready",
        "readAt": null,
        "link": null,
        "metadata": null,
        "post": null, // id doesn't match any post
        "video": {
          "id": "0cc1aa5b-9a0e-4a3e-93a3-b8e26ad75dd1" // id of expected video record
        }
      }
    ]
  }
}

This is awesome and proves Hasura’s flexibility, since I don’t need a junction table. And with this, we can get all the data in a single request, which is the whole point of GraphQL.
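
For reference, the query behind that response is plain GraphQL with both object relationships selected side by side (a sketch; the relationship names follow the JSON above):

query Notifications {
  notifications {
    id
    entityId
    entityType
    type
    readAt
    link
    metadata
    post {   # null unless entity_id matches a post
      id
    }
    video {  # null unless entity_id matches a video
      id
    }
  }
}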
