NoGQL Web Apps

Ryan Howard

What the heck is NoGQL?

"Surely it's a typo. He probably means NoSQL, right?"

It's not a typo, and apologies if you were hoping this was going to be a DB discussion. What I mean by NoGQL is any modern web application that doesn't use GraphQL or any other GQL variant to retrieve its data. In other words, I'm referring to any browser-based app that fetches data from a web server the old-fashioned way: by explicitly calling endpoints in code—typically REST APIs that return JSON—using a client library or the native Fetch API.

So, is it bad to be NoGQL?

Does being NoGQL mean your frontend application architecture is seriously behind the times? No doubt there are those who think so, but allow me to opine that nothing could be further from the truth. Like all major technology decisions, it depends on a plethora of factors that must be considered, weighed, and prioritized. This article will not get into those decision points—maybe in a future post if there's interest—but for now I'm addressing teams whose stacks are already NoGQL and will remain so for the foreseeable future.

And for good measure, we'll throw seed-stage startups into the target mix, since setting up a GQL infrastructure layer is rarely at the forefront when you're incubating backend-driven intellectual property. Now that we've established this article's intended audience and set the record straight that NoGQL is a-okay, let's dig in a little.

Is the Fetch API good enough on its own?

That's an excellent question. The Fetch API is certainly robust and has come a long way since it evolved from XHR; its standardization and native availability in all major browsers are an obvious testament to this fact, with origins that trace back to old-school XML remoting. I'm long enough in the tooth to boast (or accidentally let slip) that back in 2001, my team shipped one of the first enterprise-level, browser-based apps of its kind, one that ran in Internet Explorer and fetched its data exclusively via DHTML/JScript/WSH calls to remote XML APIs. Alas, I digress...but the point is that we have it pretty good nowadays when it comes to building data-driven apps in the browser.

However, we must consider the complexity of today's single-page applications and the modern rendering libraries they're built with. With that complexity come some elephants in the room that we should address.

Throttling

You may be thinking, "What does server-side throttling have to do with this?" The answer is nothing and, at the same time, everything. I'm referring to throttling on the frontend, and if you don't have API throttling on the backend, then considering it on the frontend is even more important.

Today's UI rendering frameworks utilize virtual DOM technology that manages performance within render loops, which makes it hard to accurately estimate or control how often code execution will repeat in the context of a view. Take function components in React as an example (which I believe are awesome for many reasons): if you want to call a data API within a function component, you must be super careful, otherwise the rendering loop will cause multiple duplicate API requests to execute.
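To make the pitfall concrete, here's a deliberately broken, hypothetical React function component (the component and endpoint are invented for illustration). Because the fetch call sits directly in the component body, every render fires another request, and the state update triggers yet another render:

import { useState } from 'react';

// BROKEN on purpose: the fetch runs on every single render.
function UserList() {
  const [users, setUsers] = useState<string[]>([]);

  // Each render fires a new request, and setUsers causes a re-render,
  // so this loops indefinitely, hammering the API with duplicate calls.
  fetch('/api/users')
    .then((res) => res.json())
    .then((body) => setUsers(body));

  return <ul>{users.map((u) => <li key={u}>{u}</li>)}</ul>;
}

Moving the call into useEffect with a dependency array helps, but even then, two components mounting at once (or React 18's StrictMode double-invoking effects in development) can still issue duplicate requests.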

You have several options available to mitigate this risk, but they must all ensure that duplicate API calls do not occur within a given retrieval context or within a time-to-live (TTL) window. This is what I mean by frontend throttling, which can come in the form of raw throttling (just brute force not allowing duplicate calls) but can also be handled by some sort of caching strategy, which is our next topic.
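Before we get to caching, here's a minimal sketch of raw throttling via in-flight request deduplication (the names are mine, not from any particular library):

// Pending requests keyed by URL; duplicate callers share one promise.
const inflight = new Map<string, Promise<unknown>>();

async function throttledGetJson<T>(url: string): Promise<T> {
  const pending = inflight.get(url);
  if (pending) {
    return pending as Promise<T>; // duplicate call: reuse the in-flight request
  }
  const request = fetch(url)
    .then((res) => res.json() as Promise<T>)
    .finally(() => inflight.delete(url)); // allow a fresh call once this settles
  inflight.set(url, request);
  return request;
}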

Caching

Once again, we're dealing with an overloaded term, but here I'm referring to the caching of "API data" on the frontend (i.e. in the context of the browser). A good caching strategy will naturally throttle calls because it returns the cached data instead of making the actual outgoing call. Like raw throttling, there are many options available out there, but what matters most before deciding the how is first deciding the what, which in this case is your call frequency strategy.
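To illustrate, here's a minimal sketch of a TTL-based cache layered over fetch (the names and the five-second default are hypothetical):

interface CacheEntry {
  value: unknown;
  expiresAt: number;
}

const cache = new Map<string, CacheEntry>();

async function cachedGetJson<T>(url: string, ttlMs = 5000): Promise<T> {
  const hit = cache.get(url);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T; // still fresh: no outgoing call at all
  }
  const value = (await (await fetch(url)).json()) as T;
  cache.set(url, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

Note that this naive version can still fire duplicates while the first request is in flight; a real client would combine it with the in-flight deduplication shown earlier.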

Call Frequency Strategy

"So, when should I throttle and when should I cache? Should I just always cache and not worry with raw throttling? Is there any scenario where I just don't need to worry about either?"

All good questions. Let me give you my opinionated answer via a simple conditional construct (with a code sketch after the list):

  • If the data comes from a GET request and is primarily used by only one view/component at a time within your application AND it is not utilized/needed anywhere else within a reasonable timeframe (say five seconds for argument's sake), then I'd suggest using a throttle-based frequency strategy.

  • If the data comes from a GET request and is concurrently utilized in multiple places in your application within that same reasonable timeframe, then I'd suggest using a cache-based frequency strategy.

  • Finally, if the endpoint call is something other than a GET request, then always use a throttle strategy.
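For illustration only, that decision logic might look like this (the function and parameter names are hypothetical):

type FrequencyStrategy = 'throttle' | 'cache';

function chooseStrategy(method: string, sharedAcrossViews: boolean): FrequencyStrategy {
  if (method.toUpperCase() !== 'GET') {
    return 'throttle'; // non-GET calls: always throttle, never cache
  }
  // GET data used concurrently in multiple views gets cached;
  // single-view data just gets throttled.
  return sharedAcrossViews ? 'cache' : 'throttle';
}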

Note that in my opinion above, there's never a scenario where it's okay to opt out of some sort of call frequency strategy. I don't think you should ever assume that backends will suss out duplicate API operations; even if you're a full-stacker who also maintains the API codebase, later down the road your backend may not always ensure idempotency. In other words, at the very least, ensure your calls on the frontend are somehow throttled.

The only scenario in which I would consider bypassing this design philosophy is an application so dead-simple that any call frequency strategy would amount to over-engineering. Personally, I have not encountered that exception in the last five years of my SPA-focused experience, but as in all things, there are always exceptions to the rule.

State Interpolation

Another aspect of modern frontend data flows where fetch() doesn't do much for us is state interpolation. Most—if not all—NoGQL apps must deal with model munging: translating data from app state to API state and vice versa. Just as a good router knows how to deal with templated paths containing path/query params, so should a good frontend API client.

Automated state interpolation not only keeps your code DRYer, but it also makes endpoint consumption in the frontend much easier while practicing good separation of concerns. The last thing you want is to review code and see patterns like endpoint manipulation anywhere close to the UI code. Tsk-tsk.

We mentioned the importance of templated endpoint URLs, but even more important is the ability to deal with the actual data-model mapping. Backend data models hardly ever match frontend state models exactly. This is one thing that GraphQL really shines at addressing, but since we're NoGQL, model mapping is one of those elephants in the room to discuss.

It would be hard to argue against keeping all model-mapping logic in one place, so why not add this to the concerns of the API client? I'm of the opinion that it's a great fit, so I strive for my API clients to have mechanisms built in that seamlessly deal with model translation. This is especially important if you take a statically typed approach to your data (e.g. TypeScript). So, we can add state interpolation to our API client requirements, since it's something fetch doesn't help with beyond parsing the JSON response body.
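As a sketch of what I mean (the endpoint template, models, and helpers below are all invented for illustration), an API client can interpolate path params from app state and translate the backend model into the frontend one in a single, reusable place:

// Backend model vs. frontend state model rarely match exactly.
interface ApiUser { user_id: string; full_name: string; }
interface User { id: string; name: string; }

// Interpolate ':param' placeholders in a templated endpoint, e.g. '/users/:id'.
function interpolate(template: string, params: Record<string, string>): string {
  return template.replace(/:(\w+)/g, (_match, key) => encodeURIComponent(params[key]));
}

async function getUser(id: string): Promise<User> {
  const res = await fetch(interpolate('/users/:id', { id }));
  const raw = (await res.json()) as ApiUser;
  return { id: raw.user_id, name: raw.full_name }; // API model -> state model
}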

Result Mapping

In the same vein as state interpolation is result mapping. During my long-lived career of writing software, I spent about half of my time heads-down in the backend doing a lot of API development. I can tell you that on some of the best API project teams I've worked on, one of the first things we did was standardize on how we return data to clients across all our endpoints, which we used to call our "response envelope".

I've seen many (too many) APIs that just fall back on basic HTTP standards and depend on status codes and raw errors/data stuffed directly into the root of the response body. In my opinion, it is far better for an API to return a response envelope that carries metadata along with the requested data. I tend to think of HTTP status codes as belonging to a network protocol, certainly not to an application protocol, so I preach not to lean too hard on them on the application side of things. They're still invaluable for many reasons, and proper status codes should be adhered to, especially for instrumentation, but at the same time, let's give the frontend a little love.

And likewise, the frontend—in my opinion—should apply the same "envelope" pattern: standardizing on a single root meta-model format into which response results are always translated. Standardizing on this makes dealing with API interactions a pleasure in frontend code, especially when you need to provide proper feedback to a user in response to a request. As for HTTP status codes, I believe abstracting network status codes away from UI code is a good thing, which makes this yet another excellent use case for a good API client.

"Okay, what should go into this so-called root response meta-model?"

Well, the answer to that is up to you and your team, but I'm happy to share an example for clarity:

{
    "data": ["item1", "item2", "item3"],
    "error": "",
    "nextPageToken": "5b8924f2-5b46-4a88-86bf-c6b88f1135fa",
    "prevPageToken": "87987605-7e1e-4e9e-81ee-27182518aa53"
}

As you can see, the above response envelope is super simple while also being super powerful, because it creates a consistent and dependable structure to work with, regardless of how the backend API's response may vary from endpoint to endpoint.

You'll notice that apart from the self-explanatory data and error properties, we also have pagination tokens. This is where the response envelope will depend heavily on your backend design and what you need. These could be skip-based pagination properties instead of token-based, or an optional combination of both. There could also be other metadata you'd like to return, such as a correlation ID. It's for you and your team to decide, but you'll probably standardize on the primary data and error properties, much as GraphQL typically does.
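In TypeScript terms, that envelope might translate into a single generic meta-model on the frontend. Here's a sketch, assuming the fields shown above:

interface ApiResult<T> {
  data: T | null;
  error: string; // empty string when the call succeeded
  nextPageToken?: string;
  prevPageToken?: string;
}

// Translate any raw Response into the standard envelope so UI code
// never has to reason about HTTP status codes directly.
async function toResult<T>(res: Response): Promise<ApiResult<T>> {
  if (!res.ok) {
    return { data: null, error: `Request failed (${res.status})` };
  }
  return (await res.json()) as ApiResult<T>;
}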

No Eclipsing

The final point to talk about regarding an API client in the context of a NoGQL app is to ensure that the abstraction in no way hides any of the Fetch API's native features. For example, request cancellation via AbortController is a feature that few readers have probably ever used, but the Fetch API fully supports it. Likewise, any API client abstraction built on top of fetch should expose this and all other native functionality. Remember, abstractions should focus on improving without necessarily removing.
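As a sketch of that non-eclipsing principle (the wrapper is hypothetical), a thin client can simply forward fetch's RequestInit so callers keep access to signals, headers, and everything else:

async function getJson<T>(url: string, init?: RequestInit): Promise<T> {
  const res = await fetch(url, init); // init passes through untouched: signal, headers, etc.
  return (await res.json()) as T;
}

// Usage: cancel an in-flight request, e.g. when a component unmounts.
const controller = new AbortController();
getJson<string[]>('/api/users', { signal: controller.signal })
  .catch(() => { /* request was aborted */ });
controller.abort();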

Should I write my own abstraction?

That is a question only you can answer. Keep in mind that in this day and age of awesome open-source-ness, reinventing the wheel should be avoided unless there's an awfully good reason to do so. As such, I'd do some research, as there are plenty of API clients out there to choose from. Just make sure they tick all the boxes.

And finally, this is where I insert my shameless plug and reveal the transparent motive behind this article. I recently wrote and released spa-tools, a suite of open source tools aimed at helping engineers with developing their single-page applications. One of the tools in that suite is indeed an API Client. I'd welcome you to kick the tires and see if it's a good fit for your NoGQL app.

Cheers and until next time,

Ryan
