Scaling Frontend Systems: Rethinking Our Test Strategy with MSW

Rohan Bagchi

“Scaling Frontend Systems” is a series of real-world engineering stories from the trenches of frontend development - focused on architectural migrations, testing overhauls, developer experience improvements, and lessons from operating frontend platforms at scale.

In March 2024, I moved continents again to rejoin my previous organisation. While most of it was familiar, the domain the new team operated in was not.

The internal machinery behind our e-commerce deliveries stays all but hidden behind a fancy order status page. Clearly, in the words of Optimus Prime, "there's more to them than meets the eye".

The project I was to work on clearly needed some tending to, mostly because we were about to start a long-running epic to build out some key aspects of warehouse operations. So, before a whole bunch of code got added, improvements had to be planned out. The part that needed the most tending to, in my opinion, was the linting and formatting story, but I chose to begin with testing. The reason: formatting would bring large changes into the project for the first time, and having a robust test setup would give us the confidence to merge those without weeks of testing.

I had briefly written about my thoughts on how UI code must be tested here: https://rohanbagchi.hashnode.dev/testing-react-components

Here is the present scenario:

  1. It is a React and TypeScript application that uses antd and Redux Toolkit underneath

  2. Data fetching is done with useEffect and axios

  3. Built using CRACO (which is a wrapper on top of CRA)

  4. Jest with React Testing Library is being used with a lot of mocks (including axios and Redux itself)

As we can see, this is a relatively ok setup with some gaps.
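
For context, a typical screen fetches its data roughly like this. This is a simplified, illustrative sketch; the endpoint and response shape mirror the test examples later in this post, not the real warehouse APIs:

import { useEffect, useState } from 'react';
import axios from 'axios';

// Illustrative response shape; real screens deal with warehouse data.
type ApiResponse = { foo: string };

export const AppComponent = () => {
  const [data, setData] = useState<ApiResponse | null>(null);

  useEffect(() => {
    // axios issues a real HTTP request; the component knows nothing about tests.
    axios.get<ApiResponse>('/api-path').then((response) => setData(response.data));
  }, []);

  if (!data) {
    return <span>Loading...</span>;
  }

  return <span>{data.foo}</span>;
};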

First things first: we must ensure our test target is as close to the real world as possible, and this means no more mocking axios and Redux. To accomplish this, we need a predictable way for our frontend code to make network calls. Enter Mock Service Worker (MSW).

I added a server.ts file with the following content:

import type { RequestHandler } from 'msw';
import { setupServer } from 'msw/node';

const server = setupServer();

// Register per-test handlers; accepts a single handler or an array of handlers.
export const setupTestServer = (handlers: RequestHandler[] | RequestHandler) => {
  const handlersArray = Array.isArray(handlers) ? handlers : [handlers];
  server.use(...handlersArray);
};

let isServerListening = false;

// Start intercepting network requests once per test file.
beforeAll(() => {
  if (!isServerListening) {
    server.listen({ onUnhandledRequest: 'warn' });
    isServerListening = true;
  }
});

// Drop handlers registered by individual tests so tests stay isolated.
afterEach(() => {
  server.resetHandlers();
});

// Stop intercepting requests after the file's tests complete.
afterAll(() => {
  server.close();
  isServerListening = false;
});

export default server;
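
One wiring detail worth noting: the beforeAll / afterEach / afterAll hooks above only run in files that actually load this module, which happens whenever a test imports setupTestServer. If you prefer to load it globally, one option (an assumption about the Jest setup, since CRA/CRACO picks up src/setupTests.ts via Jest's setupFilesAfterEnv) looks like this:

// src/setupTests.ts (path as used by CRA; adjust to wherever server.ts actually lives)
import '@testing-library/jest-dom';

// Importing the module registers the MSW lifecycle hooks for every test file.
import './server';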

Now, in every new test file, we define the API calls the page needs to make immediately after the first describe, like this:

beforeEach(() => {
  setupTestServer([
    rest.get('/api-path', (req, res, ctx) => {
      return res(ctx.json({ foo: "bar" }));
    })
  ]);
});

This enables every test to consume the standard response and assert on the rendered text content. Tests that need something specific can define their own handlers like this (res.once applies the override to a single matching request only):

test('some custom test', async () => {
  setupTestServer([
    rest.get('/api-path', (req, res, ctx) => {
      return res.once(
        ctx.json({ foo: "bazz" })
      );
    })
  ]);

  renderComponent();

  // other assertion logic
});

You may have noticed the renderComponent function being called above. This is another convention we brought in: each test file renders its component under test through a single, central helper.

Component rendering also needed to be simplified and standardised, so I added a custom renderWithProviders function based on the recommendations in the Redux docs: https://redux.js.org/usage/writing-tests#setting-up-a-reusable-test-render-function
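
For reference, here is a condensed sketch of what such a helper can look like, adapted from the Redux docs linked above. rootReducer, AppStore, RootState, and the import paths are stand-ins for the app's own store module:

import React, { PropsWithChildren } from 'react';
import { render } from '@testing-library/react';
import type { RenderOptions } from '@testing-library/react';
import { configureStore } from '@reduxjs/toolkit';
import type { PreloadedState } from '@reduxjs/toolkit';
import { Provider } from 'react-redux';

// Stand-ins for the app's own store definitions.
import { rootReducer } from './store';
import type { AppStore, RootState } from './store';

interface ExtendedRenderOptions extends Omit<RenderOptions, 'queries'> {
  preloadedState?: PreloadedState<RootState>;
  store?: AppStore;
}

export function renderWithProviders(
  ui: React.ReactElement,
  {
    preloadedState = {},
    // A fresh store per test keeps state from leaking between tests.
    store = configureStore({ reducer: rootReducer, preloadedState }),
    ...renderOptions
  }: ExtendedRenderOptions = {}
) {
  function Wrapper({ children }: PropsWithChildren<{}>): JSX.Element {
    return <Provider store={store}>{children}</Provider>;
  }

  // Return the store so individual tests can dispatch actions or inspect state.
  return { store, ...render(ui, { wrapper: Wrapper, ...renderOptions }) };
}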

We will use this common render function heavily later on as we add i18n or support for rendering components without a router.

Putting it all together, here is how it looks:

import { rest } from 'msw';
import { screen } from '@testing-library/react';

// setupTestServer, renderWithProviders, and AppComponent come from the
// project's own modules (import paths omitted here).
describe('Some Integration Test', () => {
  beforeEach(() => {
    setupTestServer([
      rest.get('/api-path', (req, res, ctx) => {
        return res(ctx.json({ foo: "bar" }));
      })
    ]);
  });

  const renderComponent = () => {
    renderWithProviders(
      <AppComponent />
    );
  };

  test('some test', async () => {
    renderComponent();

    expect(await screen.findByText('bar')).toBeInTheDocument();
  });
});

For large enterprise applications where the backend is a cluster of microservices, we have effectively mocked it all behind a service worker, ensuring our tests closely resemble real user interactions.

Now, this strategy will not catch cases where the backend API contract has changed without the frontend team being notified, or genuine backend bugs. But that was never the point of it in the first place.

The idea is to reduce the chances of a bug slipping into production, and to that end we have similar tests on both the backend and the frontend. For the scenarios our tests do not actively prevent, we have tooling in place to ensure observability (and fast rollback if need be).


If you’ve faced similar challenges or have insights to share, I’d love to hear from you. Your experiences can enrich this discussion and help others navigating similar paths.
