Super Charge Your Stacks Clarity Web3 App

mike cohen
7 min read

Single-page web apps are great, but when used in a blockchain context with libraries like Stacks.js they can start to swell and lag as the number of routes and contract interactions increases. Initial page-render times degrade rapidly, and users are left wondering what they clicked on while staring at a blank screen or loading spinner.

Clarity, being a non-Turing-complete smart contract language, presents additional challenges for web3 developers: contracts cannot provide internal functions that loop over data structures. To access the data stored in a contract, developers have a couple of approaches - looping externally from client code over the read functions the contract provides, and/or watching event streams - the events emitted by contract functions and by built-ins like stx-transfer? triggered by public transactions.
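Because the contract cannot iterate, the client does the looping. Here is a minimal sketch of the first approach; readEntry is a hypothetical callback that would wrap a Stacks.js read-only call and resolve to null once the contract getter returns none:

```typescript
// Hypothetical reader: in a real app this would wrap a Stacks.js
// read-only contract call for the entry at the given index.
type ReadEntry<T> = (index: number) => Promise<T | null>;

// Walk a contract map externally: call the read-only getter for
// index 0, 1, 2, ... until it reports no entry.
export async function readAll<T>(readEntry: ReadEntry<T>, max = 1000): Promise<T[]> {
  const entries: T[] = [];
  for (let i = 0; i < max; i++) {
    const entry = await readEntry(i);
    if (entry === null) break; // the Clarity getter returned `none`
    entries.push(entry);
  }
  return entries;
}
```

The max guard matters in practice: each iteration is a network round trip, which is exactly the cost the indexing approach later in this post is designed to avoid.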

In this post we address these challenges at a high level and suggest using server-side rendering, server-side caching and API-based contract indexing to transform a bloated single-page app into a lightning-fast platform - while overcoming some of the challenges inherent in working with blockchain technology. Among the benefits is avoiding rate limiting from downstream APIs we depend on, such as the Hiro Stacks API.

We'll use our BigMarket prediction market app as the patient - it's currently very slow to load!

Examples use the SvelteKit framework - a fantastic framework that generates pure HTML and JavaScript - but the discussion generalises in a straightforward manner to other front-end frameworks.

Part 1: Server Side Rendering

Options for where and how a web application is rendered include:

  • csr - client side rendering

  • ssr - server side rendering

  • pre-rendering

Our BigMarket UI is currently fully client-side rendered. To switch to server-side rendering we add a special page under each route:

// src/routes/+page.server.ts
import type { PageServerLoad } from './$types';
import { fetchMarkets, getLeaderBoard } from '$lib/predictions/predictions';

export const load: PageServerLoad = async () => {
    // Fetch both datasets on the server, in parallel, before the page renders
    const [markets, leaderBoard] = await Promise.all([fetchMarkets(), getLeaderBoard()]);
    return { markets, leaderBoard };
};

which becomes available in our client-side page component +page.svelte as:

export let data: {
    markets: Array<PredictionMarketCreateEvent>;
    leaderBoard: LeaderBoard;
};
const { markets, leaderBoard } = data;

There is a lot hidden behind the API call const markets = await fetchMarkets(); which we'll address soon.

Before wrapping up this section, we also need to consider the deployment target environment and choose an appropriate SvelteKit adapter. Up to now we've used @sveltejs/adapter-static, which assumes a fully static bundle - the source of much of the bloat and lag.

To facilitate server-side rendering we need something a bit smarter on the server side of the app. SvelteKit ships several adapters out of the box - Cloudflare, Vercel, Netlify - but you can also run the app directly as a Node server by installing the Node adapter:

npm install -D @sveltejs/adapter-node

and updating svelte.config.js:

import adapter from '@sveltejs/adapter-node';
...
    kit: {
        adapter: adapter(),
...

Part 2: Server Side Caching

Building on the newly SSR'd application, we will add a simple in-memory caching layer. This lets us cope with high traffic without pushing that load downstream onto our own and third-party APIs like the Hiro API - avoiding rate limits as a bonus.

A simple time-to-live (TTL) cache is effective and easy to write by hand. If you also want memory bounded by a Least Recently Used (LRU) eviction policy, quick-lru is a handy drop-in:

npm install quick-lru

First we need the cache module itself:

// src/lib/server/cache.ts
type CacheEntry<T> = {
    data: T;
    expires: number;
};

const cache = new Map<string, CacheEntry<any>>();

export function getCached<T>(key: string): T | null {
    const entry = cache.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expires) {
        cache.delete(key);
        return null;
    }
    return entry.data;
}

export function setCached<T>(key: string, data: T, ttlMs: number) {
    cache.set(key, { data, expires: Date.now() + ttlMs });
}
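The TTL map above can grow without bound under heavy traffic. LRU eviction - what quick-lru packages up for you - caps memory by discarding the least recently used keys first. As an illustration of the idea (not the library's API), a minimal LRU cap can be sketched using a Map's insertion order:

```typescript
// Sketch of LRU eviction on top of a Map. A Map preserves insertion
// order, so re-inserting a key on read keeps recently used keys at the
// end and lets us evict from the front when the cache is full.
export class LruCache<T> {
  private store = new Map<string, T>();
  constructor(private maxSize: number) {}

  get(key: string): T | undefined {
    const value = this.store.get(key);
    if (value === undefined) return undefined;
    // Refresh recency: move the key to the end of the Map
    this.store.delete(key);
    this.store.set(key, value);
    return value;
  }

  set(key: string, value: T) {
    if (this.store.has(key)) this.store.delete(key);
    this.store.set(key, value);
    if (this.store.size > this.maxSize) {
      // Evict the least recently used entry (the first key in the Map)
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
  }
}
```

In production you would combine this with the TTL check below, or just let quick-lru handle both concerns.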

Next we adapt our SSR page code to try the cache before hitting the API:

// src/routes/+page.server.ts
import type { PageServerLoad } from './$types';
import { getCached, setCached } from '$lib/server/cache';
import { fetchMarkets, getLeaderBoard } from '$lib/predictions/predictions';

export const load: PageServerLoad = async () => {
    const key = 'home-page';
    const cached = getCached(key);
    if (cached) {
        //console.log('CACHE HIT: fetching markets: ', cached);
        return cached;
    }

    const markets = await fetchMarkets();
    const leaderBoard = await getLeaderBoard();
    //console.log('CACHE MISS: fetching markets: ', markets);
    //console.log('CACHE MISS: fetching leaderboard: ', leaderBoard);

    const result = { markets, leaderBoard };
    setCached(key, result, 1000 * 60 * 3); // 3 minutes
    return result;
};

One thing we really like (having used Java caching and ORM layers in the past) is the simplicity of managing the caching strategy and configuration per route. For example, you can tweak the cache timeout for each route simply by passing a custom TTL to setCached.
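As a sketch of that per-route pattern, a hypothetical withCache helper (not part of the code above) collapses the get/check/set dance into a single call, with the TTL chosen at each call site:

```typescript
// Minimal cache-or-load helper: each route's load function picks its
// own key and TTL, and the loader only runs on a cache miss.
type CacheEntry<T> = { data: T; expires: number };
const cache = new Map<string, CacheEntry<unknown>>();

export async function withCache<T>(
  key: string,
  ttlMs: number,
  loader: () => Promise<T>
): Promise<T> {
  const entry = cache.get(key);
  if (entry && Date.now() <= entry.expires) return entry.data as T;
  const data = await loader();
  cache.set(key, { data, expires: Date.now() + ttlMs });
  return data;
}

// e.g. a fast-moving leaderboard route might use a 1-minute TTL while
// the home page keeps 3 minutes:
//   return withCache('home-page', 3 * 60_000, () => fetchMarkets());
```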

Part 3: APIs and Indexing

In the last section you'll have noticed we hid some work behind a simple function call, await fetchMarkets(). In this section we'll expand on this and show how to run an indexer.

At first it sounds like a lot of bother - particularly when Hiro provide their Chainhooks framework - but there is nothing quite like self-reliance, and developing the skills to index data from Clarity contracts lets us peek behind the wizard's curtain.

The two types of data to be indexed from a Clarity contract are:

  • events /extended/v1/contract/{contract_id}/events?limit={limit}&offset={offset}

  • data structures, typically read via define-read-only functions
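For the second kind, read-only functions can be called over the Stacks node RPC at /v2/contracts/call-read. The sketch below only builds the request; the contract and function names are placeholders, and in practice the hex-encoded Clarity arguments would be produced with Stacks.js helpers such as cvToHex(uintCV(...)):

```typescript
// Sketch only: constructing a call to a define-read-only function via
// the Stacks node RPC. All identifiers here are placeholders.
export interface ReadOnlyCall {
  url: string;
  init: { method: 'POST'; headers: Record<string, string>; body: string };
}

export function buildReadOnlyCall(
  baseUrl: string,
  contractAddress: string,
  contractName: string,
  functionName: string,
  sender: string,
  hexArgs: string[]
): ReadOnlyCall {
  return {
    url: `${baseUrl}/v2/contracts/call-read/${contractAddress}/${contractName}/${functionName}`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // The node expects { sender, arguments } with hex-encoded Clarity values
      body: JSON.stringify({ sender, arguments: hexArgs })
    }
  };
}
```

The response carries another hex-encoded Clarity value, so a decoding step is needed on the way back - one more reason to do this once in an indexer rather than on every page view.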

BigMarket's implementation reacts to events in near real time and updates the UI via HTTPS polling (we'll show how to fold WebSocket push technology into this in a later post). For example, tracing the call we looked at earlier:

const markets = await fetchMarkets();

This resolves to a BigMarket API call:

const path = `https://api.testnet.bigmarket.ai/bigmarket-api/markets`;
const response = await fetch(path);
if (response.status === 404) return [];
if (!response.ok) throw new Error(`markets request failed: ${response.status}`);
return await response.json();

which queries a database instance for the data.

The reason we don't query the chain directly on every request is to save resources. It is far more efficient to index the data from the event stream once and serve it from our own database.

To do this we use a cron scheduler on the backend to trigger a scan for new events. Note that we don't need to rescan all events (unless we are rebuilding the index from scratch) - we use the offset parameter on the API call to begin scanning from our most recent indexed event.

The following job runs once every minute and uses that offset to fetch only the new events:

export const initScanDaoEventsJob = cron.schedule('* * * * *', async (fireDate) => {
    try {
        ...
        for (const extension of distinctDaoContracts) {
            ...
            await readDaoExtensionEvents(false, extension);
        }
    } catch (err: any) {
        console.log('initScanDaoEventsJob: ', err);
    }
});
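The paging inside readDaoExtensionEvents can be sketched as follows. The fetchPage callback is injected so the paging logic stays testable; in production it would hit the /extended/v1/contract/{contract_id}/events endpoint listed earlier:

```typescript
// Sketch of the incremental scan: resume from the last indexed offset
// and page forward until a short page signals there are no more events.
type EventPage = { results: unknown[] };
type FetchPage = (offset: number, limit: number) => Promise<EventPage>;

export async function scanNewEvents(
  lastOffset: number,
  fetchPage: FetchPage,
  limit = 50
): Promise<{ events: unknown[]; nextOffset: number }> {
  const events: unknown[] = [];
  let offset = lastOffset;
  while (true) {
    const page = await fetchPage(offset, limit);
    events.push(...page.results);
    offset += page.results.length;
    if (page.results.length < limit) break; // caught up with the chain
  }
  return { events, nextOffset: offset };
}
```

Persisting nextOffset after each run is what makes the minutely cron job cheap: each scan fetches only events the indexer has never seen.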

Of course, this approach can be improved with async messaging services like RabbitMQ, but for now we are keeping the infrastructure as bare-bones as possible, following an agile methodology.

This infrastructure ensures our UI is never more than a minute behind the latest confirmed transactions.

Summary

In this post, we tackled the core performance and scalability challenges of building responsive, data-rich Clarity smart contract UIs in a blockchain context. Using our BigMarket prediction market app as a testbed, we migrated from a fully client-side rendered architecture to a faster, server-side rendered SvelteKit application — optimised for real-world Web3 demands.

We implemented:

  • Server-side rendering (SSR) to reduce initial load times and improve SEO

  • In-memory caching to reduce load on third-party APIs (like Hiro's) and improve page responsiveness under traffic

  • Custom data indexing to efficiently track Clarity contract state via event streams, freeing us from the limitations of public RPCs and read-only functions

By moving critical data fetches and rendering logic to the server and indexing Clarity events locally, we dramatically improved performance, reduced loading jank, and built a future-proof architecture that supports real-time dApp interactions without over-relying on third-party services.


🚀 Conclusion

Single-page apps are fast to prototype, but without careful attention to performance and data strategy, they don’t scale well — especially in blockchain environments where smart contracts can’t iterate over data. By combining SSR, route-aware caching, and custom indexing, we’ve shown how to supercharge your Clarity Web3 frontend with a fast, reliable user experience.

This architecture keeps us flexible and resilient, while staying close to the metal of the Stacks blockchain. You don’t need to wait on someone else’s indexer or cache to deliver performance — and that’s the power of owning your own stack.
