My Next.js Handbook

Table of contents
- Understanding Next.js Rendering Strategies — SSR, CSR, SSG, and ISR
- What 'use client' Really Does in React or Next.js
- Next.js 15 App Router — Architecture and Sequence Flow
- How to Client-Side-Render a Component in Next.js
- You must use middleware like this in Next.js
- Next.js Server Actions Lessons Learned
- When To Use React Query With Next.js Server Components
- When Should I Use Server Action (Next.js 14)
- 5 Next.js Image Pitfalls That Hurt Performance
- How I Made a Next.js App Load 10x Faster
- Set proper HTTP Headers for Caching
- Embrace Server Components to minimize client-side JavaScript
- Stream and Selectively Hydrate for Instant Interaction
- Optimize Images For Faster Loads (Use Next/Image)
- Optimize Fonts and Prevent Layout Shifts (Use Next/Font)
- Analyze and Trim Your Bundles (Bundle Analysis and Tree Shaking)
- Code-Split By Dynamic Imports (Load Only What You Need)
- Leverage Edge Functions and CDN For Low TTFB
- Pre-Render and Cache as Much as Possible (SSR, ISR, and Partial Prerendering)
- Monitor Performance Continuously with the Right Tools
- Build a Performance-First Mindset (Apply to all types of apps)
- Conclusion
- References

Next.js is a React-based framework that allows you to build server-side-rendered applications with ease. With Next.js, you can create dynamic and fast-loading web pages that are optimized for search engines and social media platforms. Some of the key benefits of using Next.js include:
Automatic code splitting for faster page loads.
Server-side rendering for improved SEO and performance.
Built-in support for CSS modules and styled components.
Easy deployment with Vercel, the platform built specifically for Next.js.
Today, I'm going to explain some advanced concepts of Next.js that most developers don't know. You can use them to optimize your app and improve the developer experience.
Understanding Next.js Rendering Strategies — SSR, CSR, SSG, and ISR
One of the advantages of using Next.js is its versatility in how pages are rendered and delivered to the user's browser. If you understand how these strategies work, you will have an easier time building a faster and more efficient site.
Next.js has four rendering strategies:
Server-Side Rendering (SSR)
Client-Side Rendering (CSR)
Static-Site Generation (SSG)
Incremental Static Regeneration (ISR)
I will explain each strategy, including the process behind the scenes, their use cases, and the pros and cons.
Server-side Rendering
How it works:
The JS bundle is deployed to the server.
When the user (browser) makes a request, the server runs the `getServerSideProps()` function (Next 12) or `fetch('https://...', { cache: 'no-store' })` (Next 13).
After the data is fetched, the HTML is built on the server (including the data from the API).
The server sends the HTML to the user's browser.
Use cases:
If your site's data is updated frequently
At the same time, SEO is an essential factor as well
Pros
No loading
Real-time data
Good for SEO
It can be used for personalized content
Cons
The user needs to wait for the HTML to be built on the server side
Too much burden on the server => on every user's request, the page has to be rebuilt on the server.
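As a minimal sketch of both APIs mentioned above (the `https://example.com/api/posts` endpoint and the post shape are made-up placeholders, and the two snippets belong to different router generations, not one file):

```tsx
// Next 12 (Pages Router): getServerSideProps runs on every request.
export async function getServerSideProps() {
  const res = await fetch("https://example.com/api/posts"); // hypothetical endpoint
  return { props: { posts: await res.json() } };
}

// Next 13+ (App Router): an async Server Component.
// { cache: 'no-store' } opts this fetch out of caching, so it re-runs per request.
export default async function PostsPage() {
  const res = await fetch("https://example.com/api/posts", { cache: "no-store" });
  const posts: { id: number; title: string }[] = await res.json();
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```

In both cases the HTML the user receives already contains the fetched data, which is what makes this strategy SEO-friendly.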
Client-side Rendering
How it works
Build process => the HTML (a shell without data) is sent to the server
When the user requests, the server sends the HTML file, and then the client (browser) requests the data from the API server.
At the time the client is requesting the data from the API server, the browser will display the loading state (in most cases)
After the data is fetched from the API server, the loading state will be turned off, and the screen will be updated with the data from the API
Use case:
If your site's data is updated frequently.
SEO is not an essential factor.
Pros
Real-time data
It can be used to personalize content
The burden is not too big for the server
Cons
There will be a loading state
Not good for SEO => since the HTML built on the server does not include the data from the API
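A sketch of this flow as a Client Component in the App Router, assuming a made-up `https://example.com/api/posts` endpoint: the component ships to the browser, shows a loading state, and fills in once the fetch resolves.

```tsx
'use client';

import { useEffect, useState } from "react";

type Post = { id: number; title: string };

export default function Posts() {
  const [posts, setPosts] = useState<Post[] | null>(null);

  useEffect(() => {
    // Runs in the browser after the initial render, so the user sees
    // the loading state until the API responds.
    fetch("https://example.com/api/posts") // hypothetical endpoint
      .then((res) => res.json())
      .then(setPosts);
  }, []);

  if (!posts) return <p>Loading...</p>;
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```

The server-rendered HTML for this component contains only the loading state, which is exactly why search engines may not see the data.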
Static-Site Generation (SSG)
How it works:
The build and data-fetching processes happen at the same time => the `getStaticProps()` function (Next 12) or `fetch()` (the default behavior in Next 13) runs at build time.
After the HTML + JSON is built (the data from the API is included), it is sent to the server.
When the user (browser) makes a request, the server sends the HTML + JSON file, so the user doesn't need to wait (no loading).
Use case:
If your site's data is definite (fetch once, and that's it)
Example: a site for the Al-Qur'an, logs/history data, or old archived documents, which won't be changed no matter what
Pros
Overall, the fastest method
No loading
The burden is not too big for the server
Good for SEO
Cons
There is no trigger to update the data from the API (fetch once, and that's it) unless the site is redeployed
It can't be used for personalized content => this method has no way to update the built HTML file on the server (so whenever the user requests, it stays the same)
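A build-time sketch (endpoint and data shape are hypothetical placeholders; the two snippets show the Next 12 and Next 13+ flavors, which live in different routers). Note there is no revalidation, so the data is frozen until the next deploy.

```tsx
// Next 12 (Pages Router): getStaticProps runs once, at build time.
export async function getStaticProps() {
  const res = await fetch("https://example.com/api/archive"); // hypothetical endpoint
  return { props: { entries: await res.json() } };
}

// Next 13+ (App Router): a plain fetch with no cache options is static by
// default, so it also runs at build time and the result is baked into the page.
export default async function ArchivePage() {
  const res = await fetch("https://example.com/api/archive");
  const entries: { id: number; text: string }[] = await res.json();
  return (
    <ol>
      {entries.map((entry) => (
        <li key={entry.id}>{entry.text}</li>
      ))}
    </ol>
  );
}
```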
Incremental Static Regeneration (ISR)
How it works:
Pretty much the same as SSG, but with the capability to update the data
If you set the revalidate option in the `getStaticProps()` function (Next 12) or `fetch('https://.../data', { next: { revalidate: 10 } })` (Next 13), the server will revalidate the data from the API and check whether there is any change.
If there is any update from the API (DB), the built HTML file will be updated (the new one overrides the current one)
But it will only be updated after the time set in the revalidate prop has passed
Example: if you set `revalidate: 10`, it will do the revalidation and update the HTML after 10 seconds have passed.
Use case:
- Best practice for most static sites that donât need a real-time update
Pros
- Basically, it's SSG (fast + good for SEO!) but with an additional tweak to be able to update the site's data
Cons
It can't be used for personalized content.
It can't be used if you have a real-time feature on your site.
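The timing rule behind `revalidate` can be sketched as a small predicate (a simplification of what the framework actually does, not its real internals): a cached page is served as-is until the revalidate window has passed, after which the next request triggers a regeneration.

```ts
// Simplified sketch of the ISR freshness check.
function isStale(builtAtMs: number, nowMs: number, revalidateSeconds: number): boolean {
  return (nowMs - builtAtMs) / 1000 >= revalidateSeconds;
}

// With revalidate: 10, a page built at t=0 is still fresh at t=5s...
isStale(0, 5_000, 10); // false
// ...and due for regeneration from t=10s onward.
isStale(0, 10_000, 10); // true
```

Until the regeneration finishes, users keep receiving the previous static copy, which is why ISR cannot serve real-time data.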
What 'use client' Really Does in React or Next.js
React's 'use client' directive might look like a mere annotation at the top of your file, but it represents a profound shift in how we structure applications. Ever since Next.js 13 introduced the new App Router and React Server Components, developers have been grappling with this two-word directive. On the surface, 'use client' marks a component to run in the browser. Under the hood, however, it opens a gateway between the server and client environments in a way that's both elegant and technically sophisticated. In fact, React core team member Dan Abramov argues that the invention of 'use client' (and its counterpart, 'use server') is as fundamental as the introduction of async/await or even structured programming itself. That is a bold claim for a little string at the top of a file. So, what does 'use client' really do? And why is it so important for the future of React?
From Server to Client: Bridging Two Worlds
To understand the meaning of 'use client', it helps to consider the context in which it emerged. In the Next.js 13 App Router, components are server-first by default. This means if you write a component without any directives, Next.js will render it on the server (producing static HTML) and send that HTML to the browser without any client-side JavaScript for that component. This is great for performance: your pages can load with minimal JS. But it poses challenges when you do need interactivity or state. How do we tell React that a certain component (say, a counter button or a dynamic form) needs to be interactive and run in the browser? That's exactly what 'use client' is for.
When you add 'use client' to the top of a file (above any imports), you are declaring that this module and everything it imports should be treated as a Client Component, meaning it will execute on the client side and can use interactive features like state, effects, and browser APIs. In essence, 'use client' draws a boundary line in your app's module graph: on one side of that line, components run on the server; on the other side, components run on the client. This directive flips the historical default. In traditional React apps (and in Next.js's old Pages Router), every component was a client-side component by default, and you opted into server rendering. Now, with Server Components, we default to running on the server and explicitly opt into the client side for interactive parts.
Crucially, 'use client' is more than a marker for "put this code in the browser". It serves as a bridge between two environments: a way for the server to include client-run code in the app's output in a controlled, declarative manner. Dan Abramov describes 'use client' as essentially a typed <script> tag. Just as a script tag in HTML tells the browser to execute some bundled JavaScript, 'use client' tells React's tooling that "this module is UI code that the browser needs". The server can import that module and hand off rendering to it, much like opening a door from the server world into the client world. In other words, 'use client' allows the server to reach into the client bundle and say, "I need this component to come alive in the browser". It's a formal, first-class way to intertwine server-rendered content with client-side interactivity.
How does 'use client' work under the hood?
The technical mechanics of 'use client' are fascinating. When you mark a module with 'use client', you are signaling to the build system and to React that this file (and its dependencies) belongs in the client bundle. If a Server Component tries to import something from that file, the server won't import the component's implementation directly. Instead, it imports a stub or reference to it. Think of it like a placeholder or a token that stands in for the real component. The server-rendered output will include a pointer to that client component rather than the component's HTML, indicating "there is a client component here, which will be rendered on the client side". React's server component payload (often a special JSON behind the scenes) might include an identifier for the component, such as a module path and export name. For example, the server output could contain something like:
```json
{
  "type": "/src/frontend.js#LikeButton",
  "props": { "postId": 42, "likeCount": 8, "isLiked": true }
}
```
This is not literal HTML, but a description. It says there should be a `<LikeButton>` there with those props, and it references the component by module (`/src/frontend.js`) and name (`LikeButton`). The React runtime uses this to generate actual script tags for the browser. When the response reaches the client, the framework knows it needs to load the `/src/frontend.js` module (the file where `LikeButton` is defined) as a separate JavaScript chunk. It injects a `<script src="frontend.js"></script>` for that file, and once loaded, it hydrates the component by calling `LikeButton({...props})` on the client. In essence, the 'use client' directive allows the server to embed a reference to a client-side component in its output, and that reference is resolved into a real interactive UI in the browser.
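One way to picture what the client runtime does with such a reference (all names here are hypothetical illustrations, not React's real internals): the "type" string is looked up in a module map produced by the bundler, the matching chunk is loaded, and the component is invoked with the saved props.

```ts
// A reference as it might appear in the serialized server output.
type RscReference = { type: string; props: Record<string, unknown> };

// Stand-in for the bundler's module map (module path + export name -> component).
const clientModules: Record<string, (props: any) => string> = {
  "/src/frontend.js#LikeButton": ({ likeCount, isLiked }) =>
    `<button>${isLiked ? "Unlike" : "Like"} (${likeCount})</button>`,
};

// Resolve the reference to real UI, as hydration conceptually does.
function hydrateReference(ref: RscReference): string {
  const component = clientModules[ref.type];
  if (!component) throw new Error(`Missing client chunk for ${ref.type}`);
  return component(ref.props);
}

hydrateReference({
  type: "/src/frontend.js#LikeButton",
  props: { postId: 42, likeCount: 8, isLiked: true },
}); // -> "<button>Unlike (8)</button>"
```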
One important nuance is that marking a component with the 'use client' directive does not mean it won't be rendered on the server at all. In fact, Next.js will still pre-render the initial HTML for Client Components in many cases (just like it did in the old Pages architecture) and then hydrate them on the client. The 'use client' directive simply ensures that the component's JavaScript is sent to the browser and React knows to hydrate it. That means you don't lose the SEO and performance benefits of server-side rendering by using a Client Component; you are just opting into sending additional JS for interactivity. A common rookie mistake is thinking that adding 'use client' makes your entire page purely client-side rendered. In reality, a Client Component in Next.js 13+ is usually still rendered to HTML on the server first, then made interactive on the client, which is exactly how React pages have traditionally worked. The big difference is that now you have a choice: parts of the page with no 'use client' stay purely server-rendered (no hydration needed at all), and parts with 'use client' get the two-step treatment of SSR + hydration.
Because of how the boundaries work, you typically only need to put 'use client' at the top of the entry points for the interactive islands of your application. Once you mark a component as a Client Component, all of its children and imports automatically become part of the client bundle as well. You do not need 'use client' in every file that contains a hook or browser API call. For example, if you create a `Counter.tsx` component with 'use client' (so it can use `useState` and handle clicks) and then import it into a parent server-rendered page, that Counter and anything it imports will be bundled for the client. If that Counter itself renders other components (passed in as children or imported within it), those can actually be Server Components if they don't need interactivity. React will seamlessly render those on the server and slot their HTML into the client component's output before hydration. This flexibility can be mind-bending. You can have a Server Component inside a Client Component, which is inside a Server Component, and so on. The framework's job is to sort out which parts run where. As developers, our job is just to label the boundaries correctly. And thanks to 'use client', those boundaries are explicit and easy to reason about.
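A Counter along those lines might look like this (the file name and markup are illustrative):

```tsx
// Counter.tsx: the 'use client' directive puts this module (and anything
// it imports) into the client bundle, so useState and onClick work here.
'use client';

import { useState } from "react";
import type { ReactNode } from "react";

export default function Counter({ children }: { children?: ReactNode }) {
  const [count, setCount] = useState(0);
  return (
    <div>
      <button onClick={() => setCount((c) => c + 1)}>Clicked {count} times</button>
      {/* children can be Server Component output passed through the boundary */}
      {children}
    </div>
  );
}
```

A server-rendered `page.tsx` can then simply `import Counter from "./Counter"` and render `<Counter />` without any directive of its own; only Counter crosses the boundary.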
Why 'use client' Matters (More Than You Might Think)
The introduction of 'use client' has significant implications for how we architect React applications, especially in frameworks like Next.js. First and foremost, it enables fine-grained performance optimization. By defaulting everything to server-rendered and then opting specific pieces into client-side hydration, we send far less JavaScript to the browser than a traditional SPA would. A page that might have previously bundled the logic of every component can now ship only the code for the truly interactive parts. This "eat your cake and have it too" approach, full server rendering for most of the UI and rich interactivity where needed, is essentially an implementation of the elusive ideal of progressive hydration, or the so-called "islands architecture". You can think of each 'use client' component as an island of interactivity amid a sea of purely server-rendered HTML. If a part of your UI doesn't need interactivity, simply leave out the directive and it remains an island of static content (no hydration overhead). This leads to better loading performance and less JavaScript bloat on the client. Next.js 13+'s architecture actively encourages this. It makes you consciously add 'use client' only where necessary, nudging you into keeping most of your UI logic on the server by default.
Second, 'use client' improves the developer experience and code maintainability in a full-stack React app. In the past, to make a client-side interactive widget that also fetched or updated data on the server, you had to write a lot of boilerplate: define an API route or endpoint, call `fetch` from the client, handle state for loading and errors, and so on. Now, consider the new world with Server and Client Components. The server can render a component and pass it data directly as props, and the client component can, in turn, directly call back to server functions (using 'use server', which goes hand-in-hand with 'use client'). In Dan Abramov's Like button example, instead of manually writing API endpoints for "like" and "unlike" and then writing client code to fetch those, you can simply write a server function `likePost` marked with 'use server' and import it into your client component. React will handle turning that into an API call for you. On the flip side, you write a `LikeButton` component with 'use client' and import it into your server-rendered UI; React will handle sending that component's code to the browser and hydrating it. The connection is expressed through the module system (via import/export), not through ad-hoc API contracts. This means your editor and type system can understand the relationship. You can navigate to definitions, get type checking across the boundary, and treat the client-server interaction as a function call rather than a network call. As Abramov puts it, the 'use client' import "expresses a direct connection within the module system" between the part of the program that sends the `<script>` (server) and the part that lives inside that script (client), making it fully visible to tools and type-checkers. In practical terms, this can reduce bugs and make code more discoverable compared to the old way of string-typed API endpoints.
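A sketch of that pairing follows; the file names, the `incrementLikeInDb` helper, and the signatures are illustrative, not Dan Abramov's exact code, and the two sections below are separate files.

```tsx
// --- actions.ts --- 'use server' marks the exports as server functions;
// importing likePost from a client file gives you a typed RPC stub.
'use server';

// Hypothetical data-access stub; a real app would hit a database here.
async function incrementLikeInDb(postId: number): Promise<number> {
  return 1;
}

export async function likePost(postId: number): Promise<{ likeCount: number }> {
  const likeCount = await incrementLikeInDb(postId); // the client never sees this code
  return { likeCount };
}

// --- LikeButton.tsx --- a client component calling the server function directly.
'use client';

import { useState } from "react";
import { likePost } from "./actions";

export function LikeButton({ postId, initialCount }: { postId: number; initialCount: number }) {
  const [count, setCount] = useState(initialCount);
  return (
    <button
      onClick={async () => {
        // Compiled into a network call by the framework; no hand-written API route.
        const { likeCount } = await likePost(postId);
        setCount(likeCount);
      }}
    >
      Like ({count})
    </button>
  );
}
```

Notice that the only contract between the two files is an ordinary typed import, which is exactly the point Abramov makes.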
Using 'use client' also forces a clearer separation of concerns between your purely presentational/server-driven components and your interactive ones. In a large codebase, this can be a healthy discipline. You might designate most of a page (navigation bars, content sections, data displays) as server-rendered and free of client-side logic, and only sprinkle in a few 'use client' components for things like forms, modals, or widgets that truly need it. Those client components can still leverage server-side data by receiving props or calling server actions, but they won't inadvertently drag the entire page's code into the client bundle. Many developers, upon first migrating to Next 13+, felt it was annoying to add 'use client' everywhere they used hooks. But this "annoyance" is intentional. It makes you stop and consider: "Does this code really need to run on the client?". If not, perhaps it could be refactored into a Server Component, leaving just a tiny Client Component for the interactive bit. In time, teams find that this leads to smaller, more purpose-driven client modules and a more robust rendering strategy. It's a new mental model, but one that aligns with the performance needs of modern apps.
One caution: because 'use client' scopes an entire module to the client, you do have to be mindful about what you import inside a client module. Anything you import into a 'use client' file becomes part of the client-side bundle (unless it's a purely type import or something that gets compiled away). This means you wouldn't want to import a Node-only library or a huge server-only module inside a client component. It either won't work (if it relies on Node APIs) or it will bloat your bundle. Next's compiler will usually warn or error if you try to import server-only code into a client module. In short, keep client components focused and lean. Use them for UI and interactivity, not heavy data fetching or processing (those belong on the server side). Fortunately, the system makes this natural: heavy data fetching is easier to do in Server Components, and they can feed the results into Client Components as props. The end result is an app that is modularized by environment: server logic and rendering over here, client logic and interaction over there, both living in the same codebase but clearly delineated.
The Future of 'use client' and the React Ecosystem
It's early days for React Server Components and the 'use client' directive, but the impact is already being felt. As of Next.js 13+ (and the evolving React 18+ ecosystem), we're seeing a rethinking of how UI and backend logic intermingle. The success of these directives could influence other frameworks and the broader web platform in interesting ways. Dan Abramov suggests that the ideas behind 'use client'/'use server' are not limited to React; they are a generic approach to distributed applications, essentially a form of RPC (remote procedure call) built into the module system. Imagine a future where your codebase seamlessly spans multiple runtimes (web browser, server, maybe even mobile or worker contexts), with the boundaries declared in the code and handled by compilers and bundlers. The React team expects these patterns to "survive past React and become common sense" in web development. It's a bold vision: a world where sending code to the client or calling into the server is as straightforward as calling a function, with tools taking care of the messy details of networking and serialization.
In practical terms, the ecosystem is already adapting. Libraries that provide React components are starting to consider how they'll work in a Server Components world. For example, a date picker or charting library might mark its components with 'use client' so that if you use them in a Next 13+ app, the library's code is correctly included on the client side. Tooling is also improving. Since these directives are just string literals, they rely on build tooling to do the right thing. We might see better ESLint rules or even language support to catch mistakes like forgetting to add 'use client' when needed, or conversely, adding it unnecessarily. There's active discussion in the community about how to make the developer experience smoother. Could future React versions infer 'use client' automatically for certain components based on usage of hooks? Possibly, though the React team seems to prefer explicit boundaries for now, as automation might be error-prone. What's more likely is continued guidance and patterns for structuring apps. Over time, using 'use client' may feel as natural as using `useState`, just another part of React's vocabulary.
We should also watch how other frameworks respond. The idea of partial hydration and islands of interactivity isn't unique to React. Frameworks like Astro, Qwik, and Marko have been exploring similar territory, each with their own spin. React's approach with 'use client' and 'use server' is distinctive in that it integrates deeply with JavaScript modules and bundlers, rather than introducing a completely new DSL. This means it could be adopted beyond React if standardized; for instance, a future build tool could allow any JavaScript project to designate certain modules for the client or server environment using similar directives. It's not hard to imagine the concept spreading: the benefits of clarity, performance, and type safety at the boundary are not something any full-stack developer would want to pass up. On the other hand, React's solution is opinionated: it assumes a single unified project that produces both server and client artifacts, which fits frameworks like Next.js perfectly. Not every project will have that shape, so there will continue to be alternatives and variations.
In the short term, we can expect the React community to establish best practices around 'use client'. Already, the recommendation is to use it sparingly and purposefully. The ideal React Server Components app uses 'use client' only on components that truly need it, and sometimes that means writing a small wrapper component just to hold some client state or effect while the rest stays on the server. This granularity might feel like extra work, but it pays off in load performance and gives you a clearer understanding of your app's runtime behavior. There's also an educational aspect: understanding 'use client' inevitably means understanding how the client-server continuum works in a React app, which makes one a better full-stack developer. It forces you to confront where the state lives, where data comes from, and what code runs where. Those who embrace this mindset are likely to build apps that scale better and are easier to debug across environments.
The yin-yang of modern React architecture: many have begun to view their React apps as a yin-yang symbol of server and client, two complementary halves of a single whole. The 'use client' and 'use server' directives are the two gates that let data and code flow between these halves in a controlled way, each gate opening in one direction. 'use server' lets the client safely invoke server-side functions (essentially turning an import into a network call), while 'use client' lets the server include interactive client-side UI (turning an import into a script reference). Together, they allow "seamless composition across the network", meaning you can build features that feel like one cohesive program even though, under the hood, they involve browser code talking to server code. It's a powerful abstraction that preserves practicality rather than hiding it: you still know which parts run where, but you no longer have to hand-stitch the plumbing every time you cross the boundary.
In conclusion, 'use client' is far more than a mere hint to "do this in the browser". It is a cornerstone of React's new architecture, enabling a new level of integration between server and client logic while preserving performance and clarity. Its importance will only grow as more of the React ecosystem adopts Server Components. Yes, it requires learning a new way of thinking about React, one where you occasionally have to pop open a different mental toolbox for client versus server concerns, but the payoff is an application that can be both highly performant and richly interactive. For busy developers working on complex apps, 'use client' offers a way to write code that is simultaneously efficient and expressive, bridging worlds that used to be separate. As we continue to refine these patterns, it's likely that in a few years using 'use client' (and 'use server') will feel as natural as writing an async function. It's a small change with big implications, and it's pointing the way toward a future in which the line between front-end and back-end code is blurred by design, not by accident. In that future, the phrase "full-stack developer" might take on a more literal meaning, and 'use client' will have been one of the keys that opened the door.
Next.js 15 App Router — Architecture and Sequence Flow
Overview of Server vs. Client Components
The Next.js App Router leverages React Server Components (RSC) by default for improved performance. By default, all `page` and `layout` files are Server Components, meaning they render on the server and their code is not sent to the client. Client Components (marked with the 'use client' directive) are used only when interactivity, state, or browser APIs are needed. In practice, a Server Component can include and import a Client Component (to add interactive parts), but not vice versa. This allows a single page's component tree to interleave server-rendered UI with interactive client-side widgets. By pushing as much UI as possible into Server Components, Next.js reduces the amount of JavaScript that must hydrate on the client, improving performance.
Key Characteristics:
- Server Components: Render on the server only (never in the browser). They can safely access databases and secrets, and perform data fetching with `await` (e.g. `await fetch(...)`) directly in the component code. They are never hydrated on the client and do not include React state or event handlers (they output static HTML).
- Client Components: Render on the server and then hydrate on the client. These are needed for any stateful or interactive UI (hooks like `useState`, event handlers, browser-only APIs like `window` or `localStorage`). A file with `'use client'` at the top is treated, along with all of its imports, as part of the client-side bundle. During the initial page load, Client Components are still server-side rendered to HTML (for faster first paint), but afterwards their JS code runs in the browser to handle interactivity.
Composition: Server and Client Components can be mixed. For example, a Server Component page might import a `<Navbar>` that is mostly server-rendered but includes a `<SearchBar>` marked as a Client Component for interactivity. React will render the Server Components to an RSC Payload (a serialized representation), including placeholders for any Client Components. The Client Component's actual HTML will be injected on the client side during hydration. This lets the heavy lifting (data fetching, markup generation) occur on the server, while interactive pieces are added on the client without a full re-render of the entire page.
App Router Structure: Layouts, Templates, and Pages
Next.js organizes routes in the `/app` directory using nested folders. Each folder can contain special files that define the UI for that route segment. The primary ones are: Layout, Template, and Page.
- Layouts (`layout.tsx`): A Layout is a wrapper UI that persists across pages. Layouts can be defined at any route segment and apply to all pages under that segment. On navigation, layouts do not unmount or re-run; they preserve state and remain interactive without re-rendering. This means if you have stateful Client Components in a layout (e.g. a sidebar or header), they won't reset when moving between child pages. Layout files are hierarchical: a child segment's layout is nested inside its parent layout. The top-level `app/layout.tsx` is the Root Layout (required, must include the `<html>` and `<body>` tags) wrapping the entire app.
- Templates (`template.tsx`): A Template is very similar to a layout in structure (it wraps child segments), but does not persist state across navigations. Instead, a template re-mounts afresh for each navigation, even if you stay in the same segment. In effect, it's a "re-rendered layout" used when you want certain parent UI to reset or run again on each page change. For example, use a Template for an animation, or to reset scroll position or state whenever the user navigates between sibling pages. According to Next.js conventions, template files are rendered after layouts and before the page component, and a new instance is created when navigating between pages that share that template. (In contrast, layouts always precede templates and remain mounted.) Use-case tip: use Layouts by default; use a Template only if you need to reset state or re-run effects on navigation.
- Pages (`page.tsx`): A Page is the leaf component for a route; it defines the content for a specific URL. Pages are always Server Components (unless explicitly made client) and can be asynchronous (e.g. to `await fetch` data). They are rendered as children of the nearest Layout/Template wrappers. Each folder typically contains a single `page.tsx` (except for dynamic routes). The page component's output is what ultimately gets rendered inside all the surrounding layouts/templates for that route.
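A minimal template sketch may make the layout/template distinction concrete (the path and the logging side effect are illustrative): unlike a layout, this component remounts on every navigation within its segment, so the effect below re-runs each time.

```tsx
// app/dashboard/template.tsx: remounts on each navigation under /dashboard.
'use client';

import { useEffect } from "react";
import type { ReactNode } from "react";

export default function Template({ children }: { children: ReactNode }) {
  useEffect(() => {
    // Re-runs on every page change in this segment; a layout would run it once.
    console.log("entered a /dashboard page");
  }, []);
  return <div>{children}</div>;
}
```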
How they compose: When a user navigates to a URL, Next.js matches a chain of layouts/templates down to the page. For example, consider a route /dashboard/profile with the following structure:
```
app/
├─ layout.tsx        (Root Layout: e.g. site chrome)
└─ dashboard/
   ├─ layout.tsx     (Dashboard Layout: persists for all /dashboard/* pages)
   ├─ template.tsx   (Dashboard Template: re-renders on each navigation under /dashboard)
   ├─ page.tsx       (Dashboard index page, e.g. /dashboard)
   └─ profile/
      └─ page.tsx    (Profile Page, at route /dashboard/profile)
```
When loading /dashboard/profile, Next.js will:
1. Render `app/layout.tsx` (root layout) at the top,
2. Inside it, render `app/dashboard/layout.tsx`,
3. Then render `app/dashboard/template.tsx` (its output wrapping the page),
4. Finally render the `app/dashboard/profile/page.tsx` content inside those wrappers.
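The nesting above amounts to function composition. As a rough sketch, with plain strings standing in for each file's rendered JSX (the markup is purely illustrative):

```ts
// Each route segment wraps the output of the segment below it.
const rootLayout = (children: string) => `<html><body>${children}</body></html>`;
const dashboardLayout = (children: string) => `<aside>sidebar</aside>${children}`;
const dashboardTemplate = (children: string) => `<section>${children}</section>`;
const profilePage = () => `<h1>Profile</h1>`;

// Composed in the same order Next.js assembles /dashboard/profile.
const rendered = rootLayout(dashboardLayout(dashboardTemplate(profilePage())));
```

On navigation to a sibling page, only the inner calls would re-run: `rootLayout` and `dashboardLayout` keep their previous output (and state), while `dashboardTemplate` and the page are produced afresh.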
The Root Layout might include the global HTML structure and navigation; the Dashboard layout might include the sidebar that remains persistent; the Dashboard template could ensure that navigating between sub-pages (the Dashboard index and Profile) resets certain state; and the Profile page provides the main page content. Next.js does this assembly automatically based on the folder structure. Notably, layouts render in parallel with their pages, meaning the server doesn't wait for a parent layout to finish before rendering the page; this improves performance by avoiding sequential waterfalls.
Initial Page Load: Request-Response Sequence
On the first load of a page (or a direct visit to a URL), the sequence involves both server-side rendering and client-side hydration:
1. Browser -> Next.js Server: The browser requests a page (say `/dashboard/profile`). This request hits the Next.js server (Node.js or Edge runtime). The App Router locates the matching route segments and loads the corresponding components: all required Layout(s), Template(s), and the Page component for `/dashboard/profile`.
2. Server Rendering with RSC: Next.js performs an SSR render using React's server rendering pipeline. This happens in two phases:
   - Render to RSC Payload: React runs through the Server Components (layouts, page, and any nested Server Components) to produce a React Server Component Payload (RSC payload). The RSC payload is a compact serialized format containing the rendered output of server components, plus placeholders for Client Components and references to their JS bundles. Essentially, it's a description of the UI: HTML for server-rendered parts, and instructions for where client-rendered parts go.
   - Generate HTML: Next.js then uses the RSC payload along with the known client component boundaries to assemble the HTML for the response. Server Components' output becomes HTML content, whereas each Client Component is left as a lightweight placeholder (often an empty container or loading hint) in the HTML. This HTML can be streamed to the browser (enabled by React Suspense boundaries), allowing the user to see partially rendered content sooner, without waiting for all data to finish. At this stage, the server also includes scripts/tags to send the RSC payload to the client (for example, in a `<script type="application/json">` tag or streamed over a network channel) so that client-side React can pick it up.
3. Server Response: The Next.js server returns the initial HTML along with the RSC payload (and references to the JS bundles for any client components). The HTML already contains the fully rendered UI of all Server Components (text, markup, etc.), so the user sees meaningful content on first paint. This HTML, however, is non-interactive at first: any buttons or client-side UI controls won't yet respond.
4. Browser Rendering & Static Content: The browser receives and parses the HTML. Immediately, it can display the server-rendered content. This gives a fast First Contentful Paint, since no client-side code is needed yet to show the UI. At this point, the page looks complete but isn't wired up to React on the client.
5. Hydration Phase (Browser): In the background, the Next.js client runtime (hydration script) takes over. It loads the JavaScript for any Client Components that were included on the page (as referenced in the RSC payload). React on the client uses the RSC payload to reconcile the Server and Client Component trees, injecting the actual Client Component UI and state into the DOM where the placeholders were. Then hydration attaches event handlers and reactivates the interactive parts. Essentially:
- The RSC payload tells React what the server output was for each component, so React can create a virtual DOM tree matching it.
- For each Client Component boundary, React loads its JS module and hydrates it: attaching its event listeners, initializing state, etc. (using ReactDOM.hydrateRoot). Hydration makes the previously static HTML "live".
All of this happens concurrently: while some Client Components hydrate, other parts of the page (Server Components) are already usable as static content, and any remaining streaming content can continue to load. React's concurrency and Suspense allow hydration to be interleaved with any late-arriving chunks of the stream.
6. Interactive Page: Once hydration completes, the page is fully interactive. The user can now click buttons, use forms, open menus, etc. The initial load is now essentially a hydrated React app in the browser. Importantly, any purely server-rendered parts of the UI (Server Components without client logic) remain static DOM; they don't incur additional JS overhead on the client beyond what's needed to stitch them into React's tree. Only the designated Client Components carry a client-side cost.
Client-Side Navigation Flow (App Router)
After the initial load, navigating between pages is typically done via client-side transitions (using <Link> or router APIs) to avoid full page reloads. The Next.js App Router handles these subsequent navigations efficiently:
- When the user clicks a Next <Link> to another route (e.g. from /dashboard to /dashboard/profile), the browser does not perform a traditional page refresh. Instead, the Next.js client intercepts the click and triggers a fetch to the server for the new route's data. Specifically, Next will request the RSC payload for the new route (this is often an HTTP call to an internal endpoint that returns the React Server Component payload for that page).
- The Next.js Router on the client keeps a cache of previously fetched RSC payloads (the Router Cache). If the new route was preloaded or visited before, its RSC payload might already be cached, enabling near-instant navigation. (Next.js by default prefetches routes in the background when a <Link> is in the viewport, caching their RSC payloads.)
- The server generates the RSC payload for the new page (just like on initial load, but usually without needing to resend full HTML). This payload describes the portions of the UI that change. Because layouts can persist, Next.js will reuse any parent layout components that are common between the current page and the next page, and only fetch/render the segments that differ. For example, navigating between /dashboard and /dashboard/profile uses the same app/layout.tsx and app/dashboard/layout.tsx; those layouts stay mounted on the client. The server may only need to send the RSC payload for the profile/page.tsx content (and maybe a template, if present).
- The browser receives the new RSC payload. React then merges the new server-rendered content into the existing DOM by computing the differences between the current UI and the new one, based on the RSC payload. React updates the DOM to reflect the new page, injecting, updating, or removing elements as needed. Crucially, this happens without unloading the JavaScript environment: the React app remains running, so any state in persistent layouts or already-mounted client components is preserved.
Any new Client Components required by the navigation will be loaded and hydrated as part of this process. Since no full page reload occurred, the already-mounted Client Components in parent layouts remain live (they do not re-mount). Any Client Components that are no longer needed (from the previous page) will be unmounted, and new ones will be initialized.
The result is a seamless SPA-like transition. Next.js also supports streaming in new content during navigation: you can use React Suspense boundaries with loading.tsx in the App Router to show a fallback UI while waiting for the new content to load. The RSC payload can stream, so pieces of the new page can progressively fill in. This provides a smooth UX for navigation, even if some data is slightly delayed.
In summary, subsequent navigations fetch and apply an RSC payload instead of a full document, using cached data when possible. According to the Next.js docs: *"On subsequent navigations, the RSC Payload is prefetched and cached for instant navigation, and Client Components are rendered entirely on the client, without the server-rendered HTML."* This means that after the first load, pages update via client-side React rather than a full SSR roundtrip (though the server still provides fresh data through RSC). The App Router intelligently preserves layout state (thanks to layouts not unmounting) and only changes what's necessary, enabling fast transitions.
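The Router Cache behavior described above can be sketched as a simple map from routes to payloads, filled either by prefetch (when a link enters the viewport) or on first visit. This is a toy model under stated assumptions (payloads are plain strings, fetching is synchronous); the real cache also handles staleness and invalidation.

```typescript
// Toy sketch of the Router Cache idea: RSC payloads keyed by route.
type RSCPayload = string; // stand-in for the serialized payload

class RouterCache {
  private cache = new Map<string, RSCPayload>();

  // Prefetching stores the payload before the user navigates.
  prefetch(route: string, fetchPayload: (r: string) => RSCPayload): void {
    if (!this.cache.has(route)) this.cache.set(route, fetchPayload(route));
  }

  // Navigation reuses the cached payload when present ("instant"
  // transition) and falls back to a fetch otherwise.
  navigate(
    route: string,
    fetchPayload: (r: string) => RSCPayload
  ): { payload: RSCPayload; fromCache: boolean } {
    const hit = this.cache.get(route);
    if (hit !== undefined) return { payload: hit, fromCache: true };
    const payload = fetchPayload(route);
    this.cache.set(route, payload);
    return { payload, fromCache: false };
  }
}
```

A prefetched route resolves from the cache without touching the "server"; an unvisited route triggers a fetch and is cached for next time.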
Data Fetching and Built-In Optimizations
Next.js v15 provides powerful built-in data fetching mechanisms that integrate with the RSC architecture:
- Async Server Components with fetch: In the App Router, you can fetch data directly inside a Server Component by making the component async and using the Web fetch() API (or any async call). For example, you can await fetch('https://...') at the top of a page.tsx component to retrieve data on the server. This removes the need for separate data fetching methods (like getServerSideProps); data is co-located with the component. Next.js extends the fetch API to improve performance: by default, fetch calls are automatically cached and deduplicated during the rendering process. If the same request URL is called multiple times in a single request (say, in a layout and a page), Next.js performs it once and reuses the result, avoiding duplicate work. This is often called request memoization; React handles it under the hood for GET requests. By default, Next.js will cache fetch responses when possible (during build for static pages, or in-memory on the server between requests if not opted out). You can customize this with fetch options:
fetch(url, { cache: 'force-cache' }): uses the Next.js Data Cache; serves cached data if available (and fresh), or fetches and then caches it.
fetch(url, { cache: 'no-store' }): always fetches fresh data (no caching).
fetch(url, { next: { revalidate: 10 } }): sets a time-based revalidation (stale-while-revalidate) in seconds. This controls how long a cached response is considered fresh. Setting revalidate: 0 is equivalent to no-store (no caching).
By using these options (and depending on the environment, development vs production), Next.js allows both static caching (during build) and dynamic data fetching where needed. The default for most Server Components is to statically pre-render and cache the data, unless you mark the route dynamic. (Good to know: in development, caching is usually disabled so that fetch always runs, to ease debugging.)
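The freshness decision behind `next: { revalidate: n }` can be sketched as a pure function. This models only the semantics described above (a cached response counts as fresh for n seconds, and revalidate: 0 behaves like no-store); it is not Next.js's actual Data Cache implementation.

```typescript
// Minimal sketch of time-based revalidation (stale-while-revalidate)
// freshness semantics. CacheEntry and isFresh are illustrative names.
type CacheEntry<T> = { data: T; fetchedAt: number }; // fetchedAt in ms

function isFresh<T>(
  entry: CacheEntry<T>,
  revalidateSeconds: number,
  now: number
): boolean {
  if (revalidateSeconds === 0) return false; // revalidate: 0 ~ 'no-store'
  return (now - entry.fetchedAt) / 1000 < revalidateSeconds;
}
```

With revalidate: 10, a response fetched at t=0 is served from cache at t=5s but triggers a refetch at t=15s; with revalidate: 0, every request refetches.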
- React.cache() utility: Not all data fetching goes through fetch. If you are querying a database via an ORM, or using a third-party SDK (which might not use fetch under the hood), you can still benefit from deduplication. React provides a cache() function (in React 18+) to memoize any async function on a per-request basis. For instance, you can wrap your DB query function: const getProducts = cache(async () => db.findAll()). When called multiple times during the rendering of a single page, the cached version ensures the actual operation runs only once. Next.js recommends using fetch (which is auto-memoized) when possible, or cache() for custom data functions, to avoid duplicate data fetching in layouts and pages. This fits the App Router's pattern of fetching in multiple components without lifting all data up to a single loader: you simply fetch where needed and trust the framework to avoid unnecessary network calls.
- server-only and client-only: Next.js supports special packages to enforce separation of concerns. If you have a module that should only run on the server (e.g. it contains secret keys or Node-only code), you can import 'server-only' at the top of it. This causes a build error if that module is ever imported into a Client Component (preventing accidental leakage of server code to the client). Conversely, a 'client-only' package exists to mark modules that should only run in the browser (e.g. ones that rely on window). These safeguards help catch mistakes where you might import something like a database client into a Client Component. Next.js already automatically strips most server-only code from client bundles (e.g. process.env values without the NEXT_PUBLIC prefix are empty on the client), but using server-only gives an explicit guarantee and clearer error messages.
- React's use() Hook for Streaming: A newer addition as of React 18/Next 15 is the use() hook, which allows a Client Component to consume an async resource (like a Promise) directly during rendering. Next.js uses this to enable streaming SSR with partial hydration. The typical pattern is: a Server Component kicks off a data fetch without awaiting it, and passes the Promise down to a Client Component via a prop. The Client Component, being wrapped in a <Suspense> boundary, calls use(promise) to read the result of that async call. React suspends rendering of that Client Component until the promise resolves, allowing the server to stream the rest of the content and the client to show a fallback UI. Once the promise resolves (data is ready), the Client Component hydrates with the real data. This mechanism essentially lets you split data fetching: fetch on the server, but defer consuming it on the client at render time, which is useful when you need client-side interactivity with server-fetched data. It's an advanced technique, but it's built into the App Router (no need for state management libraries just to bridge server-to-client data).
- Progressive Hydration and Streaming: The App Router's architecture inherently supports streaming and selective hydration. Using <Suspense> boundaries and special files like loading.tsx, you can create skeleton UIs or spinners that show instantly while deeper parts of the page load data. Next.js streams HTML in chunks for each Suspense boundary that resolves, and hydrates components as they arrive. Hydration is also selective: only Client Components need hydration, and they can hydrate independently. For example, a slow, large Client Component can be wrapped in Suspense so that other interactive components on the page hydrate sooner, without waiting for the slow one. This fine-grained control is a direct benefit of RSC and the App Router.
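The per-request deduplication that React's cache() provides can be sketched as a small memoizer: the first call with a given argument runs the function, and repeated calls during the same request reuse the same promise. This is an illustrative approximation (memoizePerRequest is a made-up name); React's real cache() also keys by all arguments and is scoped automatically per server request.

```typescript
// Sketch of per-request deduplication in the spirit of React's cache().
function memoizePerRequest<A, R>(
  fn: (arg: A) => Promise<R>
): (arg: A) => Promise<R> {
  const inFlight = new Map<A, Promise<R>>();
  return (arg: A) => {
    const hit = inFlight.get(arg);
    if (hit) return hit; // reuse the pending/settled promise
    const p = fn(arg);
    inFlight.set(arg, p);
    return p;
  };
}

// Hypothetical usage: two components asking for the same user during
// one render share a single underlying query.
let queryCount = 0;
const getUser = memoizePerRequest(async (id: string) => {
  queryCount++; // pretend this is a DB round-trip
  return { id };
});
```

Calling getUser("1") twice and getUser("2") once performs only two underlying "queries"; the duplicate call returns the very same promise.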
Terminology Mapping
Next.js v15 App Router introduces a paradigm where the server is deeply involved in rendering React components, while the client takes on hydration and navigation responsibilities. The above architecture can be summarized by the flow of data and control:
Browser -> Server: Requests a route; Next.js server constructs the React tree (Layouts, Templates, Page) and renders to HTML + RSC payload.
Server -> Browser: Sends down HTML (UI markup) and RSC payload (serialized component tree data) in the response.
Browser (React client): Immediately displays HTML, then uses the RSC payload to load/hydrate Client Components and attach event handlers (hydration).
User Interaction / Subsequent Route Change: Next.js fetches new RSC payload (using built-in caching for speed) and updates the client-side React state, reusing persistent Layouts and rehydrating new parts. No full page reload occurs.
All of this is achieved with no third-party state libraries or client-side routers, just Next.js's built-in capabilities and React's advancements. By understanding the roles of Server Components vs Client Components, and how Layouts persist while Templates can reset state on navigation, you can architect a Next.js app that is both high-performance (minimal client JS, efficient data fetching) and highly dynamic (rich interactions via hydration). Next.js v15's App Router provides a robust foundation out of the box to handle routing, data fetching, caching, and rendering in a unified, developer-friendly way.
How to Client-Side-Render a Component in Next.js
It is easy to determine whether something in Next.js is client-side or server-side rendered. We will work with this simple component:
<>
<p>Hello World!</p>
</>
Using Next.js's default server-side rendering, the component's markup appears in the generated HTML: it is rendered on the server.
Using client-side rendering, by contrast, the initial HTML response does not contain it. The "Hello World!" still appears in the DOM eventually, but it takes some time because it is rendered through client-side JavaScript.
Here are 3 ways to achieve CSR in Next.js:
Method 1: Timing with useEffect
If you use Next.js's App Router, every component is a Server Component by default and can't use React hooks. Therefore, we declare it as a Client Component by stating "use client" at the top.
'use client'
import { useEffect, useState } from 'react'
export default function Index() {
const [isMounted, setIsMounted] = useState(false)
useEffect(() => {
setIsMounted(true)
}, [])
if (!isMounted) {
return <p>loading...</p>
}
return (
<>
<p>Hello World!</p>
</>
)
}
Method 2: Dynamic components
Using a dynamic import with ssr: false, a component can also be rendered only on the client side: Next.js skips it during server rendering and loads it in the browser once the wrapping component has rendered.
import dynamic from 'next/dynamic'
const HelloWorld = dynamic(() => import('../components/HelloWorld'), {
ssr: false,
})
export default function Index() {
return <HelloWorld />
}
This approach works in both the Pages Router and the App Router. Note that in the App Router, ssr: false can only be used inside a Client Component.
Method 3: Use the window object (Only Pages Router)
Hint: this approach doesn't work in the new App Router.
This trick is simple: when we render something on the server, the window object isn't available in our code, because window is exclusive to the browser.
Since we can check whether this object exists, we can detect whether we are on the server and save the result in a variable like this:
const SSR = typeof window === 'undefined'
SSR is true if server-side rendering of our JSX is happening. To only client-side render something, use this variable:
const SSR = typeof window === 'undefined'
export default function Index() {
return <>{!SSR ? <p>Hello World!</p> : null}</>
}
You must use middleware like this in Next.js
Middleware in Next.js is often overlooked, but it's a game-changer once you understand its potential. If you are not using it yet, you might be missing out on one of the most powerful tools Next.js has to offer.
What is middleware in Next.js?
Let's break down the essentials of middleware in Next.js:
It's simply a function - at its core, middleware is just a function.
Executed on the edge - middleware runs on the edge, closer to the user.
Runs for specified pages - you decide which pages it affects through middleware's configuration.
Executes before page load - middleware runs before the user receives the page.
Takes in the request - it accepts the page's request object as a parameter.
Most importantly, middleware doesn't change the page's rendering pattern: a static page stays static, and a dynamic page stays dynamic.
The problem
Imagine you have a "My Profile" page that should only be accessible to authenticated users. If someone isn't logged in, we want to redirect them to the Sign-In page.
How do we handle this? Checking for a token in local storage or a NextAuth session on the client side might seem straightforward, but here's the catch:
User experience suffers - an unauthenticated user has to wait for the page to load and the client-side code to execute before being redirected. During this delay, they see a loading state. Even authenticated users end up seeing this loading state unnecessarily, leading to a frustrating user experience.
Bundling and efficiency issues - client-side checks mean adding extra code to every private page. This can work in simple applications but becomes inefficient if additional libraries are required, resulting in a larger bundle size.
Alternatively, you might consider handling the session check on the server side. While this can streamline the process, it forces every protected page to become dynamic, which is a problem if that is not your goal.
Now, imagine if there were a way to detect user authentication as soon as they request the page and redirect them to the sign-in page without impacting the rendering pattern. That's exactly what middleware gives us.
Once I discovered the power of middleware, I immediately integrated it into my project. But before diving into the code, let me clarify a few key points. The NextAuth library conveniently places all relevant information from the token directly into the req object, allowing us to access it effortlessly via req.nextauth.token. This means I can extract not only authentication details but also user roles and account statuses.
Now, letâs take a look at how this all comes together:
import { withAuth } from "next-auth/middleware";
import { NextResponse } from "next/server";
const authRoutes = [AppRoutes.signIn, AppRoutes.signUp, AppRoutes.forgotPassword]
export default withAuth(
async function middleware(req) {
const user = req.nextauth.token?.user
const isSigned = Boolean(user)
const isResetPasswordRoute = req.nextUrl.pathname.startsWith(
AppRoutes.resetPassword("")
)
const isAuthRoute = authRoutes.includes(req.nextUrl.pathname) || isResetPasswordRoute
const isAdminRoute = req.nextUrl.pathname.startsWith("/admin")
const isSubscriptionPlansRoute = req.nextUrl.pathname === AppRoutes.subscriptionPlans
const isAdminUser = req.nextauth.token?.user?.role === USER_ROLE.ADMIN
const accountStatus = req.nextauth.token?.user?.accountStatus ?? ""
const isPaid = accountStatusesWithAppAccess.includes(accountStatus) || isAdminUser
const hasSubscription =
accountStatusesThatHasSubscription.includes(accountStatus) || isAdminUser
const isMustGoPlansRoute = !hasSubscription && isSigned
const isRenewSubscriptionRoute = req.nextUrl.pathname === AppRoutes.renewSubscription
const mustGoRenewSubscription = hasSubscription && !isPaid && isSigned
// if the user has a trial or subscription but it is paused for some reason, redirect to renew it
if (mustGoRenewSubscription && !isRenewSubscriptionRoute) {
return NextResponse.redirect(new URL(AppRoutes.renewSubscription, req.url))
}
// if the user shouldn't go to renew the subscription, restrict access to it
if (!mustGoRenewSubscription && isRenewSubscriptionRoute) {
const redirectRoute = isSigned ? AppRoutes.main : AppRoutes.signIn
return NextResponse.redirect(new URL(redirectRoute, req.url))
}
// if the user is signed in and has no subscription or trial, they must go and choose a plan
if (!isSubscriptionPlansRoute && isMustGoPlansRoute) {
return NextResponse.redirect(new URL(AppRoutes.subscriptionPlans, req.url))
}
// if the user shouldn't go to choose a plan, restrict access to it
if (isSubscriptionPlansRoute && !isMustGoPlansRoute) {
const redirectRoute = isSigned ? AppRoutes.main : AppRoutes.signIn
return NextResponse.redirect(new URL(redirectRoute, req.url))
}
// if a signed-in, paid user tries to visit an auth route, redirect to main
if (isAuthRoute && isSigned && isPaid) {
return NextResponse.redirect(new URL(AppRoutes.main, req.url))
}
// if a not-signed-in user tries to visit a protected route, redirect to sign-in
if (!isSigned && !isAuthRoute) {
return NextResponse.redirect(new URL(AppRoutes.signIn, req.url))
}
// if a non-admin user tries to visit an admin route, redirect to main or sign-in
if (isAdminRoute && !isAdminUser) {
const redirectRoute = isSigned ? AppRoutes.main : AppRoutes.signIn
return NextResponse.redirect(new URL(redirectRoute, req.url))
}
},
{
secret: envServer.NEXTAUTH_SECRET,
callbacks: {
authorized: () => true,
},
}
)
Uh-huh, that function seems a bit too large, doesn't it?
I'm well aware of that, and I recognize the need for improvement. Inspired by a solution I encountered in a Vue.js project, I set out to create something similar.
Advanced Approach
I thought about the most convenient way to decide whether the user can access a route. Several routes share the same condition; sometimes a route has the same condition as another, plus one extra check. Some conditions also have higher priority than others, so we need to handle that too.
Wouldn't it be great to define conditions for each route separately? Let's try to do this.
First of all, let's create the desired structure. I prefer an array of objects with routes and conditions, something like this:
[
{route: "/onboarding", condition: (token: JWT, url: string) => {}},
{route: "/profile", condition: (token: JWT, url: string) => {}}
]
OK, that shape works, but what should the condition return?
If the condition needs to redirect the user, it must return a redirect.
Otherwise, it returns null.
[
  {
    route: "/onboarding",
    condition: (token: JWT, url: string) => {
      // allow authenticated users through; redirect everyone else
      if (token?.user) return null;
      return NextResponse.redirect(new URL("/sign-in", url))
    }
  },
  {
    route: "/profile",
    condition: (token: JWT, url: string) => {
      if (token?.user) return null;
      return NextResponse.redirect(new URL("/sign-in", url))
    }
  },
]
Great! It works, but we have two issues here:
How are we gonna run it?
The same condition is duplicated.
Let's start with the first one.
import { NextResponse } from "next/server";
import type { NextRequestWithAuth } from "next-auth/middleware";
import type { JWT } from "next-auth/jwt";

export type RouteConfig = {
  route: string;
  condition: (token: JWT | null, url: string) => NextResponse<any> | null;
};

export function runRoutesMiddleware(
  req: NextRequestWithAuth,
  config: RouteConfig[]
): NextResponse<any> | null {
  const currentRouteConfig = config.find((route) =>
    matchPath(route.route, req.nextUrl.pathname)
  );
  if (!currentRouteConfig) return null;
  return currentRouteConfig.condition(req.nextauth.token, req.url);
}
export default withAuth(
async function middleware(req) {
return runRoutesMiddleware(req, routesRulesConfig);
},
{
secret: env.NEXTAUTH_SECRET,
callbacks: {
authorized: () => true,
},
}
);
Here, I use a small helper called matchPath. It's the same idea as the function in react-router: it returns a boolean indicating whether the URL matches the pattern, e.g. matchPath("/user/:id", "/user/123") -> true.
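For completeness, here is a minimal matchPath sketch in the spirit of the react-router helper described above. It handles only :param segments; wildcards, optional segments, and query strings are out of scope.

```typescript
// Minimal pattern matcher: ":id"-style segments match anything,
// static segments must match exactly, and lengths must agree.
function matchPath(pattern: string, pathname: string): boolean {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = pathname.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return false;
  return patternParts.every(
    (part, i) => part.startsWith(":") || part === pathParts[i]
  );
}
```

So matchPath("/user/:id", "/user/123") is true, while matchPath("/user/:id", "/user/123/settings") is false because the segment counts differ.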
Let's solve the duplicated-condition issue by separating the condition from the config. We also need reusable functions that let us compose complex checks from small pieces. Let's call such a small function a "Rule". It has the same shape we saw before for condition:
type Rule = (token: JWT | null, url: string) => NextResponse<any> | null;
We need a function that calls these rules in order until one of them returns a response. Here it is:
export function executeRules(
rules: Rule[],
url: string,
token: JWT,
ruleIndex: number = 0
): ReturnType<Rule> | void {
if (ruleIndex > (rules?.length || 0) - 1) return;
const result = rules?.[ruleIndex]?.(token, url);
if (!result) {
return executeRules(rules, url, token, ruleIndex + 1);
} else {
return result;
}
}
And let's adjust our runRoutesMiddleware function:
export function runRoutesMiddleware(
  req: NextRequestWithAuth,
  config: RouteConfig[]
): NextResponse<any> | void {
  const currentRouteConfig = config.find((route) =>
    matchPath(route.route, req.nextUrl.pathname)
  );
  if (!currentRouteConfig) return;
  return executeRules(
    currentRouteConfig.rules,
    req.url,
    req.nextauth.token
  );
}
Let's create our rule:
export const isAuthenticatedRule: Rule = (token, url) => {
  // authenticated users pass through; everyone else goes to sign-in
  if (token?.user) return null;
  return NextResponse.redirect(new URL("/sign-in", url))
}
Then use it in our config
[
{
route: "/onboarding",
rules: [isAuthenticatedRule]
},
{
route: "/profile",
rules: [isAuthenticatedRule]
},
]
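To see the whole flow end to end, here is a self-contained sketch of the rule-chaining idea. To make it runnable anywhere, NextResponse.redirect is replaced by plain strings, and a hypothetical isAdminRule is added for illustration. Array order encodes priority: the first rule that produces a response wins.

```typescript
// Simplified stand-ins for the real types used in the article.
type Token = { user?: { role?: string } } | null;
type Rule = (token: Token, url: string) => string | null;

const isAuthenticatedRule: Rule = (token) =>
  token?.user ? null : "redirect:/sign-in";

// Hypothetical extra rule: only admins may pass.
const isAdminRule: Rule = (token) =>
  token?.user?.role === "ADMIN" ? null : "redirect:/";

// Run rules in order; the first non-null result short-circuits.
function executeRules(rules: Rule[], token: Token, url: string): string | null {
  for (const rule of rules) {
    const result = rule(token, url);
    if (result) return result;
  }
  return null; // no rule objected: the request may proceed
}

const profileRules: Rule[] = [isAuthenticatedRule];
const adminRules: Rule[] = [isAuthenticatedRule, isAdminRule];
```

An anonymous visitor to /admin is caught by the auth rule first; a signed-in non-admin passes it but is caught by the admin rule; an admin passes both.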
That's it! Now we have a very simple flow for creating, editing, deleting, and reprioritising our rules for every page. Sounds good, doesn't it?
Next.js Server Actions Lessons Learned
I have been building web apps for years, and one thing that has always been a pain is managing the messy back-and-forth between the client and the server. Next.js made things easier, especially with server-side rendering, but it still felt like there was a missing piece. Then came Server Actions. When I first heard about them, I was skeptical: "Server code directly in React components? Sounds like a recipe for disaster". But after using them for a few projects, I'm a convert. This section is my brain dump on Server Actions: how they work, why they matter, and when they can be useful (and when they might be more trouble than they are worth).
What are Next.js Server Actions?
So, what exactly are these Server Actions? Basically, they are a way to write server-side code (the stuff that used to live in separate API routes) right inside your React components. Instead of creating a separate file for server interactions or business logic, you can now put that logic directly where it is used.
The secret sauce is the "use server" directive. Think of it as a tag that tells Next.js, "run this code on the server". You can tag an entire file or just specific functions. Here's a quick example.
// app/actions.ts
"use server";
export async function addItemToCart(itemId: string, quantity: number) {
// This runs on the server
console.log(`Adding ${quantity} of item ${itemId} to the cart...`);
}
Now, if you've used Next.js before, you might be thinking, "Wait, isn't that what API routes are for?" And yes, API routes have been the traditional way to handle server stuff. But Server Actions are different. They are more tightly integrated with your components. Instead of separate files and a bunch of fetch calls, you can have the logic right there, next to your UI.
Of course, Server Actions aren't meant to replace API routes entirely. If you are building a public API or need to talk to external services, API routes are still the way to go. But for those common tasks that are deeply tied to your UI, especially data mutations, Server Actions can be a game-changer. They are like specialized tools in your toolbox, not a replacement for the whole toolbox.
How do Next.js Server Actions Work Under the Hood?
Understanding the underlying mechanisms is key to using them effectively and, of course, to debugging them when things go wrong.
First up, that "use server" directive. As we touched upon, it's your way of telling Next.js which code should run on the server. You can either put it at the top of a file, which makes every exported function in that file a Server Action, or you can add it to individual functions. Generally, it's cleaner to keep Server Actions in dedicated files; it makes things more organized. Here is an example of a file with multiple Server Actions:
// app/actions/products.ts
"use server";
export async function addProduct(data: ProductData) {
// ... runs on the server
}
export async function deleteProduct(productId: string) {
// ... also runs on the server
}
Now, when you call a Server Action from a client component, it's not just a regular function call. Next.js does some magic behind the scenes; it's like an RPC (Remote Procedure Call). Here's the breakdown: your client code calls the Server Action function. Next.js serializes the arguments you passed, converting them into a format that can be sent over the network. Then a POST request is fired off to a special Next.js endpoint, with the serialized data and some extra info to identify the Server Action. The server receives the request, figures out which Server Action to run, deserializes the arguments, and executes the code. The server then serializes the return value and sends it back to the client. The client receives the response, deserializes it, and (this is the cool part) automatically re-renders the relevant parts of your UI.
The serialization part is where things get interesting. We're not just dealing with simple strings and numbers here. What if you need to pass a Date object or a Map? Next.js handles the serialization and deserialization for you. Here is an example to demonstrate that:
// app/actions/data.ts
"use server";
export async function processData(date: Date, data: Map<string, string>) {
console.log("Date:", date); // Correctly receives the Date object
console.log("Data:", data); // Correctly receives the Map object
return { updated: true };
}
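To see why the framework's serializer matters, note that a naive JSON-based RPC layer could not round-trip those arguments: Dates collapse to strings and Map entries are lost entirely. The jsonRoundTrip helper below is illustrative, not part of Next.js.

```typescript
// A naive serializer: what a JSON.stringify-based RPC layer would do.
function jsonRoundTrip(value: unknown): any {
  return JSON.parse(JSON.stringify(value));
}

const out = jsonRoundTrip({
  date: new Date(0),
  data: new Map([["k", "v"]]),
});
// out.date is now the string "1970-01-01T00:00:00.000Z", not a Date,
// and out.data is an empty object: the Map entries are gone.
```

This is exactly the gap that Server Actions' richer serialization closes: the processData example above receives a real Date and a real Map on the server.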
Server Actions are tightly integrated with React's rendering. For instance, you can hook a Server Action directly to a form submission using the action attribute; Next.js handles all the messy details for you. Like this:
// app/components/MyForm.tsx
"use client"
import { myServerAction } from '@/app/actions';
export default function MyForm() {
return (
<form action={myServerAction}>
{/* Form fields */}
<button type="submit">Submit</button>
</form>
);
}
Or, if you want more control, just call the Server Action from an event handler:
"use client";

import { myServerAction } from '@/app/actions';

export default function MyComponent() {
  const handleClick = async () => {
    const result = await myServerAction();
    // Handle the result
  };
  return <button onClick={handleClick}>Click Me</button>;
}
And the best part? After the Server Action completes, Next.js automatically re-renders the parts of your UI that might have changed because of it. No more manually fetching data or updating state after a mutation. It also degrades gracefully: if the user doesn't have JavaScript enabled, or it hasn't loaded yet, forms wired to Server Actions submit as regular HTML forms. Once JS is available, Next.js progressively enhances them.
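This progressive-enhancement behavior pairs nicely with React's form hooks. Here is a minimal sketch (assuming a hypothetical `subscribe` Server Action with the signature `(prevState, formData) => Promise<{ message: string }>`) that surfaces pending and result state while still degrading to a plain HTML form:

```typescript
// app/components/SignupForm.tsx — a sketch; `subscribe` is a hypothetical Server Action
"use client";

import { useActionState } from "react"; // React 19+; older setups use useFormState from "react-dom"
import { subscribe } from "@/app/actions";

export default function SignupForm() {
  // Wires the action to the form and tracks its pending state and latest result
  const [state, formAction, pending] = useActionState(subscribe, { message: "" });

  return (
    // Without JavaScript this submits as a plain HTML POST; once JS loads,
    // Next.js intercepts the submission and invokes the action as an RPC.
    <form action={formAction}>
      <input name="email" type="email" required />
      <button type="submit" disabled={pending}>
        {pending ? "Submitting..." : "Subscribe"}
      </button>
      {state.message && <p>{state.message}</p>}
    </form>
  );
}
```

The same form works before hydration completes; `useActionState` simply takes over once the client bundle arrives.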
Now, Server Actions aren't a magic bullet, and I've run into a few gotchas, which we'll get to later. But they do streamline a lot of the tedious work involved in client-server communication.
Why Do Server Actions Matter in the Current Landscape?
Let's be real, the world of web development is constantly throwing new things at us. So, why should we care about Server Actions? Here's the deal: building modern web apps is complicated. We want these rich, interactive experiences, but managing the communication between the client and server can be a real pain. We often end up spending more time on the plumbing (API routes, data fetching, state management) than on the actual features users care about.
Server Actions tackle this problem head-on. By letting us put server-side code right in our React components, they drastically simplify things. Think about it: no more separate API route files, no more manually fetching data after a mutation. Your code becomes more concise and easier to follow, especially for smaller teams or solo developers. I've found that on smaller projects, Server Actions have cut down development time significantly.
And it's not just about convenience. Server Actions can also boost performance. By reducing those back-and-forth trips between the client and server, especially for things like updating data, we can make our apps feel snappier. Fewer network requests mean faster loading times, and that's a win for user experience. Plus, they play nicely with Next.js's caching features, so you can optimize things even further.
Security is another big win. With Server Actions, sensitive operations (database queries, API calls with secret keys, etc.) stay on the server. That's a huge relief in today's world of increasing security threats. Also, they are always invoked via a POST request.
Server Actions are also part of a bigger trend. Full-stack frameworks like Next.js are blurring the lines between frontend and backend. Server Actions are a natural step in that direction, letting developers handle more of the application lifecycle without needing to be a backend guru. This doesn't mean specialized roles are going away, but it does mean that full-stack developers can be more efficient and productive.
Now, I'm not saying Server Actions are perfect or that they should replace every other way of doing things. But they do offer a powerful new approach, especially for data-heavy applications. They're a significant step forward for Next.js and, in my opinion, for full-stack development in general.
The Caveats and Criticisms of Server Actions: A Reality Check
Like any technology, they have their downsides, and it's important to go in with eyes wide open. I've learned a few things the hard way, and I'm here to share them.
One of the biggest criticisms is the potential for tight coupling. When your server-side code lives right inside your components, it's easy to end up with a less modular, harder-to-maintain codebase. Changes to your backend logic might force you to update your frontend, and vice versa. For complex projects or teams that need a strict separation of concerns, this can be a real problem. You need to be disciplined and organized to prevent your codebase from becoming a tangled mess.
Then there's the learning curve. While the basic idea of Server Actions is simple, mastering all the nuances (serialization, caching, error handling) takes time. You need to really understand the difference between client and server code execution and how to structure your actions for optimal performance and security. The mental model is different, and it takes some getting used to.
Debugging can also be a pain. When something goes wrong in a Server Action, you can't just rely on your trusty browser dev tools. You'll need to get comfortable with server-side debugging techniques: logging, tracing, and so on. Next.js has improved its error messages, but it's still more complex than debugging client-side code.
Performance is generally a plus with Server Actions, but if you overuse them, you can actually make things worse. Every Server Action call is a network request. Too many requests and your app will feel sluggish. Next.js's caching helps, but you need to be strategic about it. They're great for handling data mutations but might not be ideal for complex queries or aggregations.
Finally, there's the issue of vendor lock-in. Server Actions are a Next.js thing. If you decide to move away from Next.js in the future, you'll have to rewrite all your Server Actions. That's something to consider, especially if you're worried about long-term flexibility.
So, are Server Actions worth it despite these drawbacks? In my opinion, yes, but they're not a magic solution. You need to use them thoughtfully and understand their limitations. They're a powerful tool, but like any tool, they can be misused. They are best used for data mutations and operations that are tightly coupled to your UI and need to be on the server.
Real-World Example: Add to Cart
Let's see how Server Actions can be applied in a real-world scenario. Imagine we're building an e-commerce platform, and we need a feature to add products to a shopping cart. Here's how we could implement it using a Server Action, incorporating some crucial best practices along the way.
// app/actions.ts
"use server";

import { db } from "@/lib/db"; // Your database client
import { revalidatePath } from "next/cache";

export async function addItemToCart(userId: string, productId: string, quantity: number) {
  try {
    // Input validation
    if (!userId || !productId || quantity <= 0) {
      throw new Error("Invalid input data");
    }

    // Check for product existence
    const product = await db.product.findUnique({
      where: { id: productId },
    });
    if (!product) {
      throw new Error("Product not found");
    }

    // Handle the cart item
    const existingCartItem = await db.cartItem.findFirst({
      where: { userId, productId },
    });
    if (existingCartItem) {
      await db.cartItem.update({
        where: { id: existingCartItem.id },
        data: { quantity: existingCartItem.quantity + quantity },
      });
    } else {
      await db.cartItem.create({
        data: { userId, productId, quantity },
      });
    }

    // Cache revalidation to reflect the changes on the pages
    revalidatePath(`/products/${productId}`);
    revalidatePath(`/cart`);

    return { success: true, message: "Item added to cart" };
  } catch (error) {
    console.error("Error adding item to cart:", error);
    // Handle errors gracefully
    return { success: false, message: "Failed to add item to cart" };
  }
}
// app/components/AddToCartButton.tsx
"use client";

import { useState } from "react";
import { addItemToCart } from "@/app/actions";
import { useSession } from "next-auth/react";

export default function AddToCartButton({ productId }: { productId: string }) {
  const { data: session } = useSession();
  const [loading, setLoading] = useState(false);
  const [message, setMessage] = useState("");

  const handleClick = async () => {
    // Guard against a missing session before calling the action
    if (!session?.user?.id) {
      setMessage("Please sign in first");
      return;
    }
    setLoading(true);
    setMessage("");
    // Call the Server Action, passing data and handling the result
    const result = await addItemToCart(session.user.id, productId, 1);
    setLoading(false);
    if (result.success) {
      setMessage(result.message);
      // or other side effects
    } else {
      setMessage("Error adding item to cart");
    }
  };

  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        {loading ? "Adding..." : "Add to Cart"}
      </button>
      {message && <p>{message}</p>}
    </div>
  );
}
This example demonstrates a few key best practices:
Input Validation: The Server Action validates the input to prevent errors and security vulnerabilities.
Error Handling: The try...catch block ensures that errors are handled gracefully and informative messages are returned to the client.
Database Interaction: We use a hypothetical database client (db) to interact with the database. In a real app, you'd likely use an ORM like Prisma.
Cache Revalidation: We use revalidatePath to keep the product and cart pages up-to-date.
UI Logic Separation: The AddToCartButton component handles the UI and user interactions, keeping the Server Action focused on data and server-side logic.
This streamlined example showcases how Server Actions can simplify common e-commerce tasks while adhering to essential best practices. Remember to modularize your actions, keep UI logic separate, and always validate user inputs. While this provides a good starting point, more complex scenarios might require more sophisticated error handling, caching strategies, and database interactions.
https://medium.com/@sureshdotariya/next-js-15-mastery-series-part-2-server-actions-a7939ca5514e
When To Use React Query With Next.js Server Components
React Server Components have revolutionized how we think about data fetching in React applications. But what happens when you want to use React Query alongside server components? Should we always combine them? The answer might surprise you.
With React now running on both client and server, developers are grappling with how traditional client-side libraries like React Query fit into this new paradigm. The reality is more nuanced than simply "use both everywhere".
Setting up React Query for Server Components
The foundation of using React Query with server components lies in proper setup. Here's a key pattern:
The Query Client Factory
import { isServer, QueryClient } from "@tanstack/react-query";

function makeQueryClient() {
  return new QueryClient({
    defaultOptions: {
      queries: {
        staleTime: 60 * 1000, // 1 minute
      },
    },
  });
}

let browserQueryClient: QueryClient | undefined = undefined;

function getQueryClient() {
  if (isServer) {
    // Always create a new query client on the server
    return makeQueryClient();
  } else {
    // Create a singleton on the client
    if (!browserQueryClient) {
      browserQueryClient = makeQueryClient();
    }
    return browserQueryClient;
  }
}
Why This Pattern Matters
The server-client distinction is crucial:
Server: Always create a new query client instance for each request to avoid data leakage between users.
Client: Maintain a singleton to persist across component re-renders and Suspense boundaries.
This pattern is especially important in Next.js, where the layout component is wrapped in a Suspense boundary behind the scenes. Without the singleton pattern, you would lose your client instance every time a component suspends.
The Server-Client Data Flow
Here's how data flows from server to client:
1. Server Component (Prefetching)
// posts/page.tsx - Server Component
export default async function PostsPage() {
  const queryClient = getQueryClient();

  // Prefetch data on the server
  await queryClient.prefetchQuery({
    queryKey: ['posts'],
    queryFn: getPosts,
  });

  return (
    <HydrationBoundary state={dehydrate(queryClient)}>
      <PostsClient />
    </HydrationBoundary>
  );
}
2. Client Component (Consumption)
// PostsClient.tsx - Client Component
'use client';

export default function PostsClient() {
  const { data: posts } = useQuery({
    queryKey: ['posts'],
    queryFn: getPosts,
  });

  return (
    <div>
      {posts?.map(post => (
        <div key={post.id}>{post.title}</div>
      ))}
    </div>
  );
}
3. The Hydration Bridge
The HydrationBoundary
component bridges the server and client by:
Dehydrating the query client state on the server
Rehydrating it on the client
Making prefetched data immediately available
This Is Actually Good!
Data is prefetched on the server
Hydrated to the client without additional network requests
The user sees data immediately without loading states
This is exactly what makes React Query with Next.js so powerful: you get server-side rendering with client-side cache management, giving you the best of both worlds for performance and user experience.
What Not To Do
A common mistake is fetching data in server components and trying to use it directly:
// ❌ Don't do this
export default async function PostsPage() {
  const queryClient = getQueryClient();
  const posts = await queryClient.fetchQuery({
    queryKey: ['posts'],
    queryFn: getPosts,
  });
  return <PostsClient posts={posts} />;
}
Why this breaks: Server components don't re-render. If the client-side query cache gets invalidated or updated, your server component will display stale data, creating UI inconsistencies.
Why doesn't the client refresh data on the first load?
When using React Query with Next.js Server Components, something important happens behind the scenes during the initial page load.
First, your server component fetches the data ahead of time using React Query's prefetchQuery. This means the server already has the data ready before sending the page to the browser.
Then, using the <HydrationBoundary>, this prefetched data is passed down to the client, so React Query on the client side starts with a fully populated cache.
Because the data is already available and considered fresh, React Query doesn't make a new network request when the page loads in the browser. It simply reads from the cache. This improves performance and avoids unnecessary data fetching.
However, if you change a filter or the query becomes stale, React Query will then fetch new data as needed.
This setup allows you to:
Render data instantly on first load
Avoid duplicate fetching
Keep the client-side declarative (using useQuery)
Maintain a clean separation between server and client responsibilities
In short, the server does the heavy lifting up front, and the client reuses that work efficiently.
When to Use React Query with Server Components
React Query with server components makes sense when you need:
1. Client-Specific Features
Infinite Queries for pagination and infinite scroll:
// Server prefetch
await queryClient.prefetchInfiniteQuery({
  queryKey: ['posts'],
  queryFn: ({ pageParam = 1 }) => getPosts(pageParam),
});

// Client usage
const { data, fetchNextPage, hasNextPage } = useInfiniteQuery({
  queryKey: ['posts'],
  queryFn: ({ pageParam = 1 }) => getPosts(pageParam),
  getNextPageParam: (lastPage) => lastPage.nextPage,
});
2. Real-time Updates
When you need optimistic updates, cache invalidation, or real-time synchronization across components.
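Optimistic updates are a good illustration of why the client-side cache earns its keep. A sketch of the standard TanStack Query pattern, assuming a hypothetical `likePost` mutation function and a `["post", postId]` cache entry with a `likes` count:

```typescript
// A sketch of an optimistic "like" update using TanStack Query v5
"use client";

import { useMutation, useQueryClient } from "@tanstack/react-query";
import { likePost } from "@/app/actions"; // hypothetical mutation function

export function useLikePost(postId: string) {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: () => likePost(postId),
    onMutate: async () => {
      // Cancel in-flight refetches so they don't clobber our optimistic write
      await queryClient.cancelQueries({ queryKey: ["post", postId] });
      const previous = queryClient.getQueryData(["post", postId]);
      // Optimistically bump the like count in the cache
      queryClient.setQueryData(["post", postId], (old: any) =>
        old ? { ...old, likes: old.likes + 1 } : old
      );
      return { previous };
    },
    onError: (_err, _vars, context) => {
      // Roll back to the snapshot if the server rejects the mutation
      queryClient.setQueryData(["post", postId], context?.previous);
    },
    onSettled: () => {
      // Re-sync with the server either way
      queryClient.invalidateQueries({ queryKey: ["post", postId] });
    },
  });
}
```

This rollback-on-error dance is exactly the kind of cache choreography pure server components cannot do.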
3. Complex State Management
For applications requiring sophisticated caching strategies, background refetching, or retry logic.
When to Skip React Query
Often, you're better off with pure server components:
// Simple and effective
export default async function PostsPage() {
  const posts = await getPosts();

  return (
    <div>
      {posts.map(post => (
        <PostCard key={post.id} post={post} />
      ))}
    </div>
  );
}
This approach offers:
Better performance: No JavaScript bundle for data fetching
Simpler architecture: Fewer moving parts
Better SEO: Content rendered on the server
Faster initial load: No client-side fetching delay
Making the Right Choice
Consider these questions:
Do you need client-side interactivity like infinite scroll, real-time updates, or optimistic mutations?
Is your data relatively static or does it change frequently?
Do you need complex caching strategies or is simple server-side fetching sufficient?
Are you building a highly interactive app or primarily displaying content?
Best Practices
1. Start Simple
Begin with pure server components. Add React Query only when you need client-specific features.
2. Use Appropriate Stale Times
Set longer stale times when prefetching to avoid immediate refetches:
defaultOptions: {
  queries: {
    staleTime: 60 * 1000, // Prevent immediate refetch after prefetch
  },
}
3. Separate Concerns
Keep your client components unaware of server prefetching. They should work independently.
4. Consider Bundle Size
React Query adds to your JavaScript bundle. Ensure the benefits outweigh the costs.
Real-world use cases
Example 1: Basic Query â Filter-based query caching
This example demonstrates React Queryâs fundamental capabilities with Next.js App Router. It showcases:
Server-side prefetching that hydrates the client cache on initial load
Filter-based query keys that maintain separate cache entries per filter
Automatic cache invalidation after the configured stale time (60 seconds)
Clean separation between server and client components
The UI displays a collection of shoes that can be filtered by category. When users switch filters, you can observe React Query's intelligent caching: it only fetches new data when needed, while serving cached data instantly.
Example 2: Infinite Query â Infinite scroll with pagination
This example showcases React Query's advanced infinite scrolling capabilities. Key features include:
Implementation of useInfiniteQuery for paginated data loading
Automatic loading of the next pages as the user scrolls to the bottom
Server-side prefetching of the initial data page with prefetchInfiniteQuery
Cursor-based pagination handling a dataset of 100 items in batches of 10
Intersection Observer integration for detecting when to load more data
The UI demonstrates a real-world infinite scroll implementation with loading indicators and smooth state transitions, all while maintaining the benefits of React Query's caching system.
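The load-more trigger described in Example 2 can be sketched as a small hook, assuming the `useInfiniteQuery` wiring shown earlier (`fetchNextPage` and `hasNextPage` come from that hook):

```typescript
// A sketch of an Intersection Observer hook that triggers the next page fetch
"use client";

import { useEffect, useRef } from "react";

export function useLoadMoreOnScroll(
  fetchNextPage: () => void,
  hasNextPage: boolean | undefined
) {
  const sentinelRef = useRef<HTMLDivElement | null>(null);

  useEffect(() => {
    const el = sentinelRef.current;
    if (!el) return;
    // Fire fetchNextPage whenever the sentinel element scrolls into view
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting && hasNextPage) fetchNextPage();
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, [fetchNextPage, hasNextPage]);

  return sentinelRef; // attach to an empty <div> at the bottom of the list
}
```

Attaching the returned ref to a sentinel div at the end of the list gives you infinite scroll without a scroll-event listener.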
These examples together provide a comprehensive look at how React Query integrates with Next.js to solve common data fetching challenges.
When Should I Use Server Action (Next.js 14)
When I started experimenting with the new Next.js Server Actions, I quickly learned to love them. They felt like a clean and efficient solution for most of my server requests.
But as I played with them and used them more, I started noticing I was introducing performance issues, unnecessary round trips between the client and the server, and even some dangerous API security issues.
I quickly learned that Server Actions weren't always the most efficient choice, especially for straightforward data fetching.
So, let's dive in. In the next few paragraphs, I will explain the different use cases for Server Actions and show you how they can be a game-changer in your Next.js app, especially when used in the right context.
What are Server Actions
Server Actions (soon to be renamed Server Functions in React 19) are a relatively new feature introduced by React to simplify data handling and move more logic to the server side. Next.js quickly incorporated its own twist in Next.js 14, where they complement Server Components well.
Server Actions were introduced to simplify server-side mutations (e.g., form submissions or database writes) in the Next.js App Router. They enable an RPC-like mechanism: when you call a Server Action from a client component, Next.js serializes the call and sends a POST request under the hood to execute that function on the server, then returns the result to your app. This is powerful for things like creating a new record or processing a form without setting up a separate API route. After the action runs, Next.js can automatically re-render affected UI parts for you.
Crucially, though, Server Actions were designed for mutations, not queries. The React team explicitly notes that Server Actions are designed for mutations that update server-side state; they are not recommended for data fetching. In fact, Next.js documentation reiterates that data fetching should primarily happen in Server Components, whereas Server Actions are not intended for data fetching but for mutations. Using them purely to read data breaks the intended separation of concerns.
In the following lines, we will focus only on the Next.js implementation:
When to use Server Actions
Event handling / Performing Database Mutations
Server actions allow you to perform server operations and database mutations securely without exposing database logic or credentials to the client. They drastically reduce and simplify your code because they remove the need to write a specific API route for your operations.
'use server';

export async function handleServerEvent(eventData: any) {
  // Process any server event
  const res = await someAsyncOperation(eventData);
  if (!res.ok) {
    throw new Error('Failed to handle event');
  }
  return { res, message: 'Server event handled successfully' };
}
Handling form submissions
Similar to the first point, Server Actions are particularly useful for processing form inputs that need server-side handling. They provide a straightforward way to handle form data on the server, ensuring data validation and integrity without exposing the logic to the client and without having to implement elaborate API endpoints.
'use server';

export async function handleFormSubmit(formData: FormData) {
  const name = formData.get('name') as string;
  const email = formData.get('email') as string;
  const message = formData.get('message') as string;

  // Process the form data
  const res = await saveToDatabase({ name, email, message });
  if (!res.ok) {
    throw new Error('Failed to process the form data');
  }
  return { res, message: 'Form submitted successfully' };
}
Fetching data from client components
Server Actions can also be useful for quick data fetching where a clean developer experience (DX) is crucial. They can simplify the fetch process by tying data access directly to a component, without the need for intermediary API layers.
// Simple server action to fetch data from an API
'use server';

export async function fetchData() {
  const res = await fetch('https://api.example.com/data');
  if (!res.ok) {
    throw new Error('Failed to fetch data');
  }
  return res.json();
}
Working with Next.js and its Server Components, you already have a very practical way of using server-side code to fetch data and pre-render the page on the server. But Server Actions now introduce a brand-new way to also do that from your client-side components. They tie data access directly to the component that needs it, without useEffect hooks or client-side data-fetching libraries. Moreover, when using TypeScript, typing is seamless because everything is within the same function boundary, providing a great developer experience overall.
Potential Pitfalls with Server Actions
Donât use server actions from your server-side components
The simplicity and great DX of Server Actions could make it tempting to use them everywhere, including from a server-side component, and it would work! However, it doesn't really make any sense. Indeed, since your code is already running on the server, you already have the means to fetch anything you need and provide it as props to your page. Using Server Actions here would delay data availability as it causes extra network requests.
For client-side fetching, Server Actions might also not be the best option. First of all, they always use POST requests, which cannot be cached automatically the way a GET request can. Secondly, if your app needs advanced client-side caching and state management, tools like TanStack Query (React Query) or SWR are going to be far more effective. I haven't tested it myself yet, but it's apparently possible to combine both and use TanStack Query to call your Server Actions directly.
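For the curious, that combination might look something like the sketch below; `updateProfile` is a hypothetical Server Action, and from the client's point of view it is just an async function, so it can serve as a `mutationFn` directly:

```typescript
// A sketch of driving a Server Action through TanStack Query v5
"use client";

import { useMutation } from "@tanstack/react-query";
import { updateProfile } from "@/app/actions"; // hypothetical Server Action

export function ProfileForm() {
  // The Server Action becomes the mutation function; TanStack Query
  // then provides pending state, retries, and cache invalidation hooks
  const mutation = useMutation({ mutationFn: updateProfile });

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        mutation.mutate(new FormData(e.currentTarget));
      }}
    >
      <input name="displayName" />
      <button type="submit" disabled={mutation.isPending}>
        {mutation.isPending ? "Saving..." : "Save"}
      </button>
    </form>
  );
}
```

Treat this as a pattern sketch, not a tested recipe: the call still goes through the Server Action's POST endpoint under the hood.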
Server Actions Do Not Hide Requests
Be extremely careful when using server actions for sensitive data. Server Actions do not hide or secure your API requests. Even though Server Actions handle server-side logic, under the hood, they are just another API route, and POST requests are handled automatically by Next.js.
Anyone can replicate them by using a REST client, making it essential to validate each request and authenticate users appropriately. If there is sensitive logic involved, ensure you have proper authentication and authorization checks within your Server Actions.
Note: additionally, consider using the very popular next-safe-action package, which can help secure your actions and also provides type safety.
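A minimal sketch of that guard pattern, assuming a hypothetical `getSession()` helper from your auth library:

```typescript
// app/actions/account.ts — a sketch; getSession() is a hypothetical auth helper
"use server";

import { getSession } from "@/lib/auth";

export async function deleteAccount(accountId: string) {
  // Treat every invocation as an untrusted HTTP request:
  // authenticate and authorize before touching any data
  const session = await getSession();
  if (!session?.user) {
    throw new Error("Unauthorized");
  }
  if (session.user.accountId !== accountId) {
    throw new Error("Forbidden");
  }
  // ...safe to perform the deletion here...
  return { success: true };
}
```

The key point is that the checks live inside the action itself, not in the UI that calls it.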
Every Action Adds Server Load
Using Server Actions might feel convenient, but every action comes at a cost. The more you offload onto the server, the greater the demand for server resources. You may inadvertently increase your app's latency and cloud cost by using Server Actions when client-side processing would suffice. Lightweight operations that could easily run on the client, like formatting dates, sorting data, or managing small UI state transitions, should stay on the client side to keep your server load minimal.
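As a rule of thumb, anything that is pure computation over data the client already has should stay a plain client-side function rather than a Server Action. For example:

```javascript
// Pure transforms like these need no server round trip — keep them client-side.
function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

function sortByName(products) {
  // Copy first so we don't mutate React state in place
  return [...products].sort((a, b) => a.name.localeCompare(b.name));
}

console.log(formatPrice(1999)); // → "$19.99"
console.log(sortByName([{ name: "Zip" }, { name: "Ant" }]).map((p) => p.name)); // → ["Ant", "Zip"]
```

Wrapping either of these in a Server Action would turn a microsecond of work into a full network request.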
Classic API Routes Might Be More Appropriate
There are cases when sticking with traditional API routes makes more sense, particularly when you need your API to be accessible to multiple clients. Imagine you need the same logic for both your web app and a mobile app: duplicating the same Server Action logic into an API route will only double the work and maintenance. In these situations, having a centralized API route that all clients can call is a better solution, as it avoids redundancy and ensures consistency across your different clients.
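In App Router terms, that shared endpoint is just a route handler; a sketch (names are illustrative) that web, mobile, and any other client can call:

```typescript
// app/api/cart/route.ts — a sketch of a shared endpoint for multiple clients
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { productId, quantity } = await request.json();

  // The same validation a Server Action would contain
  if (!productId || !quantity || quantity <= 0) {
    return NextResponse.json({ error: "Invalid input" }, { status: 400 });
  }

  // ...shared database logic lives here, written once for all clients...
  return NextResponse.json({ success: true });
}
```

A mobile app hits this with a plain HTTP POST, while the web app can call it with fetch; neither needs Next.js-specific machinery.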
Next.js Dependency and the Moving Target
It's important to note that Server Actions are closely integrated with Next.js, and both Next.js and React are evolving rapidly. This pace of development can introduce compatibility issues or breaking changes as these frameworks continue to update. If your application prioritizes stability and long-term support, relying heavily on cutting-edge features like Server Actions could result in unwanted technical debt. Weighing the stability of traditional, well-established methods against the appeal of new features is always advisable.
5 Next.js Image Pitfalls That Hurt Performance
Last month, I watched my e-commerce product page go from a decent load time to a painful 4.2 seconds after implementing Next.js Image across our dynamic product catalog. Our bounce rate spiked, customers abandoned carts, and I realized something brutal: the "Image Component" was destroying my site's performance. After weeks of debugging slow load times, failed builds, and frustrated users, I discovered that sometimes the optimization creates more problems than it solves.
While working on the e-commerce website using Next.js 14, I learned the hard way that the Next.js Image Component isn't always the performance hero it's meant to be. After switching to targeted alternatives for our dynamic images from third-party APIs, I cut our product pages' load times from 4.2 seconds to 1.8 seconds, a 57% improvement that saved our conversion rate.
Don't get me wrong: the Next.js Image Component is powerful. It automatically handles lazy loading, WebP conversion, responsive sizing, and more. But if you are dealing with dynamic image sources, high-volume image pages, or mysterious performance bottlenecks, this section might save you weeks of headaches like it did for me.
Understanding Next.js Image Component (And Its Hidden Cost)
The Next.js Image Component promises seamless image optimizations through features like:
Automatic format conversion (WebP, AVIF)
Lazy loading with Intersection Observer
Responsive sizing with srcset generation
Blur placeholder support
Priority loading for above-the-fold images
Under the hood, Next.js Image relies on either Vercel's Image Optimization API (in production) or the Sharp library (for local development and self-hosted deployments). This means every image gets processed on demand, which sounds great in theory.
The reality? On-demand optimization can become a performance bottleneck, especially when dealing with unpredictable image sources or high-traffic scenarios. Here's when that "optimization" starts working against you.
5 Key Scenarios When You Should Not Use Next.js Image
1. Dynamic Image Sources from Third-Party APIs
The Problem: This is the big one that hit my e-commerce site hard. When your images come from external APIs with unpredictable URLs, domains, or formats, Next.js Image becomes a liability rather than an asset.
Real-World Example: Our product catalog pulls images from a supplier's API. These images come from random CDN domains, vary in size and format, and change frequently. Here's what happens with Next.js Image.
// This creates a bottleneck
<Image
  src={`https://random-supplier-cdn-${Math.random()}.com/product-${id}.jpg`}
  alt="Product image"
  width={500}
  height={500}
/>
What Goes Wrong:
Domain Configuration Nightmare: Next.js 14 requires pre-configuring image domains in next.config.js. With dynamic third-party sources, this becomes impossible to maintain.
Runtime Processing Delays: Each new image URL triggers real-time optimization, adding 2-3 seconds to initial load times.
404 Errors: Unconfigured domains default to broken images or fallback behavior.
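One mitigation worth knowing about: `images.remotePatterns` (available since Next.js 12.3) accepts wildcard hostnames, which eases the maintenance burden somewhat, though it still cannot cover truly arbitrary domains. The hostname below is illustrative:

```javascript
// next.config.js — remotePatterns supports wildcards, unlike the older `domains` array
module.exports = {
  images: {
    remotePatterns: [
      // "**" matches any number of subdomains; port and pathname can also be constrained
      { protocol: 'https', hostname: '**.supplier-cdn.com' },
    ],
  },
};
```

This helps when your suppliers share a predictable domain family; it does not help when URLs are genuinely unpredictable.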
The Better Solution:
// Direct img tag with external CDN optimization
<img
  src={`https://your-cdn.com/optimize?url=${encodeURIComponent(dynamicImageUrl)}&w=500&q=80`}
  alt="Product image"
  loading="lazy"
  style={{ width: '100%', height: 'auto' }}
/>
Performance Impact: After switching to direct <img>
tags with Cloudinary URL transformations, our product page load times dropped from 4.2 seconds to 1.8 seconds.
2. High-Volume Image Pages (Product Grids, Galleries)
The Problem: Pages displaying dozens or hundreds of images simultaneously can overwhelm Next.js Image's optimization pipeline.
Real-World Example: Our category pages show 48 products per page. With Next.js Image, the browser attempts to optimize multiple images simultaneously, creating a processing queue that delays the entire page render.
// This kills performance on high-volume pages
{products.map(product => (
  <Image
    key={product.id}
    src={product.imageUrl}
    alt={product.name}
    width={300}
    height={300}
  />
))}
What Goes Wrong:
Server Overload: Each image optimization request consumes server resources
Concurrent Processing Limits: Most hosting platforms limit simultaneous image processing
Waterfall Loading: Images load sequentially rather than in parallel
Cost Implications: Vercel's Image Optimization API charges per optimization request
The Better Solution:
// Pre-optimized images with native lazy loading
{products.map(product => (
  <img
    key={product.id}
    src={`${product.optimizedImageUrl}?w=300&h=300&fit=crop`}
    alt={product.name}
    loading="lazy"
    style={{ aspectRatio: '1/1', objectFit: 'cover' }}
  />
))}
Performance Impact: Moving to pre-optimized CDN images reduced our category page Largest Contentful Paint (LCP) from 3.4s to 1.2s.
3. Unsupported or Unpredictable Image Domains
The Problem: Next.js Image's security model requires explicit domain configuration, but modern applications often work with dynamic or user-generated content from unknown sources.
Real-World Example: Our platform allows users to upload images or import from social media. These images come from countless domains we can't predict or pre-configure.
// next.config.js becomes unmaintainable
module.exports = {
  images: {
    domains: [
      'cdn1.example.com',
      'cdn2.example.com',
      'user-uploads.s3.amazonaws.com',
      'instagram.com',
      'facebook.com',
      // ... hundreds more?
    ],
  },
}
What Goes Wrong:
Maintenance Hell: Constantly updating domain configurations
Security Concerns: Wildcard domains create vulnerabilities
Build Failures: Invalid or inaccessible domains break deployments
User Experience: Broken images when domains aren't configured
The Better Solution:
// Proxy through your own domain or use unrestricted img tags
const ProxiedImage = ({ src, alt, ...props }) => {
  const proxiedSrc = `/api/image-proxy?url=${encodeURIComponent(src)}`;
  return <img src={proxiedSrc} alt={alt} loading="lazy" {...props} />;
};
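The proxy endpoint itself can be a simple route handler. A minimal sketch, with no allow-listing or upstream caching, both of which a production version would need to avoid becoming an open proxy:

```typescript
// app/api/image-proxy/route.ts — a sketch; add a domain allow-list before shipping
export async function GET(request: Request) {
  const url = new URL(request.url).searchParams.get("url");
  if (!url) {
    return new Response("Missing url", { status: 400 });
  }

  // Fetch the remote image server-side and stream it back from our own origin
  const upstream = await fetch(url);
  if (!upstream.ok) {
    return new Response("Upstream error", { status: 502 });
  }

  return new Response(upstream.body, {
    headers: {
      "Content-Type": upstream.headers.get("Content-Type") ?? "image/jpeg",
      "Cache-Control": "public, max-age=86400",
    },
  });
}
```

Because every image now appears to come from your own origin, the domain-configuration problem disappears entirely.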
4. Custom Optimization Requirements
The Problem: Next.js Image applies default optimization settings that might not suit specialized use cases like high-resolution product zoom, medical imaging, or artistic portfolios.
Real-World Example: Our drop-shipping e-commerce site needs pixel-perfect product zoom functionality. Customers expect to see every detail, but Next.js Image's default quality setting of 75 destroys the fine details.
// Default optimization destroys image quality
<Image
src="/diamond-ring-4k.jpg"
alt="Diamond ring detail"
width={2000}
height={2000}
quality={75} // Not enough for detailed zoom
/>
What Goes Wrong:
Quality Loss: Default compression settings reduce image fidelity
Limited Control: Fewer customization options compared to dedicated image services
Format Restrictions: Automatic format conversion might not preserve quality
Size Limitations: Processing very large images can timeout or fail
The Better Solution:
// Custom optimization pipeline for high-quality zoom
const HighQualityImage = ({ src, alt }) => {
return (
<img
src={`https://your-cdn.com/transform?url=${src}&q=95&f=auto&w=2000`}
alt={alt}
style={{ maxWidth: '100%', height: 'auto' }}
/>
);
};
5. Build-Time and SSG Issues
The Problem: Static Site Generation (SSG) with Next.js Image can fail when image URLs are unavailable at build time or when dealing with large image sets.
Real-World Example: Our product catalog generates static pages for 10,000+ products. During build time, some third-party image URLs become temporarily unavailable, causing the entire build to fail.
// This can break SSG builds
export async function getStaticProps({ params }) {
const product = await fetchProduct(params.id);
return {
props: {
product: {
...product,
image: product.dynamicImageUrl // Might be unavailable at build time
}
}
};
}
What Goes Wrong:
Build Failures: Invalid URLs cause build processes to crash
Increased Build Times: Image optimization during build significantly slows deployment
Memory Issues: Processing many large images can exhaust build server memory
Inconsistent Results: Some images optimize successfully while others fail
The Better Solution:
// Defer image loading to the client side
import { useEffect, useState } from 'react';

const ProductImage = ({ src, alt }) => {
const [imageSrc, setImageSrc] = useState('/placeholder.jpg');
useEffect(() => {
// Validate and set image source on client-side
const img = new Image();
img.onload = () => setImageSrc(src);
img.onerror = () => setImageSrc('/fallback.jpg');
img.src = src;
}, [src]);
return <img src={imageSrc} alt={alt} loading="lazy" />;
};
Best Practices for Next.js Image Alternatives
When you decide to skip Next.js Image, here are proven alternatives that maintain performance:
1. Native Lazy Loading
Modern browsers support native lazy loading:
<img src="/image.jpg" alt="Description" loading="lazy" />
2. CDN-Based Optimization
Use services like Cloudinary, Imgix, or ImageKit:
const optimizedUrl = `https://res.cloudinary.com/your-cloud/image/fetch/w_500,q_auto,f_auto/${originalUrl}`;
3. Custom Hook for Image Loading
const useOptimizedImage = (src, options = {}) => {
const { width = 500, quality = 80 } = options;
return `https://your-cdn.com/transform?url=${encodeURIComponent(src)}&w=${width}&q=${quality}`;
};
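Because this "hook" is a pure function of its inputs (it uses no React state), you can sanity-check the URLs it builds directly; the your-cdn.com endpoint and its query parameters are placeholders, not a real service:

```javascript
// Plain-function version of the hook above, usable outside React too.
// The CDN endpoint and parameter names (url, w, q) are placeholders.
const buildOptimizedImageUrl = (src, options = {}) => {
  const { width = 500, quality = 80 } = options;
  return `https://your-cdn.com/transform?url=${encodeURIComponent(src)}&w=${width}&q=${quality}`;
};

// Example:
// buildOptimizedImageUrl('https://example.com/photo.jpg', { width: 300 })
// → 'https://your-cdn.com/transform?url=https%3A%2F%2Fexample.com%2Fphoto.jpg&w=300&q=80'
```

Keeping URL construction in one small function like this makes it trivial to swap CDN providers later: only the template string changes, not every component.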
4. Intersection Observer for Custom Lazy Loading
import { useEffect, useRef, useState } from 'react';

const LazyImage = ({ src, alt, ...props }) => {
const [isLoaded, setIsLoaded] = useState(false);
const [isInView, setIsInView] = useState(false);
const imgRef = useRef();
useEffect(() => {
const observer = new IntersectionObserver(
([entry]) => {
if (entry.isIntersecting) {
setIsInView(true);
observer.disconnect();
}
},
{ threshold: 0.1 }
);
if (imgRef.current) {
observer.observe(imgRef.current);
}
return () => observer.disconnect();
}, []);
return (
<div ref={imgRef} {...props}>
{isInView && (
<img
src={src}
alt={alt}
onLoad={() => setIsLoaded(true)}
style={{
opacity: isLoaded ? 1 : 0,
transition: 'opacity 0.3s ease'
}}
/>
)}
</div>
);
};
When You SHOULD Still Use Next.js Image
To be fair, Next.js Image is excellent for:
Static images in your /public directory
Known, pre-configured domains with predictable image sources
Low-traffic sites where optimization delays aren't noticeable
Simple blogs or portfolios with minimal image complexity
When you need automatic blur placeholders and built-in lazy loading
The Next.js Image component is a powerful tool, but it's not a silver bullet. After months of real-world testing on a high-traffic e-commerce site, I've learned that premature optimization can be worse than no optimization.
Before reaching for Next.js Image, ask yourself:
Are my image sources predictable and configurable?
Do I need real-time optimization, or can I pre-optimize?
Will the optimization delay impact user experience?
Am I dealing with high-volume image scenarios?
Do I have custom quality or format requirements?
If you answered "NO" to the first question or "YES" to any of the others, consider alternatives. Sometimes a simple <img> tag with proper CDN optimization delivers better performance with less complexity.
How I Made a Next.js App Load 10x Faster
Set proper HTTP Headers for Caching
Caching is essential for improving performance and reducing server load. When a browser caches static assets (like JavaScript, CSS, and images), it can reuse those files instead of downloading them again on every visit. This significantly speeds up page loads for returning users.
For example, by setting a Cache-Control header like public, max-age=31536000, immutable, you tell the browser to cache the file for one year and not check for updates. This works well for assets that don't change frequently, such as fonts, logos, or versioned build files.
You can configure caching headers in next.config.js using the async headers() function. This ensures consistent behavior across all your static files and can boost your app's performance, especially on repeat visits.
module.exports = {
async headers() {
return [
{
source: '/_next/static/:path*', // scope to hashed build assets, not HTML
headers: [
{ key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
],
},
];
},
};
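Note that a blanket immutable header is only safe for fingerprinted (content-hashed) assets; HTML must stay revalidatable or users can get stuck on stale pages for a year. A sketch of the usual split; the exact values below are common conventions, not Next.js defaults:

```javascript
// Pick a Cache-Control value based on what kind of asset is requested.
// Hashed build output (/_next/static) can be cached forever; HTML cannot.
function cacheControlFor(path) {
  if (path.startsWith('/_next/static/')) {
    // Filenames include a content hash, so the content at a URL never changes.
    return 'public, max-age=31536000, immutable';
  }
  if (/\.(js|css|woff2|png|jpg|webp|avif|svg)$/.test(path)) {
    // Other static files: cache for a day, then revalidate.
    return 'public, max-age=86400';
  }
  // HTML and everything else: always revalidate with the server.
  return 'public, max-age=0, must-revalidate';
}
```

You can mirror this logic in the headers() config by using separate entries with different source matchers instead of one catch-all rule.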
Embrace Server Components to minimize client-side JavaScript
One of the biggest features introduced in Next.js 13 (and fully embraced in Next.js 15) is React Server Components (RSCs). RSCs let you render a component on the server and send pure HTML to the client, without shipping any JavaScript for it to the browser. The result? Drastically reduced JS payloads and faster load times. In other words, components that don't need interactivity can be delivered already "cooked" from the server; the browser just displays the HTML without any hydration or JavaScript bundles.
Why it helps: Every bit of JavaScript the browser doesn't have to download and execute makes the page load faster and respond quicker. By keeping as much as possible on the server, you reduce the bundle size and improve startup time. As React expert Josh Comeau notes, "Server Components don't get included in the bundle size, which reduces the amount of JavaScript that needs to run, leading to better performance." In practice, we have seen a project where moving heavy data processing from the client to the server cut the client-side bundle to about 30% of its original size, significantly improving load time.
How to use it: In the Next.js App Router, components are Server Components by default (unless you add the "use client" directive). Design your pages so that any component that doesn't require browser interactivity (displaying data, static content, etc.) remains a Server Component. Fetch data directly in these components (e.g. using fetch() in an async component) and render the content on the server. For parts that do need interactivity (e.g., a button click handler or dynamic state), isolate them into small Client Components with "use client" at the top. This way, you send minimal JS to the client (only the code needed for interactive elements) while the bulk of the UI arrives as static HTML.
Before/After example: Imagine a dashboard page that originally loads all data through client-side JavaScript and renders a big React bundle in the browser. Users see a loading spinner for a couple of seconds while the browser fetches data. By refactoring to use Server Components, the HTML page now arrives pre-filled with data (no big client fetch needed), and the JS bundle size drops by 50%. Time to First Byte (TTFB) improves because the server fetches data in parallel, and First Input Delay (FID) improves since the browser has less JavaScript to execute before handling interactions. Users now see the content instantly and can interact with it within milliseconds of page load.
Stream and Selectively Hydrate for Instant Interaction
Delivering HTML fast is only half of the story; we also want users to interact with the page as soon as possible, without waiting for all JavaScript to finish loading. This is where React 18's streaming SSR and selective hydration come into play. Next.js 15 (powered by React 18+) can stream HTML in chunks and hydrate portions of the UI incrementally, rather than blocking on the entire page. In practice, that means your app can show useful content sooner and become interactive faster, improving metrics like TTFB and FID.
With streaming server-side rendering, the server doesn't wait for the entire page before sending HTML. Instead, it sends each piece of the page as soon as its data is ready (often gated by React <Suspense> boundaries). Meanwhile, selective hydration ensures that once those HTML chunks arrive, they can hydrate independently. If the user interacts with a part of the page that hasn't hydrated yet, React prioritizes hydrating that part first; no more waiting for the entire app to hydrate before any interactions work.
Why it helps: Traditional SSR would send the full HTML, but the page stayed non-interactive until all the JavaScript was downloaded and all components were hydrated in one go (an "all-or-nothing" hydration). In large apps that could mean a long delay, poor FID, and a frustrating user experience. With streaming and selective hydration, smaller components can hydrate and become interactive immediately without waiting for larger, slower parts. This greatly improves First Contentful Paint (FCP) and FID. React 18's architecture is a game-changer: "HTML streaming sends partial HTML as it's ready, and selective hydration means you can start interacting with the app before all components have hydrated."
How to use it: Next.js handles a lot of this automatically under the hood, but you should structure your app to take advantage of it. Use React's <Suspense> boundaries to wrap parts of the UI that can load asynchronously. For example, you might Suspense-wrap a product reviews section or user-specific info. Provide a lightweight fallback (like a spinner or placeholder) that can be rendered immediately. Next.js will stream the page HTML with the fallback in place of the slow section and hydrate the rest. Once the data for that section is ready, the server streams its HTML, and React swaps in the real content and hydrates it. The key is: split your UI into bite-sized chunks and use Suspense for anything that might delay rendering. This enables progressive hydration.
Real-world example: The Next.js team introduced Partial Prerendering in v15, which combines streaming and caching strategies. For instance, a dashboard page can prerender static metrics at build time (so they show up instantly) and stream in user-specific live data (like recent activity) when ready. The static parts are interactive immediately, and the dynamic part hydrates once loaded. This approach "significantly improves TTFB and LCP" by getting useful content on screen fast. In one experiment, enabling streaming SSR with Suspense cut the Largest Contentful Paint from ~3s to ~1.5s, because the largest element (a hero image and headline) was sent immediately and not held up by slower widgets. Users could also click navigation links almost right away, whereas before they had to wait for the entire app bundle to load. The page felt instant.
Optimize Images For Faster Loads (Use Next/Image)
High-quality images are often the largest part of a page, so optimizing them is one of the most impactful things you can do to improve load times and LCP. Next.js provides a built-in solution: the <Image> component (next/image), which handles a ton of image optimization work for you automatically. By using Next.js's Image component for all your images, you get out-of-the-box performance benefits:
Responsive sizing: It will automatically serve the right size image for each device, generating multiple versions and using modern formats like WebP. This avoids sending a huge 2000px-wide image to a mobile device that only needs 400px, saving bandwidth and time.
Lazy loading: Images are by default lazy-loaded (only fetched when about to scroll into view), which dramatically reduces initial load time. Studies show that loading images on scroll instead of all upfront can decrease initial page load time by up to 60%.
Preventing layout shift: Next/Image fixes a common culprit of bad CLS by requiring width/height (or using intrinsic sizes) so the browser knows the image's space in advance. No more content jumping around when images load; it's handled.
Modern formats & compression: Next.js will automatically convert and serve images in newer formats like WebP/AVIF when supported, often 30% smaller than JPEG/PNG for the same quality. It also allows setting quality to balance size vs clarity. All of this means faster loads for users.
Blur-up placeholders: You can enable a low-res blurred placeholder that shows while the image loads, giving an immediate preview and improving perceived performance. This is a nice touch to avoid blank spaces.
In short, Next.js's image optimization delivers smaller, smarter images and defers non-critical ones, boosting your LCP and overall performance. As the docs put it, the Next/Image component provides "visual stability (no layout shift) and faster page loads by only loading images when they enter the viewport, with optional blur placeholders". And you hardly have to do a thing; just use <Image src={...} width={...} height={...} /> instead of a raw <img> tag.
Pro Tips:
Always specify dimensions (width and height) for your images, or use fill for responsive layouts. This ensures Next can reserve space to avoid CLS. If you import a static image, Next will auto-populate its intrinsic width/height for you.
Use priority for above-the-fold images (like hero banners or the logo) to load them ASAP, and let less important images stay lazy. This improves the Largest Contentful Paint if that image is the LCP element.
Leverage the sizes attribute on <Image> for responsive layouts. This helps the browser choose the optimal image variant (e.g. serve a smaller image on mobile).
If you have many icons or small graphics, consider SVG, icon fonts, or CSS sprites to reduce HTTP requests. But for photos and complex imagery, Next/Image is your best friend.
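The sizes value is just a string of media conditions, so a small helper can keep it consistent across components. A sketch; the breakpoints and helper name here are arbitrary examples, not a Next.js API:

```javascript
// Build a `sizes` attribute string from breakpoint → width pairs.
// Entries are { maxWidth: px, size: CSS length }; breakpoints are examples.
function buildSizes(entries, fallback = '100vw') {
  const parts = entries.map(
    ({ maxWidth, size }) => `(max-width: ${maxWidth}px) ${size}`
  );
  // The fallback applies when no media condition matches (largest screens).
  return [...parts, fallback].join(', ');
}

// Example:
// buildSizes([{ maxWidth: 640, size: '100vw' }, { maxWidth: 1024, size: '50vw' }], '33vw')
// → '(max-width: 640px) 100vw, (max-width: 1024px) 50vw, 33vw'
```

The resulting string can be passed straight to the sizes prop of <Image>, and reusing one helper keeps your breakpoints aligned with your CSS.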
Before/After example: A travel blog homepage was struggling with an LCP of 4.0 seconds due to multiple large image thumbnails loading at once. By switching those <img> tags to Next <Image> with proper sizes and enabling lazy loading, the initial payload dropped by several MB. The LCP element (the hero image) improved to 1.8s (well under the 2.5s "good" threshold), and overall page weight went down ~70%. The CLS issues caused by images suddenly popping in were eliminated (CLS went from 0.25 to near 0). The site felt snappy, and images still looked great; they were just delivered in a smarter way.
Optimize Fonts and Prevent Layout Shifts (Use Next/Font)
Custom web fonts can subtly slow down your site and even cause layout jank (text shifting or appearing late). In Next.js 15, you have a powerful tool to optimize fonts: the next/font module. This utility automatically handles font-loading best practices, including self-hosting fonts, preloading them, and controlling how they swap, all to boost performance.
When you use next/font (either with Google Fonts or your custom fonts), Next.js will:
Inline critical font CSS and remove external network requests to font providers. This means no more waiting on Google Fonts servers; the font files are served from your own site, often faster and more reliably.
Preload the fonts and use efficient loading strategies. By default, Next fonts use font-display: swap (or similar) to avoid blocking text from rendering. Text shows in a fallback font immediately and then swaps to the custom font when ready, which prevents long invisible-text delays (important for a good FCP).
Subset fonts (only include the characters you need) or use variable fonts to reduce the number of font files. All of this reduces the download size for typography.
According to Vercel's guidance, "next/font will automatically optimize your fonts (including custom fonts) and remove external network requests for improved privacy and performance." This improves performance by cutting out an entire round-trip to fetch fonts and by ensuring text is visible ASAP, avoiding a flash of invisible text (FOIT) or sudden layout shifts when fonts load (FOUT, the flash of unstyled text, can cause CLS when the font metrics differ).
Tips for fonts:
- Use next/font/google for Google Fonts: Instead of <link> tags in your <head>, use the Next font loader. For example:
import { Roboto } from 'next/font/google';
const roboto = Roboto({ subsets: ['latin'], weight: '400' });
// then in your layout/component:
<div className={roboto.className}>Hello</div>
This ensures the font CSS is included in the build and the font is preloaded.
Use variable fonts or limit weights: If possible, use a variable font that covers multiple weights/styles in one file. This reduces the number of font files to load. For example, the Inter font variable version can cover many styles in one file.
Include fallbacks: Choose a fallback system font that has similar metrics to your web font and include it in your CSS stack (e.g. font-family: Roboto, sans-serif;). This way, if the custom font is delayed, the fallback is used without much visual difference. Next/font helps here by simplifying the setup.
Beware of huge font files: If your font file is very large (many glyphs or languages), consider using subsets (only the characters needed) or separate font files for different locales. Unused font glyphs are just dead weight.
By optimizing fonts, you'll see improvements in CLS and FCP. No more pages where content jumps around once the font loads or, worse, text that isn't visible for a second. It's a small tweak that makes your site feel polished and fast. In our experience, moving from a standard Google Fonts embed to Next's automatic font optimization shaved about 200ms off the First Contentful Paint on a news site (since the text could render immediately with the fallback and then swap smoothly). Core Web Vitals improved: CLS went practically to zero once we eliminated the layout shift that occurred when the custom font kicked in. Next.js 15's font handling is robust; by using it, you basically check off another big item on the performance list with minimal effort.
Analyze and Trim Your Bundles (Bundle Analysis and Tree Shaking)
Sometimes the biggest performance gains come from simply shipping less code. Large JavaScript bundles delay both download and execution, hurting metrics like TTI (Time To Interactive) and FID. That's why a key step in optimization is to analyze your bundles and eliminate anything that isn't absolutely needed on the client. The goal: keep your client-side JavaScript bundles as lean as possible.
Start with bundle analysis: Next.js provides an official plugin for bundle analysis (@next/bundle-analyzer). Enable it to get a visual treemap of your JS bundles, showing the size of each npm package and module in your app. You will often discover surprising things: maybe a polyfill or a big library included by accident, or duplicate copies of a dependency. As a best practice: "Regularly use tools like Bundle Analyzer to visualize your bundle composition and identify opportunities for optimization".
What to look for:
Large libraries: Do you really need that heavy date library, or can you use a lighter alternative? For example, moment.js (huge) could be replaced with a smaller library or native Intl APIs. If a library is necessary, consider importing only the specific parts you use (many libraries support modular imports).
Duplication: Ensure you are not importing two versions of a library. Bundle Analyzer will highlight if, say, lodash is included twice. This can happen if you have multiple versions in sub-dependencies. Resolve them or pin a consistent version to avoid bloat.
Dead code: Tree shaking removes code you don't use, but only if that code is written in a tree-shakable way (as ES modules). Ensure libraries are up to date and use ESM, and avoid sneaky patterns that prevent tree shaking (like importing an entire library when you only need one function).
Modern build tools can eliminate a lot of dead code. In fact, projects leveraging aggressive tree shaking often see 30%-50% reductions in bundle size. Next.js (with Turbopack and Webpack 5) will tree-shake as much as your dependencies allow. You can help by not importing things you don't need. For example, if you only need one icon from an icon set, don't import the whole set.
Actionable steps to trim bundles:
Run next build with analysis (using the ANALYZE=true environment variable or the plugin) and open the bundle report. Identify the top offenders.
Remove or split out heavy modules: If you find a module that's huge and only used on one page, consider dynamically importing it (see the next tip) so it's not in the main bundle. Or find a lighter alternative.
Optimize dependencies: Import only what you use. For instance, import lodash brings in everything (if not tree-shaken); instead, do import debounce from 'lodash/debounce' to get that one function (or use lodash-es, which is tree-shakable). Similarly, for date-fns, import individual functions instead of the whole library.
Polyfills: Next.js by default polyfills only as needed. But double-check you're not unintentionally pulling in heavy polyfill packages. Use core-js with targets if needed to avoid full polyfills.
Strip dev code: Ensure process.env.NODE_ENV is set to production in the build (Next does this by default), so that any dev-only code or logging is dropped.
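The last point works because the bundler replaces process.env.NODE_ENV with a string literal at build time, so any code guarded by the check becomes provably dead and the minifier removes it. A small sketch of the pattern (the devLog helper is a hypothetical example, not a Next.js API):

```javascript
// Dev-only logging: in a production build the condition compiles to
// `if ('production' !== 'production')`, so minifiers drop the whole block.
function devLog(...args) {
  if (process.env.NODE_ENV !== 'production') {
    console.log('[dev]', ...args);
    return true; // logged in development
  }
  return false; // stripped / no-op in production
}
```

The same guard works for expensive dev-only checks (prop validation, timing instrumentation), keeping them out of the shipped bundle entirely.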
By iteratively trimming the fat, you can often get your main bundle well under 100 KB gzipped (depending on the app). For example, one of our projects had a ~300 KB bundle; after analysis, we removed an unused markdown parser and swapped a charts library for a lighter one, dropping to ~90 KB. The payoff was visible: faster load and interactivity. As one guide noted, even going from ~150 KB to under 100 KB can make a difference in load time. And of course, smaller bundles mean less JavaScript blocking the main thread, improving FID.
Remember, "reduce code on the client to a minimum" is the core principle for performance. Every byte and every millisecond counts. By shipping only what is necessary, you not only speed up your app but also reduce memory usage and battery drain on devices. Bundle analysis and tree shaking are your allies in the quest for a slimmer, faster Next.js app.
Code-Split By Dynamic Imports (Load Only What You Need)
Even after trimming your bundle, don't load all that code upfront if users don't immediately need it. Code splitting is a technique that splits your JS bundle into smaller chunks that can be loaded on demand. Next.js automatically code-splits by page: each page's JS is split out, and you can (and should) go further with dynamic imports for parts of your pages that can be deferred. The ideal is to ship the minimal code for the initial load and load the rest asynchronously when needed.
Using Next.js dynamic imports (or React.lazy), you can turn virtually any component into a separately loaded chunk. For example, suppose you have a very large chart component on a dashboard that isn't visible until the user scrolls down or interacts; you can import it dynamically so it's not in the initial JS. This improves initial load times dramatically.
Benefits: Studies indicate that a well-structured code-splitting strategy can improve initial loading performance by ~40%. We've seen cases where dynamic imports reduced the initial bundle by 50%, making the app load twice as fast on first visit. It also has a business impact: faster initial loads can lead to higher conversion; one report noted 20-25% increased conversions with faster pages (users stick around when the first page loads quickly!).
How to do it in Next.js:
- Dynamic Imports In Next.js
Dynamic imports are a cornerstone of lazy loading in Next.js, enabling code splitting at the component level. This powerful feature allows you to load specific components or entire libraries only when the user requires them, rather than including them in the initial JavaScript bundle. Next.js provides the next/dynamic utility, a combination of React's React.lazy() and Suspense, offering a seamless way to implement this optimization.
import dynamic from 'next/dynamic';

const Chart = dynamic(() => import('../components/Chart'), {
loading: () => <p>Loading chart...</p>, // Displays this while the Chart component loads
});
export default function Dashboard() {
return (
<div>
<h2>Sales Dashboard</h2>
<Chart />
</div>
);
}
Key Points:
The Chart component is excluded from the initial bundle: its JavaScript is not downloaded when the Dashboard page initially loads.
Loaded only when rendered: the component's code is fetched and executed only when React attempts to render Chart in the DOM.
Automatic Code Splitting: Next.js automatically handles the creation of separate JavaScript chunks for dynamically imported components. Server Components are code-split by default, and lazy loading specifically applies to Client Components.
- Conditional Rendering
This strategy involves dynamically importing and rendering components only when a specific user interaction or condition is met. This is particularly useful for features that are not immediately visible or essential on page load, such as modals, accordions, and video players that only activate on a click.
'use client'; // this component uses React state and event handlers

import dynamic from 'next/dynamic';
import { useState } from 'react';
const VideoPlayer = dynamic(() => import('../components/VideoPlayer'));
export default function VideoSection() {
const [showPlayer, setShowPlayer] = useState(false);
return (
<div>
<button onClick={() => setShowPlayer(true)}>Play Video</button>
{/* VideoPlayer component is only loaded and rendered when showPlayer is true */}
{showPlayer && <VideoPlayer />}
</div>
);
}
In this example, the VideoPlayer component's code is only downloaded and rendered when the user clicks the "Play Video" button. This prevents the browser from downloading a potentially large video player library until the user explicitly requests the functionality.
- Intersection Observer API
The Intersection Observer API provides a performant way to detect when an element enters or exits the viewport. This is ideal for lazy loading components that are "below the fold" (not immediately visible on screen) or for implementing infinite-scrolling patterns. By loading components when they are about to become visible, you can significantly reduce the initial load time and resource consumption.
'use client'; // This component uses browser APIs and React Hooks
import { useEffect, useRef, useState } from 'react';
import dynamic from 'next/dynamic';
const Reviews = dynamic(() => import('../components/Reviews'));
export default function ReviewsSection() {
const ref = useRef(null);
const [isVisible, setIsVisible] = useState(false);
useEffect(() => {
const observer = new IntersectionObserver(([entry]) => {
if (entry.isIntersecting) {
setIsVisible(true);
observer.disconnect(); // Stop observing once the component is visible and loaded
}
});
if (ref.current) observer.observe(ref.current);
return () => observer.disconnect(); // Clean up observer on unmount
}, []); // run once on mount
return (
<div ref={ref}>
{/* Reviews component is only loaded and rendered when it becomes visible in the viewport */}
{isVisible && <Reviews />}
</div>
);
}
Here, the Reviews component is dynamically loaded only when its containing div (referenced by ref) enters the user's viewport. This is a common pattern for sections like customer reviews, comments, or image galleries that appear further down a page.
- Route-Based Code Splitting
Next.js inherently optimizes performance through automatic route-based code splitting. This means that each page (or route) in your application is automatically split into its own independent JavaScript bundle. For example, the JavaScript for pages/about.tsx (or app/about/page.tsx in the App Router) is only loaded when the /about route is accessed.
This automatic behavior is a significant advantage, as it ensures that users only download the code necessary for the specific pages they are viewing, rather than a monolithic bundle containing the entire application's JavaScript. This leads to faster initial page loads and improved overall performance.
Tips: To maximize the benefits of route-based code splitting, avoid importing large libraries or components globally (e.g., in _app.tsx or a root layout in the App Router) unless they are truly essential for every single page of your application. Global imports can negate the advantages of code splitting by forcing unnecessary code into every page's initial bundle.
- Lazy Loading Third-Party Scripts
Third-party scripts, such as analytics trackers, advertising scripts, or social media embeds, can often be significant performance bottlenecks. They can block the main thread, delay rendering, and negatively impact Core Web Vitals. Next.js provides the next/script component to give you fine-grained control over when and how these external scripts are loaded, preventing them from hindering your application's performance.
import Script from 'next/script';
export default function Page() {
return (
<>
{/* This script will load during the browser's idle time, after the page is interactive */}
<Script
src="https://example.com/analytics.js"
strategy="lazyOnload"
/>
<p>Page content</p>
</>
);
}
Strategies available with next/script:
strategy="beforeInteractive": Loads the script before the page becomes interactive (before hydration). Use this for scripts that are critical to the page's initial functionality or appearance but still need to be loaded by Next.js.
strategy="afterInteractive": Loads the script after the page has become interactive (after React has hydrated the page). This is a good default for most non-critical scripts that don't need to block the initial render.
strategy="lazyOnload": Defers loading the script until the browser's idle time, after the page has fully loaded and become interactive. This is ideal for scripts that are not essential to the core user experience, such as analytics or chat widgets.
By choosing the appropriate loading strategy, you can prevent third-party scripts from negatively impacting your site's initial load and interactivity.
Real-world example: On an e-commerce site, we had a hefty product comparison feature (built on a big library) that initially loaded on every product page, though users rarely used it. We changed it to load dynamically only when the user clicks "Compare". This removed ~100 KB from the product page bundle. The immediate effect was that product pages loaded about 30% faster, and FID improved because less JS was executed upfront. Users who needed the compare feature experienced a slight delay only when they invoked it, which is a fair trade-off. Overall engagement went up as more users stayed (fast content drew them in, and by the time they clicked Compare, the chunk was loaded or in progress).
Another scenario: imagine a blog site with an interactive comments widget that loads below the article. By splitting out the comments component (and maybe even using an IntersectionObserver to preload it when the user scrolls near), the initial article content loads faster (good LCP), and the comments JS only loads if the user is going to see it. Users who just read and leave aren't penalized by that extra JS.
Bottom line: users should only download the modules necessary for what they are doing right now. This keeps the app fast and responsive. Next.js makes it easy with dynamic imports, so identify which parts of your app can be lazy-loaded and implement them. Compare your webpack bundle analysis before and after — you should see certain chunks carved out. Also, test the user experience to ensure the loading states are acceptable (provide a nice spinner or skeleton if needed). When done right, users won't even notice that parts of your app are loaded on demand; they will just feel that the app is fast.
Leverage Edge Functions and CDN For Low TTFB
If your Next.js app is global (as most are these days), one way to speed up responses is to run your code as close as possible to the user. Edge Functions allow you to deploy server-side logic to data centers around the world, reducing latency and improving Time To First Byte (TTFB). Next.js on Vercel supports edge runtimes (for middleware or API routes) that run on Vercel's Edge Network, and similarly, you can deploy on Cloudflare Workers. The idea is to avoid a slow transoceanic round trip for each request.
Think of Edge Functions as your "mini servers" distributed worldwide. Instead of a user in Asia having to hit a server in North America (adding hundreds of milliseconds of network distance), the request can be handled in Asia itself. As one developer described: "When logic executes close to the user, the TTFB drops significantly. Imagine a user in Singapore sending an API request to a server in Northern Virginia — that round-trip is brutal. With Edge Functions, the request is handled in Singapore itself: lower TTFB, better FCP, and happier users."
Ways to use edge in Next.js:
Edge Runtime API Routes: You can create a file under pages/api (or an App Router route handler) and export const config = { runtime: 'edge' } (in the App Router, export const runtime = 'edge'). This deploys that function to the edge. Use it for things like personalization, geolocation-based content, and authentication checks, where responding quickly is crucial.
Middleware: Next.js middleware (middleware.js) runs at the edge by default on Vercel. Use it for redirecting or rewriting on the fly with virtually no latency hit, since it runs close to the user.
Cache and static assets on CDN: Next.js automatically serves static files (including static pages) via CDN. Take advantage of this by using getStaticProps/getStaticPaths or the App Router's fetch caching and revalidation settings. Static assets should be CDN-served so that a user in London gets the file from a European server, one in New York gets it from US East, and so on. This happens by default on platforms like Vercel.
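As an illustrative sketch, an App Router route handler pinned to the Edge runtime might look like this (the geolocation header shown is the one Vercel sets; on other hosts the mechanism differs, and the response shape here is made up):

```js
// app/api/geo/route.js — a sketch, not a definitive implementation.
export const runtime = 'edge'; // run this handler on the Edge Network

export async function GET(request) {
  // On Vercel, the visitor's country is exposed via a request header.
  const country = request.headers.get('x-vercel-ip-country') ?? 'unknown';
  return Response.json({ country });
}
```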
By using edge functions and globally distributed caching, you'll improve not just TTFB but often other metrics like FCP (first paint), since the initial HTML arrives sooner. It also allows for dynamic content that's still fast. For example, you can personalize content at the edge (e.g., show region-specific promos, or the user's name if logged in) without a slow origin fetch. One can "inject personalization at the edge without slowing down client-side load", achieving fast dynamic pages with no hydration jank.
Real-world story: A startup had its Next.js app deployed in a single region. Users far from that region experienced TTFB of 500ms to 1s. After migrating key APIs to edge functions and enabling full page caching on the edge for public pages, global users saw TTFB drop to ~100–200ms. For instance, a user in India got a response from a Mumbai edge node in 100ms, whereas before it was ~700ms from a US server. This shaved significant time off the First Contentful Paint as well — the content started arriving faster. Core Web Vitals improved; for example, faster TTFB contributed to an FCP improvement of nearly 30% on slow 3G connections. Edge functions essentially acted as a frontend performance multiplier, as one article put it, by bringing server-side rendering closer to users.
Note: Edge functions do have some limitations (no full Node.js environment, and cold starts, though typically very small). Use them for the performance-critical path, and test under load. The payoff is worth it when your audience is globally distributed. If using Vercel, also consider their Edge Middleware and Edge Config for quick data lookups at the edge (like feature flags or A/B tests) without back-and-forth to the origin.
In summary, run your app at the edge and cache wisely. It lowers latency, yielding snappier interactions. "If you're serious about slashing TTFB and building experiences that feel instant, edge functions are a must in your stack."
Pre-Render and Cache as Much as Possible (SSR, ISR, and Partial Prerendering)
Fetching data and rendering on every request can be expensive and slow. Whenever feasible, let Next.js pre-render pages or parts of pages ahead of time and serve them from cache. Next.js 15 provides a spectrum of rendering strategies: Static Site Generation (SSG) for fully static pages, Incremental Static Regeneration (ISR) for updating static pages periodically, and now Partial Prerendering for a hybrid approach where some parts are static and others are dynamic. Using these wisely can give you the best of both worlds: fast, cached content plus freshness.
Static pre-rendering: If a page's content can be generated at build time (or even on a schedule), do it! A static page served from a CDN is about as fast as it gets (nearly zero TTFB). Next.js supports SSG via getStaticProps. In the Next 15 App Router, you can achieve similar behavior with the fetch API by using cache: 'force-cache' (for truly static data) or revalidate options for ISR. For example, an e-commerce product page could prerender product details statically and revalidate every hour, so most users get a cached page, and once an hour it updates with any changes. This drastically reduces the load on your servers and speeds up the response.
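A minimal sketch of that product-page pattern with ISR in the App Router; the API URL and response shape are hypothetical:

```jsx
// app/products/[id]/page.js — illustrative sketch of ISR.
export const revalidate = 3600; // regenerate this page at most once per hour

export default async function ProductPage({ params }) {
  // With `revalidate` set above, this fetch is cached and refreshed hourly.
  const res = await fetch(`https://api.example.com/products/${params.id}`);
  const product = await res.json();

  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}
```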
Partial Prerendering: A new concept in Next.js 15 is the ability to selectively prerender parts of a page. As the Next.js team describes it, "Partial Pre-rendering improves performance by selectively pre-rendering only essential parts of your page during build, while dynamic content loads progressively when needed." For instance, you might prerender a blog post's content (which changes rarely) but not the comments section (which is dynamic); that dynamic part can load client-side or via SSR. The initial HTML contains the important stuff, giving a fast LCP, and the rest comes in after. This approach was shown to improve Core Web Vitals like TTFB and LCP in Next.js Conf demos. Essentially, users see the main content quickly, and any live-updating portions stream in.
To use partial prerendering in the App Router, you can combine static and dynamic segments in your page. One method is the Suspense pattern shown in a Next 14 example: render <StaticDashboardMetrics /> directly (no Suspense boundary, so it is prerendered) and wrap <UserActivity /> in <Suspense> so it's fetched at request time. The static parts are built once, and the dynamic part can be SSR'd or even client-rendered. This yields an instant static shell with the dynamic data loading in afterwards, boosting perceived performance significantly.
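A sketch of that Suspense pattern; StaticDashboardMetrics and UserActivity are hypothetical components:

```jsx
// app/dashboard/page.js — illustrative sketch.
import { Suspense } from 'react';
import StaticDashboardMetrics from './StaticDashboardMetrics';
import UserActivity from './UserActivity';

export default function Dashboard() {
  return (
    <main>
      {/* No Suspense boundary: rendered at build time, part of the static shell */}
      <StaticDashboardMetrics />
      {/* Deferred: streamed in at request time; fallback shows meanwhile */}
      <Suspense fallback={<p>Loading activity…</p>}>
        <UserActivity />
      </Suspense>
    </main>
  );
}
```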
Caching on the server: Next.js 15 also gives fine-grained control over the fetch cache. You can designate certain fetches to use force-cache (always cache) or no-store (always live), or opt into timed revalidation. For example, fetch(url, { next: { revalidate: 60 } }) in a Server Component will cache that request for 60 seconds. Use this to cache API responses and avoid re-fetching on every request. Cached responses = faster responses.
Use a CDN for static assets and pages: Deployed on platforms like Vercel, your static pages (SSG) and public assets are automatically served via a global CDN. Ensure cache headers are set so that repeat visits are blazing fast (and consider service workers or next-pwa if you want aggressive client-side caching for repeat visits).
Example transformation: A content site with mostly article pages moved from SSR (rendering each request) to ISR with a 5-minute revalidate. Most users then got a static cached page from the nearest edge. The TTFB for those pages dropped from ~800ms (SSR from origin) to ~100ms (cached HTML), and LCP improved because the browser was getting HTML almost immediately. Even when data updates frequently, caching for even a short duration (like 60 seconds) can absorb a lot of traffic and speed up responses for the majority of users. Another example: a dashboard used partial prerendering — it prerendered the layout and static widgets, and SSR'd the user-specific data. The initial paint (with the static content) happened in ~1s, whereas before, the whole thing took ~2.5s to render fully. Users saw something useful very quickly and perceived it as faster, even though some data was still loading.
Bottom line: Don't generate content on the fly if you don't need to. Pre-generate it, cache it, and serve it as static whenever possible. Next.js gives you the tools to do this at a granular level (down to per-request or per-component caching). Use ISR for things that can be slightly stale. Use full SSG for truly static content. And with partial prerendering, carve out a static skeleton for dynamic pages. This reduces server load and massively improves scalability and performance. As one blog said about Next 15: partial prerendering delivers "instant static content while dynamically loading personalized elements, significantly improving TTFB and LCP." Exactly what we want!
Monitor Performance Continuously with the Right Tools
You can't improve what you don't measure. To ensure your Next.js app remains lightning fast, you should continuously monitor performance metrics and catch regressions early. Thankfully, there are excellent tools for both lab and real-user measurements that you can integrate into your workflow.
Here are some recommended tools and how to use them:
Lighthouse CI: Google's Lighthouse (as in Chrome DevTools Audits or PageSpeed Insights) provides lab performance tests. Using Lighthouse CI in your continuous integration pipeline can automatically run performance audits on every build or pull request. Set up a budget (e.g., FCP under 2s, LCP under 2.5s, bundle size under X KB), and Lighthouse CI can fail the build if a change introduces a significant slowdown. This ensures performance is monitored just like tests are. Start with basic Lighthouse checks in CI and gradually add more custom metrics as your team gets comfortable. Over time, you can enforce stricter budgets — e.g., if a developer adds a heavy library, you'll know before it hits production.
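A minimal lighthouserc.json sketch of such a budget might look like the following; the URL, server command, and thresholds are illustrative, and the Lighthouse CI docs cover the full assertion syntax:

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"],
      "startServerCommand": "npm run start"
    },
    "assert": {
      "assertions": {
        "first-contentful-paint": ["error", { "maxNumericValue": 2000 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "total-byte-weight": ["warn", { "maxNumericValue": 300000 }]
      }
    }
  }
}
```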
Vercel Analytics: If you're deploying on Vercel, their Analytics feature provides real user monitoring of Core Web Vitals. It inserts a tiny script to measure actual LCP, FID, etc. from your users and reports to a dashboard. The great thing is it gives you a Real Experience Score (RES) — an aggregate of your site's performance as experienced by real users. This lets you catch if users on certain devices or in certain regions are slow, or if a new release hurts performance in the field. Unlike lab tests, this is actual data from real sessions. You can use this in combination with Google's CrUX data or your own analytics.
Chrome DevTools & Performance Tab: For local profiling and deep dives, nothing beats running your app in Chrome (or Edge) and recording with the Performance tab. This shows you every task on the main thread, paint timings, script evaluations, etc. It's excellent for diagnosing why a certain interaction is slow or what's blocking the main thread. For example, the Performance tab can reveal a long task that causes a 300ms input delay. Our team often uses it to pinpoint exactly which function or component is a hotspot. As one CTO said, it went from intimidating to indispensable once you learn to read the flame charts. Use it during development to fine-tune.
WebPageTest: This is a powerful tool for synthetic testing, especially for simulating slow networks or devices. You can run a test from various locations and throttle the connection (e.g., 3G or Slow 4G) to see how your site performs under less-than-ideal conditions. WebPageTest gives incredibly detailed waterfall charts, filmstrip views of rendering, and core vital measurements. It's great to test a production deployment and see, for example, how the LCP behaves, which resources are loading late, etc. Many teams use WebPageTest for spot checks or integration with performance budgets (there's an API and even GitHub Actions).
Google Analytics (GA4) or custom telemetry: GA4 can track Web Vitals as well (with some custom code or plugins). Alternatively, there are specialized services (SpeedCurve, Calibre, DebugBear, etc.) that continuously monitor your site's performance from multiple regions. These can alert you if, say, LCP degrades beyond a threshold.
In the Next.js context, you can also use the built-in Web Vitals reporting. In the Pages Router, you can export a reportWebVitals(metric) function from pages/_app.js; in the App Router, the useReportWebVitals hook (from next/web-vitals) serves the same purpose. Either way, it is called with each web vital, and you can forward the data to an analytics service of your choice.
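For the App Router, a minimal sketch using the useReportWebVitals hook; the /api/vitals endpoint is hypothetical:

```jsx
'use client';
// app/_components/WebVitals.js — illustrative sketch; mount once in the root layout.
import { useReportWebVitals } from 'next/web-vitals';

export function WebVitals() {
  useReportWebVitals((metric) => {
    // Forward each metric (LCP, CLS, INP, TTFB, …) to your own endpoint.
    const body = JSON.stringify(metric);
    // sendBeacon survives page unloads; fall back to fetch if unavailable.
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/api/vitals', body);
    } else {
      fetch('/api/vitals', { body, method: 'POST', keepalive: true });
    }
  });
  return null; // renders nothing
}
```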
Make performance monitoring a habit: Set up dashboards visible to the team, so everyone can see current perf scores. Perhaps have a weekly check-in on performance budgets. A culture of performance means issues get caught and fixed early. For example, if a code change accidentally adds 100KB to your bundle and slows down FID, a Lighthouse CI budget failure or a jump in Vercel Analytics' RES will alert you, and you can address it before it impacts all users.
Case study: Our team introduced performance budgets in CI for a Next.js project. One day, a dependency update caused the bundle to grow unexpectedly, and Lighthouse CI flagged that the performance score dropped from 95 to 88. Investigating, we found the culprit (a misconfigured polyfill). We fixed it before merging to main. Without these tools, we might not have noticed until users complained or analytics showed a slowdown. That safety net is invaluable.
In summary, use lab tools (Lighthouse, WebPageTest, DevTools) to optimize in controlled environments, and field tools (Vercel Analytics, real-user metrics) to ensure real users are getting the experience you expect. Combine that with automation (CI checks, alerts) so you maintain your hard-earned optimizations over time. As the saying goes, "performance is a journey, not a destination." Continuous monitoring will keep you on the right track.
Build a Performance-First Mindset (Apply to all types of apps)
Our final tip is a bit more holistic: cultivate a performance-first mindset throughout your development process. Next.js gives you many tools, but it's up to the team to use them effectively and prioritize performance from the start. This tip ties everything together and ensures that, whether you're building an e-commerce site, a SaaS application, or a content hub, you consistently apply these optimizations as second nature.
What does a performance-first mindset look like?
Plan for performance from day one: When designing features, consider the performance implications. For example, if you're adding a new image-heavy section, plan to use Next/Image and perhaps lazy load it. If you're integrating a third-party script (analytics, ads, etc.), consider its cost and how to mitigate it (maybe load it after user interaction or on idle).
Establish performance budgets & goals: Set concrete goals (e.g., LCP < 2s on median mobile, FID < 100ms, etc.). Having targets makes it easier to make decisions (you might decide against a fancy but heavy library if it would break the budget). Many teams include these in requirements, just like functionality.
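To make such budgets concrete, here is a minimal sketch, not tied to any particular tool, of how a build script might compare measured metrics against a budget; the metric names and thresholds are illustrative:

```javascript
// Returns the list of budget violations for a set of measured metrics.
// Metric names and thresholds here are illustrative, not a standard.
function checkBudgets(measured, budgets) {
  const violations = [];
  for (const [metric, limit] of Object.entries(budgets)) {
    const value = measured[metric];
    if (value !== undefined && value > limit) {
      violations.push(`${metric}: ${value} exceeds budget of ${limit}`);
    }
  }
  return violations;
}

// Example: LCP and FID in milliseconds, bundle size in KB.
const budgets = { lcpMs: 2000, fidMs: 100, bundleKb: 300 };
const measured = { lcpMs: 2400, fidMs: 80, bundleKb: 310 };

const violations = checkBudgets(measured, budgets);
// A CI script could fail the build when violations.length > 0.
console.log(violations);
```

A script like this could run after a Lighthouse pass and exit non-zero on any violation, turning the budget into a hard gate.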
Everyone on the team owns performance: It's not just for one "performance engineer" — developers, designers, and product managers should all value it. For instance, designers should know that huge background videos might hurt performance; developers should review each other's code for potential bloat or inefficiencies. As one company put it, they created a performance-first culture where performance is a shared responsibility and part of the definition of done.
Regularly audit and learn: The web evolves, and so do best practices. Make time for periodic performance audits of your Next.js app. This could be as simple as scheduling a monthly deep dive where you profile the app and see if any regressions or new opportunities have arisen. Also, encourage team knowledge sharing — if someone learned a new trick (like a new Next.js feature or React optimization), share it with everyone.
Use the latest Next.js features: Next.js 15+ is introducing things like stable Turbopack and enhanced React 19 features, which often come with perf benefits. Keep your Next.js version up to date (within reason) and read release notes for anything that could help performance. For example, if a release announces an improved <Image> component or a better hydration technique, consider adopting it.
A quick example of applying this mindset: Let's say you're tasked with building a new pricing page for a SaaS app. With a performance-first approach, you would: optimize all images (maybe use SVG for logos, Next/Image for others), ensure the page is static (SSG) since pricing doesn't change often, maybe use partial hydration if there's a dynamic calculator widget (so the static content loads immediately), test it on a slow network to ensure it's under budget, and perhaps set up a Lighthouse CI threshold for it from the start. The result is a page that not only looks good but is technically optimized from day one, requiring no retroactive fixes.
Teams that do this find that performance isn't an afterthought or a one-off project — it's just part of building the app. When new team members join, they see that pull requests include discussions about bundle size or using the correct Next.js features, and they adopt the same approach.
Finally, remember that performance benefits everyone: it improves accessibility (fast sites work better on low-end devices), it pleases users (nobody ever said "I love how slow this site is!"), and it drives business metrics (better SEO, more engagement). So it's absolutely worth the investment.
Conclusion
In this article, we have covered several advanced Next.js concepts. By mastering these concepts, you will be able to build powerful, performant web applications with Next.js. Whether you are building a small blog or a large-scale e-commerce platform, Next.js has the tools and features you need to deliver a seamless user experience.
References
https://www.freecodecamp.org/news/nextjs-vs-react-differences/
https://javascript.plainenglish.io/next-js-client-side-rendering-56a3cae65148
https://blog.devgenius.io/advanced-next-js-concepts-8439a8752597
https://blog.stackademic.com/you-must-use-middleware-like-this-in-next-js-64d59bb4cd59
https://yohanlb.hashnode.dev/when-should-i-use-server-action-nextjs-14?ref=dailydev
https://blog.devgenius.io/10-powerful-next-js-optimization-tips-f78288d284e1
https://javascript.plainenglish.io/next-js-hates-me-until-i-made-it-10x-faster-cae6d1b65876
https://medium.com/yopeso/a-year-with-next-js-server-actions-lessons-learned-93ef7b518c73
https://medium.com/gitconnected/when-to-use-react-query-with-next-js-server-components-f5d10193cd0a
https://levelup.gitconnected.com/nextjs-image-performance-issues-and-fixes-40db2061ffe1
https://blog.stackademic.com/what-use-client-really-does-in-react-and-next-js-1c0f9651c4e1
https://medium.com/@sureshdotariya/next-js-15-app-router-architecture-and-sequence-flow-3a6ffdd2445c
Written by Tuan Tran Van
I am a developer creating open-source projects and writing about web development, side projects, and productivity.