How React Native Really Works Behind the Scenes

Aman Gupta

When you write a React Native app, you’re writing JavaScript that looks like React for the web — with useState, useEffect, and components like <View> or <Text>. But under the hood, it’s a completely different world.

Unlike ReactJS, which runs in the browser and outputs to the DOM, React Native runs in a mobile runtime and outputs to native UI elements — like UIView on iOS or android.view.View on Android. These are actual buttons and layouts the OS understands, not web equivalents.

So how does your JavaScript become a native app on your phone?


The Three-Layer System: JS, Bridge, Native

Let’s go layer-by-layer and understand what’s happening.


1. JavaScript Layer (Where Your Code Lives)

Imagine this like your office desk. You sit at your desk writing instructions on a notepad (your React Native components and logic). For example, you say: “Hey, I want a button that says 'Press me' and does something when tapped.”

This layer:

  • Runs in a JavaScript engine — like Hermes, JavaScriptCore, or V8.

  • Has your React code, the React reconciler, and state logic (useState, useEffect, etc.).

  • Builds a virtual component tree, which is an abstract representation of what your UI should look like.
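To make that concrete, here is a toy sketch of that tree as plain data. The `createElement` helper mirrors what JSX compiles to; the names are illustrative, not the reconciler's real internals:

```javascript
// Simplified sketch of the element tree React builds from JSX.
// <View style={{ padding: 10 }}><Text>Press me</Text></View>
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const tree = createElement(
  'View',
  { style: { padding: 10 } },
  createElement('Text', null, 'Press me')
);

// The tree is just nested objects: an abstract description of the UI,
// with no native views created yet.
console.log(tree.type); // "View"
```

Nothing on screen exists at this point: the tree is a description, and something else has to turn it into real views.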

But you can’t talk directly to your phone’s native UI from this layer. That would be like trying to control your TV by shouting at it. You need a remote control.

That remote control is the bridge.


2. Bridge Layer (The Translator and Messenger: Old Architecture)

Think of the bridge as a translator sitting between you and your TV. You (JS) speak English. The TV (native code) speaks Chinese. You write your instructions in English, hand it to the translator, and the translator speaks to the TV in Chinese.

Technically, here’s what’s happening:

  • Your JS code runs and says: “I want a View with a Text inside it.”

  • React's reconciler generates a JSON message: { type: 'createView', viewType: 'Text', props: { text: 'Hello' } }.

  • This message is sent through the bridge.

Now, the bridge is asynchronous and batched. That means it collects several such messages, serializes them into strings (like a list of instructions), and sends them all at once to the native world.
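A minimal sketch of that batching idea, with hypothetical names (`BridgeQueue`, `enqueue`, and `flush` are not React Native's real API):

```javascript
// Toy model of the old bridge: messages accumulate in a queue,
// then cross the JS/native boundary as one serialized batch.
class BridgeQueue {
  constructor(onFlush) {
    this.queue = [];
    this.onFlush = onFlush; // stand-in for the native-side receiver
  }
  enqueue(message) {
    this.queue.push(message); // messages pile up...
  }
  flush() {
    // ...then everything is serialized and sent in one go
    const batch = JSON.stringify(this.queue);
    this.queue = [];
    this.onFlush(batch);
  }
}

const received = [];
const bridge = new BridgeQueue(batch => received.push(...JSON.parse(batch)));

bridge.enqueue({ type: 'createView', viewType: 'Text', props: { text: 'Hello' } });
bridge.enqueue({ type: 'setBackgroundColor', color: 'red' });
bridge.flush(); // one string crosses the boundary, not two separate calls
```

The serialization step (`JSON.stringify` / `JSON.parse`) is exactly the "letter writing" cost described above.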

This is efficient, but there’s a catch: it’s slow for high-frequency operations like animations, because it can’t happen in real-time. The communication between JS and native has a cost — it’s like sending a letter instead of having a live call.


3. Native Layer (Where Real UI is Born)

This is your TV’s internal system. It receives the translated instructions and acts on them.

When the native layer receives:

{ type: 'createView', viewType: 'Text', props: { text: 'Hello' } }

It says, “Okay, I’ll create a UILabel (iOS) or a TextView (Android) with the text 'Hello'.” This is not a fake HTML element. This is a real, OS-native element — as real as any button you see in a stock iOS or Android app.

These views are laid out using a native layout engine (Yoga, React Native's C++ Flexbox implementation), and they appear just like any other native app's views.

When a user taps a button, the event goes back through the bridge into JS: “Hey, someone clicked the button!” Then your handler function in JS is executed.
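That round trip can be sketched like this: JS registers a handler under an ID, and the "native" side reports the tap as data rather than calling the function directly (all names here are illustrative):

```javascript
// Toy model of events crossing back over the bridge: handlers live
// in JS under numeric IDs, and native reports events by ID.
const handlers = new Map();
let nextId = 0;
let pressed = false;

function registerHandler(fn) {
  const id = nextId++;
  handlers.set(id, fn);
  return id; // this ID is what the native side knows about
}

// "Native" can only send data, not invoke JS functions directly
function nativeEvent(message) {
  const { handlerId, payload } = JSON.parse(message);
  handlers.get(handlerId)(payload);
}

const onPressId = registerHandler(() => { pressed = true; });
nativeEvent(JSON.stringify({ handlerId: onPressId, payload: {} }));
// pressed is now true: the tap made it back into JS
```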


The Whole Flow in Simple Analogy

Imagine this flow like:

  • You = JavaScript code.

  • You write instructions = React components.

  • You hand instructions to your assistant (Bridge).

  • Assistant goes to your factory (Native layer).

  • Factory builds the actual machines and buttons (Native views).

  • Customer (user) pushes a button.

  • Factory tells the assistant.

  • Assistant runs back to you: “He pressed the button!”

But There’s a Problem (Old Architecture)

That assistant — the bridge — is not fast enough.

  • He can only deliver messages in batches.

  • He needs to translate (serialize to JSON).

  • He can’t work in real-time.

  • If the assistant gets too many messages, your factory (app UI) lags.


Enter the New World: Fabric + JSI

To solve the limitations of the old assistant (bridge), React Native introduced a new system. It’s like upgrading from a human assistant to a direct intercom line with your factory.

Fabric Renderer:

Now, instead of your assistant building a separate representation of your UI and sending it over, the native layer itself understands React’s tree. The Shadow Tree is built and owned in C++, reachable from both the JavaScript and native sides, so layout is computed more efficiently and can even be read synchronously.

JSI (JavaScript Interface):

No more JSON letters. Now, your JS code can directly call native functions using C++ bindings. This is like removing the assistant and installing a phone line — you talk directly to the factory manager.
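A rough simulation of the idea in plain JavaScript: the native function appears as an ordinary, synchronously callable global. In reality the binding is installed from C++ via JSI; `__nativeCreateView` here is purely hypothetical:

```javascript
// Toy model of a JSI host function: JS calls it like a normal function,
// no JSON serialization, no batching, and the result comes back synchronously.
const createdViews = [];

// Pretend this function is backed by C++ and installed into the JS runtime
globalThis.__nativeCreateView = function (viewType, props) {
  const view = { viewType, props };
  createdViews.push(view);
  return view; // a usable handle, returned immediately
};

const label = globalThis.__nativeCreateView('Text', { text: 'Hello' });
// `label` is available right away, on the same call stack
```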

TurboModules:

These are native modules (like Camera, Geolocation, etc.) that are now initialized lazily and called directly from JS via JSI. This improves memory usage and startup time.
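The lazy-loading idea can be sketched with a `Proxy` that defers initialization until first access (the `loadModule` helper and module names are hypothetical):

```javascript
// Toy model of TurboModule lazy loading: nothing is initialized at
// startup; a module is created the first time JS touches it.
const loaded = [];

function loadModule(name) {
  loaded.push(name); // stand-in for expensive native initialization
  return { name, takePhoto: () => 'photo' };
}

const cache = {};
const NativeModules = new Proxy({}, {
  get(_, name) {
    if (!(name in cache)) cache[name] = loadModule(name);
    return cache[name];
  },
});

// At "startup", `loaded` is empty. The Camera module only comes to
// life on first use, and Geolocation is never initialized at all.
const photo = NativeModules.Camera.takePhoto();
```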


How It’s Different from ReactJS

ReactJS lives entirely inside the browser.

  • It renders to the browser's DOM.

  • All UI updates are in JavaScript.

  • Uses HTML, CSS, and browser APIs.

  • ReactDOM tells the browser: “Create a <div>”, “Update a <button>.”

The browser already knows how to render these things.

React Native doesn't use a browser. There’s no HTML or DOM. Your instructions are interpreted by native code, which renders actual Android/iOS views — so the experience is much closer to native apps, especially with animations and gestures.


Final Takeaway

React Native’s magic lies in separating the declaration of UI (JSX + React) from the execution of UI (native views) — and it connects the two with a bridge (or with JSI in modern versions).

So when you write:

<View style={{ padding: 10 }}>
  <Text>Hello World</Text>
</View>

You’re not writing HTML. You’re writing an instruction set for native code to build real views, like:

  • UIView on iOS

  • LinearLayout or TextView on Android

And now, with JSI, you’re doing it much more directly and efficiently — just like native developers, but with JavaScript.


Now let’s go fully detailed, in a continuous deep explanation, like a technical walkthrough. We’ll explore how the Bridge Layer works internally, what languages are used, and how the new architecture (Fabric + JSI) improves on it.

The Bridge Layer — the Original Heartbeat of React Native

In the original version of React Native, the app consists of two separate worlds: one is the JavaScript world where your code lives, and the other is the native world — made of platform-specific UI and logic (Android’s Java/Kotlin or iOS’s Objective-C/Swift). These two worlds can't talk directly. They're separated, like two departments in a company that speak different languages.

To make them communicate, React Native introduced a “Bridge.” Think of this as a translator who takes messages written in one language, translates them into another, and then carries them across a hallway to the other department. This bridge runs in C++, with hooks into Java (for Android) and Objective-C (for iOS). On the JavaScript side, it uses a scheduler and JSON-based messaging to communicate.

When you write something like a <View> or <Button> in JSX, React’s reconciler processes this into a data structure — a sort of internal blueprint of the UI. This structure isn't visual yet. Instead, it’s like a list of instructions. These instructions are batched up — like “create a view,” “set background color,” “position at x and y” — and encoded as JSON messages. These messages are then sent across the bridge.

The bridge, being asynchronous, doesn’t deliver messages immediately. It queues them, batches them together, serializes them into a string (like writing a long message), and sends it across to the native side in one go. On the other side, the native modules receive this JSON, parse it, and then call the appropriate native UI functions — for example, to create a UIView on iOS or a LinearLayout on Android.
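The native side of that exchange can be sketched as a small dispatch table: parse the batch, look up each instruction's handler, and run it. Real handlers would be Java or Objective-C UI calls; plain JavaScript stands in here, and all names are illustrative:

```javascript
// Toy model of the native side of the old bridge: deserialize the
// batch, then dispatch each instruction to the matching UI function.
const nativeUI = {
  createView: msg => `created ${msg.viewType}`,
  setBackgroundColor: msg => `painted ${msg.color}`,
};

function handleBatch(batch) {
  return JSON.parse(batch).map(msg => nativeUI[msg.type](msg));
}

const results = handleBatch(JSON.stringify([
  { type: 'createView', viewType: 'Text', props: { text: 'Hello' } },
  { type: 'setBackgroundColor', color: 'red' },
]));
// results: ['created Text', 'painted red']
```

Every instruction pays the parse-and-dispatch cost on this side, on top of the serialization cost on the JS side.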

The process works, but it's not fast. It introduces latency because every message must be stringified, sent over a thread boundary, deserialized, and then executed. It also limits interactivity, like complex animations or gestures, because everything goes through this bottlenecked pipe.


Why the Bridge Became a Bottleneck

This original design has a key flaw: it’s asynchronous and serialized. That means when your JS code wants to create or update views, it can’t directly access native APIs — it has to “send a letter” through the bridge. This takes time, especially for frequent operations like animations or rapid UI updates.

It’s also not memory efficient. Since each message has to be converted into a JSON object, sent as a string, and then parsed on the other side, it creates overhead in both performance and memory usage.

This is why React Native apps often felt "janky" during fast UI updates or animations — even though they were rendering native views, the control path was slow and indirect.


Fabric + JSI — A New Brain for React Native

To overcome these limitations, Meta (Facebook) rearchitected React Native with a new system: Fabric and JSI.

Let’s first talk about JSI — JavaScript Interface.

JSI is a C++ layer that allows JavaScript code to talk directly to C++ native modules, without going through the old serialized JSON bridge. It’s like removing the translator and giving you a direct phone line to the other department — no more passing letters, no more delayed responses.

JSI is embedded into the JavaScript engine — such as Hermes (also built in C++). When your JS code runs, it doesn’t have to serialize commands into JSON. Instead, it can call a C++ function directly. That C++ function may then call into Objective-C or Java, depending on the platform. But the biggest gain is: it's synchronous, faster, and more memory-efficient.

This also changes how native modules are loaded. In the old bridge system, all native modules were loaded eagerly at startup — even if you never used the camera, its module was loaded. With JSI, React Native introduces TurboModules, which are loaded lazily and invoked on-demand, again via C++ interfaces.

Now let’s explore Fabric.

Fabric replaces the old UI rendering engine. Previously, the shadow tree sat behind the asynchronous bridge, so JavaScript could never inspect or update layout synchronously. Now, Fabric builds and owns the shadow tree in C++. It does layout calculations in C++, and React's renderer writes directly into this structure.
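As a toy illustration of a shadow tree with layout computed over it: the real work happens in C++ via Yoga's full Flexbox engine, while this sketch only handles padding and vertically stacked children:

```javascript
// Toy shadow-tree layout pass: walk the tree and assign each node a
// frame (position), honoring only `padding` and fixed child heights.
function layout(node, x = 0, y = 0) {
  const pad = (node.style && node.style.padding) || 0;
  node.frame = { x, y };
  let childY = y + pad;
  for (const child of node.children || []) {
    layout(child, x + pad, childY);
    childY += (child.style && child.style.height) || 20;
  }
  return node;
}

const shadowTree = layout({
  type: 'View',
  style: { padding: 10 },
  children: [{ type: 'Text', style: { height: 20 } }],
});
// shadowTree.children[0].frame → { x: 10, y: 10 }
```

Because this tree lives in C++ in the new architecture, both the renderer and the platform can read those computed frames without a trip over a serialized bridge.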

The result is faster layout, more predictable updates, and the ability to coordinate complex gestures, animations, and UI updates — all without the performance hit of crossing the JS-native boundary multiple times.

In the new model, React's reconciler creates a tree. That tree is rendered not through a JSON message to the bridge, but via direct calls into C++ objects via JSI. These objects represent nodes in the native UI — managed by Fabric. Layout is computed natively. When the user interacts with the UI, events are delivered to JS using the same direct interface.


What Languages Power These Layers?

Let’s break down the language internals of each component in this new architecture:

  • JSI (JavaScript Interface): Written in C++. This is the core layer allowing JavaScript to call native code and vice versa. It’s embedded into the JS engine (Hermes or JSC).

  • Hermes Engine: JavaScript engine optimized for mobile, built in C++.

  • Fabric Renderer: Also C++, with bindings into Java and Objective-C for accessing platform-specific UI APIs.

  • TurboModules: Modules are defined in JavaScript (with TypeScript type definitions) but their native implementations are in Java (Android) or Objective-C/Swift (iOS). The codegen system automatically creates C++ bindings.

  • Native Platform APIs: Java/Kotlin (Android) and Objective-C/Swift (iOS).


Summary in One Story

Imagine the old architecture as a person writing down orders on a notepad, putting them in an envelope, and sending them to a warehouse across town. The warehouse receives the orders, opens the envelope, and does the work — like building shelves or making buttons. This system works, but it’s slow, error-prone, and hard to scale.

Now imagine the new architecture. You have a walkie-talkie (JSI) connected directly to the warehouse. You don’t send letters anymore — you just press the button and speak: “Build a red button, now!” and they do it instantly. Fabric makes the warehouse smarter — it remembers what you already asked for, lays things out in real time, and responds instantly to touch or animation.

It’s the same React code on the outside — but under the hood, it’s gone from snail mail to real-time streaming.


That’s not all. In the next post we’ll cover JSI, Codegen, React reconciliation, Fabric, the shadow thread + Yoga (layout phase), how Fabric mounts views with granular rendering, and TurboModules accessed via JSI, and how all these pieces connect in depth. So stay tuned and follow.
