The Six-Step Path from .js File to Machine Code

Writing JavaScript is like handing the browser a novel written in characters the CPU can’t read. The engine therefore marches your file through a series of stages, each doing one small, quick job, so the page shows up fast and keeps getting faster the longer it stays open. The mechanism behind this is called JIT (just-in-time) compilation.
🧠 First, what’s “interpreted” vs “compiled”?
When people say a language is interpreted, they mean the code is executed statement by statement by a program called an interpreter, which translates and runs it on the fly.
In contrast, a compiled language like C++ or Rust (AOT, ahead-of-time compilation) is translated into machine code before it runs—using a compiler—which makes execution super fast, at the cost of a separate compile step before anything runs.
So which one is JavaScript?
✅ Both! Modern JavaScript engines (like Chrome’s V8) start by interpreting your code and then just-in-time (JIT) compile the hot parts into fast machine code behind the scenes.
That’s why we need a pipeline—a multi-step system that helps JS load quickly and run fast.
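As a rough illustration, here is the kind of code the JIT loves: a small function called many times with stable argument types. (The thresholds and tracing flags are V8 internals and may change between versions; `--trace-opt` is a real V8/Node.js flag, but its output format is not stable.)

```javascript
// A small numeric function we call many times.
// With consistently numeric arguments, the engine's type feedback
// stays simple, making `add` a good candidate for JIT optimisation.
function add(a, b) {
  return a + b;
}

let total = 0;
for (let i = 0; i < 100_000; i++) {
  total += add(i, 1); // repeated calls make `add` "hot"
}
console.log(total); // → 5000050000
```

If you save this as `hot.js` and run `node --trace-opt hot.js`, you can watch V8 report when it marks `add` for optimized recompilation.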
The six steps
| # | Stage | What really happens |
|---|-------|---------------------|
| 1 | Lexing | The scanner walks the source code character by character and slices it into tokens—little labels like “identifier”, “number”, or “{”. Think of it as adding invisible commas so the next stage can see the words. Efficient lexers are tiny state machines hand-tuned in C++ so they can tear through megabytes in milliseconds. |
| 2 | Parsing → AST | A parser checks that the token stream follows the JavaScript grammar and organizes it into an Abstract Syntax Tree (AST). Each node (“FunctionDeclaration”, “IfStatement”) is an object in memory, making later passes tree-walking algorithms rather than string hacks. Modern parsers are incremental—as soon as one function body is valid, it can move on to the next stage instead of waiting for the whole file. |
| 3 | Bytecode | The AST is converted into a compact, engine-specific instruction set (e.g. Ignition bytecode in V8). Bytecode is far more compact than the equivalent machine code and completely platform-independent, which means a smaller memory footprint and identical behaviour across ARM and x86. |
| 4 | Interpreter | A tight C++ loop (“fetch → decode → run”) executes the bytecode immediately. This gets pixels on screen while the browser is still parsing the rest of the script. While it runs, it quietly stores feedback about types and call counts for each function. |
| 5 | Baseline JIT | When a function crosses a simple “hot-enough” threshold (say, 1,000 calls), the engine turns its bytecode into rough-and-ready machine code. |
| 6 | Optimising JIT | If the same function keeps running, the optimiser (TurboFan, Warp, DFG—the name depends on the engine) builds a richer internal representation, applies techniques like inlining, constant folding, and escape analysis, and spits out highly tuned machine code. |
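To make stage 1 concrete, here is a deliberately tiny, hypothetical lexer sketch. The token names (`NUMBER`, `IDENT`, `PUNCT`) are simplified stand-ins for a real engine’s richer token set, and real scanners are hand-tuned C++ state machines, not regex loops like this:

```javascript
// Toy lexer: slices a source string into { type, value } tokens.
function lex(source) {
  const tokens = [];
  let i = 0;
  while (i < source.length) {
    const ch = source[i];
    if (/\s/.test(ch)) { i++; continue; }           // skip whitespace
    if (/[0-9]/.test(ch)) {                         // NUMBER literal
      let j = i;
      while (j < source.length && /[0-9]/.test(source[j])) j++;
      tokens.push({ type: "NUMBER", value: source.slice(i, j) });
      i = j;
    } else if (/[A-Za-z_$]/.test(ch)) {             // IDENT (or keyword)
      let j = i;
      while (j < source.length && /[A-Za-z0-9_$]/.test(source[j])) j++;
      tokens.push({ type: "IDENT", value: source.slice(i, j) });
      i = j;
    } else {                                        // single-char punctuation
      tokens.push({ type: "PUNCT", value: ch });
      i++;
    }
  }
  return tokens;
}

console.log(lex("let x = 42;"));
// → IDENT "let", IDENT "x", PUNCT "=", NUMBER "42", PUNCT ";"
```

The parser (stage 2) would then consume this token stream and build AST nodes like `VariableDeclaration` from it.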
That’s it!
All modern engines follow this multi-tier recipe (names differ, ideas stay the same).
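One practical consequence of the last two tiers: the optimising JIT specialises machine code to the object “shapes” (hidden classes) it has observed, so keeping your objects shape-consistent helps the engine keep that code fast. A small illustrative sketch (the performance effect is real in V8-style engines, though you can’t observe it directly from JS):

```javascript
// The optimising JIT specialises code to the property layouts it sees.
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Consistent shape: every object has x then y, so call sites
// in `magnitude` stay monomorphic and can be fully optimised.
const points = [];
for (let i = 0; i < 1000; i++) points.push({ x: i, y: i });

// Mixed shapes: differing property order or extra fields can force
// slower, generic property lookups (or a deoptimisation).
const mixed = [{ x: 1, y: 2 }, { y: 2, x: 1 }, { x: 1, y: 2, z: 3 }];

let sum = 0;
for (const p of points) sum += magnitude(p);
```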