Inside the Machine: Deconstructing Addition at the Binary Level

Nitin Saini
7 min read

We think we are creating the system for our own purposes. We believe we are making it in our own image... But the computer is not really like us. It is a projection of a very slim part of ourselves: that portion devoted to logic, order, rule, and clarity.

Ellen Ullman, Close to the Machine: Technophilia and Its Discontents

A programming language is an artificially constructed language used to instruct computers. At one point, language-based interfaces, such as the BASIC and DOS prompts of the 1980s and 1990s, were the main method of interacting with computers; they were later largely replaced by visual interfaces.

But how does that human-readable code become something a machine truly understands? The journey from the clear, logical instructions we write to the raw 0s and 1s the computer processes is a fundamental concept in computing.

The Code's Transformation: From Your Keyboard to the CPU's Language

When you write a program, whether in a high-level language like C++, Python, or Java, you're creating source code: a set of instructions designed for human comprehension. However, your computer's Central Processing Unit (CPU) doesn't speak C++ directly. It operates on machine code, which is precisely that sequence of binary 0s and 1s.

This is where specialized software tools come into play:

  1. The Compiler (or Interpreter/Assembler): Think of this as the master translator. If you write in C++ (a compiled language), a C++ compiler takes your entire source code and converts it into machine code. For languages like Python (interpreted), an interpreter translates and executes line by line. For lower-level assembly language, an assembler performs this conversion. The output of this translation is typically an executable file, which is packed with binary instructions.

  2. Loading into Memory (RAM): When you launch a program, your computer's Operating System (OS)'s loader reads these binary instructions from the executable file stored on your hard drive. It then copies them into the computer's Random Access Memory (RAM). RAM acts as the CPU's quick-access workspace, holding the program's instructions and data while it's running.

  3. Execution by the CPU: With the binary instructions loaded into RAM, the CPU takes over. It constantly performs a Fetch-Decode-Execute cycle:

    • Fetch: The CPU retrieves the next binary instruction from RAM.

    • Decode: It interprets the instruction, figuring out what operation needs to be performed (e.g., add, load, jump) and what data or registers are involved.

    • Execute: The CPU performs the actual operation, manipulating data within its internal registers or interacting with memory. This cycle repeats billions of times per second, bringing your program to life.

This entire process ensures that even the most complex software eventually breaks down into these fundamental binary commands, which the CPU can execute directly.

Ever wondered what a computer program looks like at its most fundamental level? Imagine stripping away all the fancy graphics, user interfaces, and human-readable text. What's left is a raw, binary language, a series of 0s and 1s that directly instructs the computer's brain. That's exactly what we stumbled upon: a fascinating snippet of machine code, whispering the secret of how a computer calculates the sum of numbers from 1 to 10.

At first glance, it might look like a jumble of seemingly random binary sequences:

00110001 00000000 00000000
00110001 00000001 00000001
00110011 00000001 00000010
01010001 00001011 00000010
00100010 00000010 00001000
01000011 00000001 00000000
01000001 00000001 00000001
00010000 00000010 00000000
01100010 00000000 00000000

Though it may look like some mysterious ancient script, it's not nonsense; it has meaning if you know how to read it. Each line is a 24-bit instruction, a direct command to a hypothetical, very simple computer processor.

Converting each 8-bit field to decimal, we come across something like this:

49  0  0
49  1  1
51  1  2
81 11  2
34  2  8
67  1  0
65  1  1
16  2  0
98  0  0

You might be wondering, what’s the point of converting binary to decimal when we already know the result? That’s exactly where things get interesting. The real challenge lies in reverse-engineering the pattern by closely examining the bytecode’s behavior and structure, much like analyzing assembly or machine code.

Given the information that this code is meant to sum the numbers from 1 to 10, we can begin unraveling its elegant logic. So, what do these decimal triplets mean?

Instructions and Registers

Think of each instruction as three distinct parts:

[OPCODE] [OPERAND1] [OPERAND2]

Each line is 3 bytes (24 bits), with 8 bits per part:
  1. The Opcode (Operation Code): The first 8 bits tell the computer what to do such as "add," "load," "jump," or "stop."

  2. Operand 1: The next 8 bits usually point to a register (a tiny, super-fast storage spot inside the CPU) or a direct numerical value.

  3. Operand 2: The final 8 bits might specify another register, a memory address, or another numerical value.

Matching each opcode against the dry run below, the symbolic instructions work out to the following (the JZ reading of the conditional jump is inferred from step 5 of the English listing further down):

49 → MOV (load an immediate value into a register)
51 → MOV (copy one register into another)
81 → SUB (subtract an immediate value from a register)
34 → JZ  (jump to the given instruction if the register holds 0)
67 → ADD (add one register to another)
65 → ADD (add an immediate value to a register)
16 → JMP (jump unconditionally to the given instruction)
98 → OUT (print a register's value)

Before doing a dry run, let’s map the registers.

Memory Map (Registers)

R[0]  = memory[0]     // running total
R[1]  = memory[1]     // current count (1 to 10)
R[2]  = memory[2]     // temp/comparison register
R[11] = memory[11]    // constant limit = 11

Initial Memory:

memory[0] = 0      // R0 = total
memory[1] = 0      // R1 = count
memory[2] = 0      // R2 = temp
memory[11] = 11    // R11 = loop limit

Iteration 1:

MOV 0, 0 → memory[0] = 0
MOV 1, 1 → memory[1] = 1
MOV 2, 1 → memory[2] = 1
SUB 11, 2 → memory[2] = 1 - 11 = -10 ≠ 0 → continue
JZ not taken
ADD 0, 1 → memory[0] = 0 + 1 = 1
ADD 1, 1 → memory[1] = 1 + 1 = 2
JMP 2
Iteration 2:

MOV 2, 1 → memory[2] = 2
SUB 11, 2 → memory[2] = 2 - 11 = -9 ≠ 0
JZ not taken
ADD 0, 1 → memory[0] = 1 + 2 = 3
ADD 1, 1 → memory[1] = 2 + 1 = 3
JMP 2

The iterations continue in the same pattern until the count reaches 10.

Iteration 10:

memory[1] = 10
MOV 2, 1 → memory[2] = 10
SUB 11, 2 → memory[2] = 10 - 11 = -1 ≠ 0
JZ not taken
ADD 0, 1 → memory[0] = 45 + 10 = 55
ADD 1, 1 → memory[1] = 11
JMP 2
Iteration 11:

memory[1] = 11
MOV 2, 1 → memory[2] = 11
SUB 11, 2 → memory[2] = 11 - 11 = 0 (the loop-exit condition is now true)
JZ taken → jump to instruction 8 (the ninth instruction, counting from zero)
OUT 0 → print memory[0] = 55

So, the final output will be 55.

Each line of the previous program contains a single instruction. It could be written in English like this:

  1. Store the number 0 in memory location 0.

  2. Store the number 1 in memory location 1.

  3. Store the value of memory location 1 in memory location 2.

  4. Subtract the number 11 from the value in memory location 2.

  5. If the value in memory location 2 is the number 0, continue with instruction 9.

  6. Add the value of memory location 1 to memory location 0.

  7. Add the number 1 to the value of memory location 1.

  8. Continue with instruction 3.

  9. Output the value of memory location 0.

The same loop in high-level terms (C++):

#include <iostream>

int main() {
    int sum = 0;
    for (int i = 1; i <= 10; ++i) {
        sum += i;
    }
    std::cout << sum; // prints 55
}

Finally, here's the human-readable assembly that an assembler would translate into the bytecode above, ready to run on a custom virtual machine written in C++:

MOV total, 0
MOV count, 1
LOOP:
  MOV compare, count
  SUB compare, 11
  JZ compare, END
  ADD total, count
  INC count
  JMP LOOP
END:
  PRINT total
HALT

Wrap Up

What this binary sequence beautifully illustrates is the concept of abstraction in computing. While we, as humans, think of "summing numbers from 1 to 10," the computer sees only a series of highly specific, low-level commands to manipulate binary values in its registers and memory.

This simple program, despite its cryptic appearance, is a testament to the elegant logic that underpins all digital computation. Every app, every website, every complex calculation you perform on your device ultimately boils down to sequences of 0s and 1s, much like these, meticulously orchestrated to achieve a desired outcome. It's a hidden language, but one that drives the entire digital world we inhabit.

Written by

Nitin Saini

A Full Stack Web Developer, possessing a strong command of React.js, Node.js, Express.js, MongoDB, and AWS, alongside Next.js, Redux, and modern JavaScript (ES6+)