The Magic Behind the Android Calculator


The Soul of a New Calculator: Why 10¹⁰⁰ + 1 - 10¹⁰⁰ = 0 in iOS, and How Google Fixed It
In the digital world, we often take the correctness of our tools for granted. We trust our computers to compute, and for the most part, they do a phenomenal job. But sometimes, in the silent, humming logic of their processors, they get things subtly, yet profoundly, wrong.
Consider this seemingly trivial calculation:
$$10^{100} + 1 - 10^{100}$$
The answer, to any human with a basic grasp of arithmetic, is obviously 1. You have an unimaginably large number, you add one to it, and then you take the original large number away. What remains is one.
Now, try this on many calculators, including the one iOS shipped for a very long time. The answer you'll get is 0.
This isn't a simple bug. It's a symptom of a deep, fundamental challenge in computer science: representing the infinite, messy world of real numbers within the finite constraints of a machine. This single, flawed equation opens a rabbit hole that leads us to the very heart of how computers handle numbers, and how Google's engineering team re-imagined the calculator to be fundamentally more honest about its results.
The Tyranny of the Floating Point: A Deal with the Devil
The reason for the error above lies in a concept called floating-point arithmetic. It's the standard way computers have represented and handled decimal numbers for decades (codified in the IEEE 754 standard).
A floating-point number is stored in a way similar to scientific notation. It has two main parts:
The Significand (or Mantissa): This holds the significant digits of the number. Think of it as the 6.022 in \(6.022 \times 10^{23}\) (Avogadro's number, by the way).
The Exponent: This determines the position of the decimal point, representing the magnitude of the number. It's the 23 in \(6.022 \times 10^{23}\).
The critical constraint is that both the significand and the exponent have a fixed number of bits to store their information. This is where the deal with the devil is made: in exchange for being able to represent an enormous range of numbers, we sacrifice precision.
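To make that split concrete, here's a quick peek, in Python (whose floats are standard IEEE 754 doubles), at the two parts of a stored number; math.frexp and float.hex simply expose the significand and the exponent:

```python
import math

x = 6.022e23                        # Avogadro's number, stored as a double

mantissa, exponent = math.frexp(x)  # x == mantissa * 2**exponent, with 0.5 <= mantissa < 1
print(mantissa, exponent)           # roughly 0.9963 and 79

print(x.hex())                      # something like '0x1.fe...p+78': a hex significand and a base-2 exponent
```

Both of those parts have to fit in a fixed number of bits, which is exactly the deal described above.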
Let's go back to \(10^{100}\). This is a 1 followed by 100 zeros. A standard double-precision float has about 15-17 decimal digits of precision in its significand. To store \(10^{100}\), the calculator essentially stores something like \(1.000000000000000 \times 10^{100}\). All the available slots in the significand are used up just to say "this is a one followed by a bunch of zeros".
Now, what happens when we add 1? We are trying to perform this operation:
$$(1.000000000000000 \times 10^{100}) + 1$$
To add them, we first have to express the 1 with the same exponent as \(10^{100}\), which turns it into a decimal point followed by a truck ton of zeroes and then a 1 (0.00000……01), multiplied by \(10^{100}\). The 1 is so infinitesimally small compared to \(10^{100}\) that there is no room in the significand to register its existence. It's a catastrophic rounding error. Imagine trying to measure the height of Mount Everest with a ruler marked only in kilometers: add a single brick to the summit, and the measured height won't register the minuscule change. The tiny change is completely lost.
So, the calculator computes \(10^{100} + 1\) and gets \(10^{100}\) back. The subsequent subtraction, \(10^{100} - 10^{100}\), naturally results in 0.
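You can watch this cancellation happen with ordinary double-precision floats. The snippet below uses Python (any IEEE 754 double behaves the same way; math.ulp needs Python 3.9+):

```python
import math

huge = 1e100                 # a 1 followed by 100 zeros, as a double

print(huge + 1 == huge)      # True: adding 1 changes nothing
print(huge + 1 - huge)       # 0.0, not 1.0

# The gap ("ulp") between adjacent representable doubles near 1e100 is enormous,
# so anything much smaller than it simply vanishes when added:
print(math.ulp(1e100))       # on the order of 1.9e84
```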
Google's Solution: The "Recursive Real" Promise
The Android team, led by engineer Hans-J. Boehm, saw this not as an unavoidable limitation but as a problem to be solved. They implemented a system based on recursive real arithmetic.
The core idea is revolutionary. Instead of storing a number as a single, potentially inaccurate, floating-point approximation, the calculator represents a number as a promise. This promise is essentially an algorithm or a function that can provide an approximation of the number to any desired level of precision.
This means the number on your Android calculator's screen is not a static value computed once when you hit equals. The more precision you ask of it, by sliding right to reveal more significant digits, the more it keeps computing, using an ingenious function.
Let's call this function Approximate(n).
When you ask this function for a number x with precision n, it guarantees to return a simple, perfectly representable number d such that the true value of x is trapped in an interval around d. More formally, it guarantees that \(|x-d| \leq \frac{1}{2^n}\).
The key is that you can always ask for more precision. If n=10 isn't good enough, you can ask for n=20, and the function will run its algorithm for a bit longer and give you a much better approximation. The number isn't a static value; it's a dynamic process, a computation waiting to happen. I bet you didn't think it was even happening in real time :)
But what are these "perfectly representable" numbers it returns? This is where the elegant beauty of dyadic rationals comes in.
Consider the equation \(|x - d| \leq \frac{1}{2^n}\). The d here is the "dyadic rational": an integer a in the numerator over \(2^n\) in the denominator, \(d = \frac{a}{2^n}\). The interesting thing to notice is that the error bound \(\epsilon = \frac{1}{2^n}\) and d share the denominator of the exact same precision, \(2^n\), and by increasing n we can keep increasing the precision of our computation. The only thing left to determine, if you have been following closely, is the integer numerator a of d. We will pick this up in a bit!
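As a rough sketch of the promise idea, for a simple rational where the numerator a is easy to compute exactly, a number can literally be a function from a precision n to a numerator. This is illustrative Python with made-up names, not the actual Android source:

```python
from fractions import Fraction

# A real number x is represented as a "promise": call appr(n) and you get back an
# integer a with |x - a/2^n| <= 1/2^n. Bigger n, better approximation, more work.

def appr_one_third(n: int) -> int:
    # For a rational p/q the numerator is a rounded integer division,
    # done exactly with arbitrary-precision integers (no floats involved).
    p, q = 1, 3
    return (p * (1 << n) + q // 2) // q

for n in (4, 10, 30):
    a = appr_one_third(n)
    d = Fraction(a, 1 << n)
    print(f"n={n:2}  a={a}  d={d}  ~ {float(d)}")
```

Sliding right on the calculator display corresponds to calling this promise again with a larger n.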
One thing we have established by now: Apple was using floating-point arithmetic with a ginormous range, as I mentioned at the start, while Google takes the approach of using rational numbers. And considering how much of math is made up of irrational numbers (\(\pi\), square roots, \(e\), and so on), you must be wondering: how do you suppose we handle the irrationals using rational numbers?
Well, think about it. An irrational is a number whose expansion never ends and never repeats, so no finite fraction nails it exactly. Isn't it natural, then, that if we want more precision of an irrational, we simply compute it to a better precision?
And how do we do that? When we detect that the user wants to see more digits, we get to work and compute more and more, and all of this is made possible by recursive real arithmetic and a really clever function which helps us piece it all together.
On the left is the Android Calculator and on the right is the iOS Calculator :)
Dyadic Rationals: The Bedrock of Digital Precision
A dyadic rational is a fraction whose denominator is a power of two.
Examples:
1/2 (denominator is \(2^1\))
3/4 (denominator is \(2^2\))
5/8 (denominator is \(2^3\))
13/16 (denominator is \(2^4\))
These numbers are the native tongue of computers. Because computers are built on a base-2 (binary) system, any fraction with a denominator that is a power of 2 has a finite, exact representation in binary. They are to a computer what numbers like 0.1 (1/10) and 0.01 (1/100) are to us in our base-10 world. They are clean, precise, and unambiguous.
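A quick way to convince yourself of this (Python's Fraction recovers the exact value a float actually stores):

```python
from fractions import Fraction

# Dyadic rationals survive the trip through binary floating point unchanged...
for num, den in [(1, 2), (3, 4), (5, 8), (13, 16)]:
    exact = Fraction(num, den)
    stored = Fraction(float(exact))   # the exact value held by the double
    print(exact, stored == exact)     # True every time

# ...while an innocent-looking decimal like 0.1 does not:
print(Fraction(0.1))                  # 3602879701896397/36028797018963968
```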
So, how does the calculator use them?
The recursive real algorithm uses these dyadic rationals to relentlessly corner a real number within an ever-shrinking interval. Let's walk through the unanswered questions from before. How does it approximate 1/3?
The First Approximation: The calculator needs to find a starting interval. It's easy to determine that 0 < 1/3 < 1, i.e. the outermost interval is \([0, 2^0]\). To zoom in to more precision, it takes the midpoint of [0, 1], which is (0 + 1) / 2 = 1/2, and compares our target, 1/3, with that midpoint. Is 1/3 larger or smaller than 1/2? Cross-multiply: 1 * 2 versus 3 * 1. Since 2 < 3, the right-hand side is bigger, that is, 1/2 is bigger, so our new interval becomes [0, 1/2].
How does the calculator determine the next step?
Repeat the step above yourself and you'll soon find that the next interval is 1/4 < 1/3 < 1/2. (A quick check: 1/4 = 0.25, 1/3 = 0.333..., 1/2 = 0.5.) So the "promise" at this point is that the number 1/3 lives somewhere in the interval [1/4, 1/2], whose width is 1/4. The calculator then bisects again: it calculates the midpoint, which will always be another dyadic rational (see where the recursive nature comes from?).
The midpoint of [1/4, 1/2] is (1/4 + 1/2) / 2 = 3/8.
Now it compares our target, 1/3, with this midpoint, 3/8. Is 1/3 smaller or larger than 3/8? To check, we cross-multiply: 1 * 8 versus 3 * 3, i.e. 8 versus 9. Since 8 < 9, we know that 1/3 < 3/8.
This tells the calculator that 1/3 must be in the lower half of the previous interval. Our new, improved interval is [1/4, 3/8].
Refining the Approximation: Let's do it again. The new interval is [1/4, 3/8].
The midpoint of [1/4, 3/8] is (1/4 + 3/8) / 2 = 5/16.
Now, compare 1/3 with 5/16. Cross-multiply: 1 * 16 versus 3 * 5, i.e. 16 versus 15. Since 16 > 15, we know that 1/3 > 5/16.
This means 1/3 is in the upper half of the [1/4, 3/8] interval. So our new, even truer approximation is the interval [5/16, 3/8]. To put both ends on a common denominator, this is [5/16, 6/16].
Why is [5/16, 6/16] a "truer" approximation?
Because the interval is smaller. Our first interval, [1/4, 1/2], had a width of 1/4. Our new interval, [5/16, 6/16], has a width of only 1/16. We have "trapped" the true value of 1/3 in a space that is four times smaller. The recursive algorithm can repeat this process indefinitely, bisecting the interval again and again, generating an endless stream of progressively more accurate dyadic rational bounds for the true number.
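Here is the same hunt written out as a small Python sketch. It mirrors the walkthrough above rather than the production code, and uses exact Fraction arithmetic so nothing gets rounded behind our backs:

```python
from fractions import Fraction

def trap(x: Fraction, steps: int) -> None:
    """Trap x inside ever-shrinking dyadic intervals [lo, hi] by bisection."""
    lo, hi = Fraction(0), Fraction(1)   # assumes 0 <= x <= 1 to keep the sketch short
    for _ in range(steps):
        mid = (lo + hi) / 2             # the midpoint of two dyadic rationals is dyadic
        if x < mid:                     # this comparison is the cross-multiplication step
            hi = mid                    # x lives in the lower half
        else:
            lo = mid                    # x lives in the upper half
        print(f"[{lo}, {hi}]  width = {hi - lo}")

trap(Fraction(1, 3), 4)
# [0, 1/2] -> [1/4, 1/2] -> [1/4, 3/8] -> [5/16, 3/8]
```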
The Engine Room: A Function for Finding the Numerator
We've established that Google's calculator represents any number x as a "promise": an algorithm that can find a dyadic rational \(d = \frac{a}{2^n}\) which is guaranteed to be incredibly close to x (or rather, as close as the user wants it to be). The core challenge of this entire system boils down to one critical task: how do you reliably determine the integer numerator, a, for any given precision n?
Simply calculating an approximation of x, multiplying it by \(2^n\), and rounding to the nearest integer is a recipe for disaster.
For example, if we want to approximate x = 0.76 with a denominator of 4 (that is, precision n = 2), we multiply: 0.76 * 4 = 3.04. Rounding this result gives us the integer numerator a = 3. This might seem tempting because of the ease of computation, but the tiny, inherent errors in standard floating-point math could easily place the result perilously close to a .5 boundary, making the rounding decision a 50/50 guess. Getting it wrong would violate the fundamental promise of the entire system.
To solve this, the system uses a clever internal helper function. Think of it as the master machinist in the engine room, tasked with crafting that numerator a with unwavering reliability. This function's secret weapon is to work with more precision than is strictly required. The intuition is like planning two steps ahead of your next move, so you already know what to do if things go south!
The 2-Bit Safety Net
The core principle is this: To confidently deliver a result with n bits of precision, first calculate an intermediate result with n+2 bits of precision.
Why n+2? Let's break down what that means.
In binary, each additional bit of precision increases the granularity of your result. For fixed-point or integer representations, this can be interpreted as halving the quantization error.
Adding 1 bit (n+1) makes your calculation 2 times more precise.
Adding 2 bits (n+2) makes your calculation 4 times more precise.
This is the magic number. By computing with 4x the required precision, the function creates a "safety net" for itself. It effectively scales down the potential rounding error by a factor of 4, making it far too small to cause an incorrect rounding decision for the final numerator.
Our original equation was \(|x - \frac{a}{2^n}| \leq \frac{1}{2^n}\). Let's walk through how this function determines a.
The Goal: We need to find an integer a for our dyadic rational d = a/2^n.
The Over-Precision Step: Instead of calculating x * 2^n, the function calculates x * 2^(n+2). It asks for a standard floating-point approximation of this much more precise value.
The Division and Rounding: The result from step 2 is then divided by 2^2 (which is 4) to bring it back to the correct scale for the n-bit precision level. Let's call this result y.
The Confident Decision: The function then simply rounds y to the nearest integer. This integer is our numerator, a.
To sum it up in a generalized equation: if we want to add two numbers, say x and y, and we call the function that approximates the numerator a for x as \(f_x(n)\), with n being the precision, we can frame the following rule for the sum:
$$f_{x+y}(n) = \operatorname{Round}\left(\frac{f_x(n + 2) + f_y(n + 2)}{4}\right)$$
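In code, that rule is only a few lines. The sketch below is illustrative Python with made-up names, not the Android source: each number is a function returning its scaled numerator, and the sum's function is built from the operands' functions.

```python
def appr_third(n: int) -> int:
    # Promise for 1/3: returns a with |1/3 - a/2^n| <= 1/2^n, using exact integers.
    return ((1 << n) + 1) // 3

def add(appr_x, appr_y):
    """f_{x+y}(n) = Round((f_x(n+2) + f_y(n+2)) / 4), i.e. the equation above."""
    def appr_sum(n: int) -> int:
        s = appr_x(n + 2) + appr_y(n + 2)   # ask each operand for two extra bits
        return (s + 2) // 4                 # divide by 2^2 = 4, rounding to nearest
    return appr_sum

appr_two_thirds = add(appr_third, appr_third)
a = appr_two_thirds(10)
print(a, a / 2**10)   # 683, about 0.667: within 1/2^10 of 2/3
```

The error bookkeeping works out neatly: each operand is off by at most \(\frac{1}{2^{n+2}}\), so their sum is off by at most \(\frac{1}{2^{n+1}}\), and the final rounding adds at most another \(\frac{1}{2^{n+1}}\), keeping the total within the promised \(\frac{1}{2^n}\).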
A Concrete Example
Imagine we are trying to find the dyadic rational approximation for x = 1/3 with a precision of n=4. Our goal is to find an integer a such that our approximation is \(d =\frac{a}{2^4}=\frac{a}{16}\).
The Naive (Risky) Approach:
Calculate x * 2^4 = (1/3) * 16 = 5.3333...
A standard floating-point calculation might return 5.333333333333333. This looks safe to round down to 5. But what if the number was much closer to a boundary, and the tiny floating-point error was just enough to push it to one side? We can't be certain.
The Robust n+2 Approach:
The Goal: Find a for n=4.
Over-Precision Calculation: The function calculates with n+2 = 6 bits of precision. It computes \(x*2^{4+2}=\frac{1}{3}*2^6=21.3333...\). A floating-point unit would give a hardware-level approximation of this, say 21.333333333.
Scale Down: Now, the function takes this highly precise result and divides it by 2^2 = 4: \(y=\frac{21.333333333}{4}=5.33333333325\).
The Confident Round: The final step is to round y to the nearest integer. It's unequivocally clear that 5.33333333325 rounds to 5.
Our numerator a is 5. Our dyadic rational is 5/16.
The crucial insight is that any potential error from the hardware-level calculation in Step 2 was also divided by 4 in Step 3. That tiny error has been scaled down into insignificance. The value y is now so far from the .5 rounding boundary that we can make the decision with near-absolute certainty.
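The same walkthrough, in a few lines of Python (illustrative only; it leans on ordinary doubles for the intermediate step, just as the walkthrough does):

```python
n = 4                           # target precision: denominator 2^4 = 16

over = (1 / 3) * 2 ** (n + 2)   # over-precise step: about 21.333333333333332
y = over / 4                    # scale back down:   about 5.333333333333333
a = round(y)                    # comfortably far from any .5 boundary

print(a, f"-> {a}/{2 ** n}")    # 5 -> 5/16
```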
This n+2 rule is a beautifully pragmatic solution. It uses the speed of standard floating-point hardware but wraps it in a clever algorithm that nullifies its primary weakness, allowing the recursive real system to deliver on its promise of arbitrary and, more importantly, dynamically increasing precision.
When you see the Android calculator display 0.333333333333 and you scroll to see more digits, you are not seeing a static, pre-computed number. You are essentially asking the Approximate(n) function for a higher n, and it is running this bisection algorithm on the fly to give you a more precise dyadic rational approximation, which is then displayed in decimal for you to read.
It's a system that doesn't pretend to have perfect knowledge. Instead, it offers a promise of perfect refinability. It's honest about the nature of approximation, and in doing so, it avoids the catastrophic, silent errors that plague less sophisticated systems. It's a calculator that truly understands the soul of the numbers it's computing.
Before wrapping this up, let me show you one more example of the brilliance of this algorithm.
Below is the iOS Calculator trying to calculate the vanishingly small (far outside its range) value of \(e^{-1000}\).
Now, below is the Android Calculator calculating the same quantity.
See the difference? Interesting, isn't it?