Exploring Floating-Point Precision: Why 9.21 + 1.66 Equals 10.870000000000001

Aryan Chaurasia

If you have spent any time with a programming language, you have probably run into puzzling results from floating-point arithmetic. A classic example is this seemingly simple addition:

console.log(9.21 + 1.66); // Output: 10.870000000000001

At first glance, this might look like a bug or an error in the language. However, this behavior is rooted in the very nature of how computers handle numbers.


The Roots of Floating-Point Representation

Binary vs. Decimal

Computers operate in binary, meaning they represent all numbers using bits (0s and 1s). While integers (within range) have exact binary representations, most decimal fractions do not: just as 1/3 has no finite decimal expansion, a number like 9.21 has no finite base-2 expansion, so it is stored as the closest approximation the format allows. The same applies to 1.66.
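
You can see the approximation directly by asking JavaScript to print more significant digits than the default formatting shows. The trailing digits below come from how each literal rounds to the nearest 64-bit double:

console.log((9.21).toPrecision(18)); // 9.21000000000000085 — slightly above 9.21
console.log((1.66).toPrecision(18)); // 1.65999999999999992 — slightly below 1.66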

This approximation occurs because only a finite number of bits is available to represent a floating-point number. Most systems follow the IEEE-754 standard; its double-precision format packs a value into 64 bits: 1 sign bit, 11 exponent bits, and 52 bits for the mantissa (the fractional part of the significand). This layout offers a good balance between range and precision, but it comes with a trade-off: not every decimal number can be represented exactly.
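
If you are curious what those 64 bits look like, standard typed-array APIs let you peek at the raw encoding. Here is a small sketch (doubleToBits is just an illustrative helper name):

// Dump the raw IEEE-754 bit pattern of a 64-bit double
function doubleToBits(x) {
  const buffer = new ArrayBuffer(8);
  new DataView(buffer).setFloat64(0, x); // big-endian byte order by default
  return [...new Uint8Array(buffer)]
    .map(byte => byte.toString(2).padStart(8, '0'))
    .join('');
}

const bits = doubleToBits(9.21);
console.log(bits.slice(0, 1));  // 1 sign bit
console.log(bits.slice(1, 12)); // 11 exponent bits
console.log(bits.slice(12));    // 52 mantissa bits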

How It Affects Arithmetic

When you add two approximated numbers, the tiny errors in their binary representations combine, and the sum itself is then rounded to the nearest representable double. In our example, 9.21 + 1.66 should mathematically equal 10.87, but the computer arrives at a value that prints as 10.870000000000001. This is not a miscalculation; it is simply the closest answer binary floating-point arithmetic can produce.
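
One practical consequence is that strict equality checks on floating-point results are unreliable. The usual workaround is to compare against a small tolerance (nearlyEqual and the 1e-9 threshold below are just illustrative choices, not a built-in API):

// Strict equality fails because of the rounding error in the sum
console.log(9.21 + 1.66 === 10.87); // false

// Comparing within a small tolerance behaves as expected
const nearlyEqual = (a, b, eps = 1e-9) => Math.abs(a - b) < eps;
console.log(nearlyEqual(9.21 + 1.66, 10.87)); // true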


Diving Into the IEEE-754 Standard

The IEEE-754 standard was designed to provide a universal method for representing floating-point numbers across different hardware and software platforms. This consistency is invaluable for ensuring that numerical computations behave similarly regardless of the underlying system.

Why Not “Fix” It?

  • Hardware Constraints: Modern CPUs are designed to adhere to the IEEE-754 standard. Changing the fundamental way numbers are represented would require a complete overhaul at the hardware level—a monumental task given the vast ecosystem of devices.

  • Software Compatibility: Programming languages and libraries are built around these hardware specifications. Altering the representation method could lead to compatibility issues, breaking countless applications that rely on the current standards.

  • Performance Considerations: The IEEE-754 format is optimized for performance. Alternatives that provide exact decimal arithmetic (like arbitrary-precision libraries) tend to be slower and more resource-intensive. For many applications, the slight imprecision is an acceptable trade-off for speed.


JavaScript Example: Addressing Precision with a Library

JavaScript’s built-in arithmetic uses the IEEE-754 double-precision floating-point format. This means that the imprecision we saw is inherent:

console.log(9.21 + 1.66); // Output: 10.870000000000001
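
If all you need is a tidy value for display, rounding the result is often enough; this is a lightweight alternative to pulling in a full decimal library:

const sum = 9.21 + 1.66;                  // 10.870000000000001
console.log(sum.toFixed(2));              // "10.87" (note: toFixed returns a string)
console.log(Math.round(sum * 100) / 100); // 10.87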

For applications where precise decimal arithmetic is crucial—such as financial computations—you can use libraries like decimal.js or bignumber.js.

Here’s an example using decimal.js:

// First, install the library with: npm install decimal.js
const Decimal = require('decimal.js');

// Pass the values as strings so they are never first parsed as binary doubles
const a = new Decimal('9.21');
const b = new Decimal('1.66');
const result = a.plus(b);

console.log(result.toString()); // Output: 10.87

In this example, decimal.js handles numbers as exact decimals, ensuring that the sum of 9.21 and 1.66 is precisely 10.87.


Conclusion

The phenomenon of 9.21 + 1.66 resulting in 10.870000000000001 is not a bug—it’s a natural consequence of the limitations of binary floating-point representation as defined by the IEEE-754 standard. This behavior underscores an important lesson in computer science: understanding the underlying representations and limitations of your data types is crucial.

For most applications, the slight imprecision is an acceptable compromise for performance and compatibility. However, when precision is critical, robust alternatives like decimal.js in JavaScript or similar libraries in other languages provide a reliable solution by trading off some speed for accuracy.

By being aware of these nuances, developers can make informed decisions about which numerical representations to use and how to handle potential errors in arithmetic operations.

Happy coding!
