Why 0.1 + 0.2 != 0.3 in C# (And Other Languages Too!)

Morteza Jangjoo

One of the most surprising things for C# beginners, and even for experienced developers in many other languages, is this:

Console.WriteLine(0.1 + 0.2 == 0.3); // Outputs: False

Wait, what? Isn’t 0.1 + 0.2 supposed to be exactly 0.3?

Let’s break it down.


Floating-Point Precision: The Real Culprit

C# uses the IEEE 754 standard for floating-point arithmetic (the float and double types), just like most modern programming languages such as Java, JavaScript, Python, and Go.

The numbers 0.1 and 0.2 cannot be represented exactly in binary floating-point. When you write:

double a = 0.1;
double b = 0.2;
double sum = a + b;
Console.WriteLine(sum);       // Outputs: 0.30000000000000004
Console.WriteLine(sum == 0.3); // False

What you’re really comparing is:

0.30000000000000004 == 0.3   // False

That small difference is caused by binary approximation. Floating-point types can't store most decimal fractions exactly — just like you can’t write 1/3 precisely in decimal (0.333...).
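If you want to see what the double actually stores, the "G17" (round-trip) format specifier prints enough digits to expose the approximation that the default formatting hides:

double x = 0.1;
Console.WriteLine(x.ToString("G17"));           // 0.10000000000000001
Console.WriteLine((0.1 + 0.2).ToString("G17")); // 0.30000000000000004
Console.WriteLine((0.3).ToString("G17"));       // 0.29999999999999999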


Is This Just a C# Problem?

Not at all.

Try it in Python:

print(0.1 + 0.2 == 0.3)  # False

In JavaScript:

console.log(0.1 + 0.2 === 0.3); // False

In Java:

System.out.println(0.1 + 0.2 == 0.3); // False

All these languages use the same floating-point standard — IEEE 754 — and therefore face the same issue.
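In fact, you can see in C# that 0.1 + 0.2 and 0.3 end up as two different 64-bit IEEE 754 values, just one bit apart in the last place:

Console.WriteLine(BitConverter.DoubleToInt64Bits(0.1 + 0.2).ToString("X")); // 3FD3333333333334
Console.WriteLine(BitConverter.DoubleToInt64Bits(0.3).ToString("X"));       // 3FD3333333333333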


How Should You Compare Floating-Point Numbers?

Instead of testing for exact equality, compare using a small tolerance (also known as an epsilon):

bool AreEqual(double a, double b, double epsilon = 1e-10)
{
    return Math.Abs(a - b) < epsilon;
}

Console.WriteLine(AreEqual(0.1 + 0.2, 0.3)); // True

This is the recommended approach whenever you compare computed floating-point values.
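One caveat: a fixed absolute epsilon like 1e-10 only makes sense when the values involved are around magnitude 1. For values that can be very large or very small, a tolerance scaled to the inputs (a relative epsilon) is usually safer. Here is a minimal sketch of that idea, using a hypothetical AreClose helper:

// Sketch: the allowed difference scales with the magnitude of the inputs.
bool AreClose(double a, double b, double relTol = 1e-9)
{
    return Math.Abs(a - b) <= relTol * Math.Max(Math.Abs(a), Math.Abs(b));
}

Console.WriteLine(AreClose(1e15 + 0.1, 1e15 + 0.2)); // True: the difference is negligible relative to 1e15
Console.WriteLine(AreEqual(1e15 + 0.1, 1e15 + 0.2)); // False with the fixed epsilon from AreEqual above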


What If You Need Exact Decimal Precision?

If you're dealing with money or other financial calculations, use decimal instead of double in C#. The decimal type works in base 10 with 28-29 significant digits, so decimal fractions like 0.1 and 0.2 are represented exactly and the binary rounding error disappears:

decimal a = 0.1m;
decimal b = 0.2m;
Console.WriteLine(a + b == 0.3m); // True
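To be fair, decimal isn't magic either: it's still a finite-precision type, just in base 10, so fractions that don't terminate in decimal, like 1/3, still get rounded:

decimal third = 1m / 3m;
Console.WriteLine(third);           // 0.3333333333333333333333333333
Console.WriteLine(third * 3 == 1m); // False: base-10 rounding shows up here instead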

Conclusion

  • This is not a bug — it’s how floating-point arithmetic works.

  • Don’t use == for comparing float or double.

  • Use a tolerance (epsilon) or switch to decimal when precision is critical.

Understanding this subtle but important detail can save you from lots of debugging headaches!


✍️ I’m Morteza Jangjoo, and my motto is: “Explaining things I wish someone had explained to me.”

