Avoiding problems with JavaScript’s weird decimal calculations

1.2 + 1.1 happens to come out as exactly 2.3, but 0.2 + 0.1 does not: it evaluates to 0.30000000000000004 rather than 0.3.

This is a problem in virtually every programming language in use today. The root cause is that 1/10 cannot be represented exactly as a binary fraction, just as 1/3 cannot be represented exactly as a decimal fraction.
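You can see this directly in a browser or Node console (the binary expansion in the last line is truncated here):

0.2 + 0.1 === 0.3 // false
1.2 + 1.1 === 2.3 // true – this particular sum happens to round back to the right value
(0.1).toString(2) // "0.000110011001100110011..." – 1/10 repeats forever in binary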

The workaround is to round to only the number of decimal places you actually need, and then either work with the resulting strings, which compare exactly:

(0.2 + 0.1).toFixed(4) === 0.3.toFixed(4) // true

or convert them back to numbers afterwards:

+(0.2 + 0.1).toFixed(4) === 0.3 // true

or use Math.round:

Math.round(0.2 * X + 0.1 * X) / X === 0.3 // true

where X is some power of 10, e.g. 100 or 10000, depending on the precision you need.
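The same idea as a small helper (roundTo is just a hypothetical name; decimals is the number of decimal places to keep):

function roundTo(value, decimals) {
  const factor = Math.pow(10, decimals); // e.g. 100 for 2 decimal places
  return Math.round(value * factor) / factor;
}

roundTo(0.2 + 0.1, 2) === 0.3 // true

Note that the multiplication itself can still introduce a tiny error for some inputs (for example, 1.005 * 100 gives 100.49999999999999), so this is a pragmatic fix rather than a complete solution.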

Or you can use cents instead of dollars when counting money:

const cents = 1499; // $14.99

That way you only work with integers and you don’t have to worry about decimal and binary fractions at all.
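For example, summing a cart in integer cents and only formatting at the end (the item prices and variable names here are just an illustration):

const pricesInCents = [1499, 250, 999]; // $14.99, $2.50, $9.99
const totalCents = pricesInCents.reduce((sum, p) => sum + p, 0); // 2748 – exact integer arithmetic
const display = '$' + (totalCents / 100).toFixed(2); // "$27.48" – divide only for display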

2017 Update

Representing numbers in JavaScript is a little more complicated than it used to be. It used to be the case that we had only one numeric type in JavaScript:

  • 64-bit floating point (the IEEE 754 double precision floating-point number – see: ECMA-262 Edition 5.1, Section 8.5 and ECMA-262 Edition 6.0, Section 6.1.6)

This is no longer the case: not only are there more numeric types in JavaScript today, but more are on the way, including a proposal to add arbitrary-precision integers (BigInt) to ECMAScript, and hopefully arbitrary-precision decimals will follow. See this answer for details (a short BigInt sketch follows the link):

  • Difference between floats and ints in Javascript?
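If the BigInt proposal is available in your environment, integer arithmetic becomes exact at any magnitude. A minimal sketch (note that BigInt and regular Number values cannot be mixed in a single operation):

// Requires an engine that supports BigInt
const big = 10n ** 30n + 1n; // exact, no precision loss at this magnitude
const next = BigInt(Number.MAX_SAFE_INTEGER) + 2n; // 9007199254740993n
typeof big // "bigint"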

See also

Another relevant answer with some examples of how to handle the calculations:

  • Node giving strange output on sum of particular float digits
