Why does 1234.6 * 100 result in 123459.99999999999?

Erlang/OTP 24 [erts-] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [jit]

Interactive Elixir (1.13.4) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> 1234.6 * 10
12346.0
iex(2)> 1234.6 * 100
123459.99999999999

Could someone help clarify this, or share any links/documents to be aware of about this behaviour?


I found a blog post that explains it: Arbitrary-precision arithmetic in Erlang and Elixir | by Maxim Molchanov | Medium


The blog post you found explains the phenomenon well. For future reference, the number section in the Erlang reference manual also explains why floats print the way they do.


Has this improved in OTP25 with the new float_to_list implementation? If not, would using [shortest] by default for ~p be an acceptable change?


Such is the nature of binary floating-point arithmetic.
ok@tarski:~/Downloads$ python3
Python 3.6.9 (default, Mar 15 2022, 13:55:28)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1234.6 * 100
123459.99999999999

Here’s a floating point converter tool to play around with. Try even “simple” numbers like 0.1 and 0.01.



Generally you should avoid using float() in your applications. Floating point math performs much worse than integer math and leads to results which violate the principle of least astonishment.

Especially do not use floating point for monetary values. Money is commonly represented with decimal notation however that does NOT mean it should be stored as float(). Doing so will make your accountant very sad, he likes it when columns add up! Monetary amounts should be integer multiples of cents internally and converted to/from the human readable decimal form. If you require fractions of cents then decide on a minimum unit (i.e. a thousandth of a cent). This way performance is better and math actually works the way you expect it to.
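A minimal sketch of that integer-cents approach, in Python for illustration (the helper names `to_cents` and `format_cents` are made up here, not from any library mentioned in this thread):

```python
# Keep money as an integer number of cents and only convert to/from
# the human-readable decimal form at the edges of the system.
# Note: this sketch ignores negative amounts and input validation.

def to_cents(text: str) -> int:
    """Parse a decimal string like '19.99' into integer cents."""
    dollars, _, frac = text.partition(".")
    frac = (frac + "00")[:2]  # pad/truncate the fraction to 2 digits
    return int(dollars) * 100 + int(frac)

def format_cents(cents: int) -> str:
    """Render integer cents back as a decimal string."""
    return f"{cents // 100}.{cents % 100:02d}"

subtotal = to_cents("19.99") + to_cents("0.01")
print(format_cents(subtotal))  # the columns add up exactly: 20.00
```

With floats, `19.99 + 0.01` would already carry rounding error; with integer cents the addition is exact.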


Integer arithmetic makes everything complicated wherever you need to perform a division, and you cannot properly interpolate, etc.

Floats are perfectly fine as long as you round in the right places (or use the new shortest representation :)) and ensure that your calculations stay well below the accuracy limits of 64-bit doubles. I have been developing and operating an algorithmic energy trading application in Erlang for 10 years, and we have never had a single(!) issue with float arithmetic, neither in the middle office nor the back office, since switching exclusively to float arithmetic about 7 years ago.


afaict, ~p should already be using [shortest] in OTP25 as it calls it under the hood. If it is not the case, it is a bug.


Everyone has already pointed out variations of this, but in most programming languages, floating point numbers are approximations of numbers. So when reading bindings with them, treat the = as ≈ (approximately equal) in your head.


any idea why dc works?

$ dc
1234.6 100 *p
123460.0

any idea why dc works?

dc apparently doesn’t use floating-point. man dc says:

dc is a reverse-polish desk calculator which supports unlimited precision arithmetic.
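You can get the same exact answer from any exact, non-floating-point arithmetic; for example, with Python's `fractions` module (used here purely as an illustration):

```python
from fractions import Fraction

# Exact rational arithmetic: no binary floating point is involved,
# so the decimal value 1234.6 is held exactly as the fraction 6173/5.
x = Fraction("1234.6")
print(x * 100)             # 123460 -- exact
print(x * 100 == 123460)   # True
```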


No, floating point NUMBERS are not approximations.
(At least not in IEEE/IEC floating point arithmetic.)

Floating point NUMBERS are exact rational numbers.
Floating point OPERATIONS are approximate.
It’s an important distinction to understand.
The problem with 1234.6 * 100 is not the 100 and
not the 1234.6. It’s with the CONVERSION operation
you didn’t see because it’s invisible:
decimal_to_binary("1234.6") * integer_to_float(100)
There’s an exact decimal number which is converted
to a binary floating point number and it is that
conversion where things went wrong. In IEEE arithmetic
operations are supposed to deliver results which are
as close as possible to the right answer while being
representable as binary floating point numbers.
The multiplication is also approximate.

Why is it important to understand that it’s the
operations, not the numbers, that are approximate?
Because if you’re trying to do numerical analysis,
it’s the operations you have to keep track of (and
beware of!)
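To see the exact number that the conversion actually produced, Python's `decimal` module will display it without further rounding (shown for 0.1 here; the same idea applies to 1234.6):

```python
from decimal import Decimal

# Decimal(float) converts the binary float exactly, so this prints
# the exact rational value that the decimal->binary conversion produced.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

That long decimal is an exact number; it is the conversion operation that introduced the distance from 0.1.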


I normally use bc(1), not dc(1).
The bc(1) manual page in Linux is explicit that
“all numbers are represented internally in DECIMAL”
“all computation is done in DECIMAL”.
The dc(1) man page is not explicit about this, but
there are hints that it uses decimal internally
whatever the input and output radices.

Is that true? For example, you can type in 0.1 in the IEEE converter linked above. You do not get that number but rather 0.100000001490116119384765625. Yes, that number is an exact rational number, but it is not 0.1 and is instead an approximation to it. It’s a semantical argument to say they aren’t approximate.

Without getting into the nitty gritty details of IEEE floating point, I think it’s still best to at least think of them as approximations. That’s because with that mindset, you know to never directly compare them.
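A practical upshot of that mindset, sketched in Python: compare floats with a tolerance rather than with `==`.

```python
import math

# 0.1 + 0.2 and 0.3 are two different binary floats, so == fails...
print(0.1 + 0.2 == 0.3)              # False
# ...but they agree to within a relative tolerance.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```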


2 posts were split to a new topic: Is there an Erlang library to deal with Money?

Why is this so hard to understand?
Here is what happens when you type 0.1 into your IEEE converter:
x := "0.1" % this is a string
y := convert_decimal_string_to_IEEE_number(x) % this is a number
The number y IS NOT APPROXIMATE. It is what it is, purely and
exactly with no approximation anywhere.

What’s approximate is the OPERATION of converting from a
decimal string to a binary float. If you are using C or Erlang
&c you never have 0.1 as a number. It’s a string, which the
compiler converts to a float using an APPROXIMATE OPERATION.
The string x is exactly what it is and nothing else.
The number y is exactly what it is and nothing else.
y approximates the number that x would have been had x been a
number, but that is not because x is approximate; it is
because the conversion OPERATION is approximate.
Once more with feeling: y isn’t approximate; y is EXACT.
y is a CLOSEST exact binary float to 0.1 but that is not because
y is in itself approximate, but because the EXACT number y is as
close as you can get to 0.1 and still be representable as a
binary float.

There is another way to get an approximation to 0.1 and
that is 1.0/10.0 . Now in this case, converting the string
1.0 to binary float gives you an EXACT 1 (not an approximate 1)
and converting 10.0 to binary float gives you an EXACT 10 (not
an approximate 10) and the damage is done by the division
OPERATION. Exact 1 approximately divided by exact 10 gives
you an exact result that approximates 0.1 because the division
OPERATION is approximate. Not the numbers.
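Both of those approximate operations happen to be correctly rounded in IEEE doubles, so they land on the same nearest binary float; a quick check in Python:

```python
# Converting the literal "0.1" and dividing exact 1.0 by exact 10.0
# are two different approximate OPERATIONS, but both are correctly
# rounded, so they produce the same nearest binary float.
print(1.0 / 10.0 == 0.1)          # True
# The exact rational number that float actually holds:
print((0.1).as_integer_ratio())   # (3602879701896397, 36028797018963968)
```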

It may be worth pointing out that the current IEEE/IEC standard
covers decimal floating point numbers as well as binary ones,
that the current COBOL standard defines its arithmetic in terms
of decimal floats, that current z/Series and POWER computers
support decimal floats in their instruction sets, and that there
is a reference implementation of IEEE decimal floats in portable
C, that spreadsheets could be a great deal less confusing if they
used decimal floats, and that the REXX programming language has
used decimal floating point for a long time. Erlang could use
decimal floating point if anyone seriously wanted it to. (This
would also finally make sense of JSON, if anyone cared for JSON
to make sense.) The snag, from the point of view of Erlang
applications, is that x86, x86-64, and ARM processors do not
support decimal floats in hardware. Erlang is trading the
occasional surprise for people who don’t understand binary floats
for efficiency and compatibility with other programs and interfaces.
(Does the current version of ASN.1 allow for decimal floats?)
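Software decimal floating point is already within reach in several languages; Python's `decimal` module, for example, gives the answer the original poster expected:

```python
from decimal import Decimal

# Decimal floating point: 1234.6 is representable exactly, so the
# multiplication yields exactly 123460.0, with no binary surprise.
print(Decimal("1234.6") * 100)   # 123460.0
```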


I think you’re making a semantical argument, because I’m not seeing anything that shows I have a misunderstanding.

As far as I know, the number 0.1 cannot be exactly represented in IEEE floating point. Thus, any IEEE floating point number, which may be an exact number itself, that is used to represent or get close to 0.1 is an approximation to 0.1 by any remotely normal understanding of the word approximate.

To dance around the idea of what is or is not an approximation is a semantical argument. For example, in normal mathematics, 22/7 ≈ pi. To say that this isn’t an approximation because both the left-side and right-side numbers are exact representations of themselves is a weird way to describe the situation.

Maybe the best way to say this is that IEEE floating point numbers are approximations to what you would expect, but they themselves are exact numbers in the sense that they are not approximations of the number they directly encode/represent. However, I’m not sure how the latter is useful in day-to-day programming.

Semantic arguments are the most important arguments there are.
“x is an approximation to y” is not a property of x.
Such a statement may be consistently affirmed by one person
(for whose purposes x is close enough to y) yet denied by
another (for whose purposes x is NOT close enough to y).
Let’s take the number
0x1.999999999999a0p-4 (an EXACT number in base 2)
= 0.10000000000000000555 (how it prints in decimal).

There is no intrinsic sense in which it is an approximation
of 0.1 rather than an approximation of 0.10000000000000000333.
Indeed, there is no intrinsic sense in which it is not an
approximation to 0.3. The number is, in and of itself, an
EXACT value which represents itself and no other number.
What, if anything, it approximates depends on who is doing
the deciding and how much accuracy they need.
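That exact value can be inspected directly; in Python, for example:

```python
# The double nearest to 0.1, displayed exactly as hexadecimal
# floating point (same value as 0x1.999999999999a0p-4 above).
print((0.1).hex())   # 0x1.999999999999ap-4
# Round-tripping the hex form gives back exactly the same float.
print(float.fromhex("0x1.999999999999ap-4") == 0.1)   # True
```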

It is important to understand the distinction between
approximate NUMBERS and approximate OPERATIONS because you
need to understand where the errors are introduced in order
to deal with them sanely.

Decimal floating point arithmetic may perhaps make this
clearer. Consider the decimal floating-point value 0.1.
This number is exact. Even the most fanatical “binary floating
point numbers are approximate” believer will accept that
the decimal floating point number 123.4567df is exactly
123.4567. However, _Decimal32 x = 123.4567df; x*x delivers
an approximate result for the square of x, not because x is
approximate but because it isn’t and * IS.
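The same demonstration works in Python's `decimal` module, which lets you set the context precision to the 7 significant digits of _Decimal32:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7            # 7 significant digits, like _Decimal32
x = Decimal("123.4567")          # exact: no representation error at all
print(x * x)                     # 15241.56 -- the OPERATION rounded;
                                 # the exact square is 15241.55677489
```

The number x is exact; only the multiplication had to round.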

It’s a very simple straightforward model that won’t mislead you:
floating point NUMBERS are exact,
floating point OPERATIONS are not,
including text <-> number conversions.
What counts as an approximation depends on who is doing the
counting and why.

There are other computer arithmetic systems including
interval arithmetic, triplex arithmetic, and unums, where
different judgements might be made, but Erlang does not
support any of those.


" in normal mathematics, 22/7 ≈ pi"
Not just no, but "HELL no!"

I once had the pure joy of debugging someone else's program
where the source code used a 64-bit approximation to pi,
while the library used an 80-bit approximation. Failure to even
converge was the least of it.

Floating point arithmetic is not normal mathematics (real-number
mathematics), and pretending it is will get you into real trouble.