I’m sorry, but all this “it isn’t useful” “there is no correct answer” “it makes no sense” stuff is not making a good impression on me. Floating-point arithmetic is not real number arithmetic and cannot be. Whatever we do, it’s going to behave differently from real arithmetic.
Let’s stipulate for the sake of argument that integer and rational arithmetic bounded only by the capacity of memory is usually the right tool for the job, and that quiet truncation, clamping, clipping, etc. are almost always about as bad a footgun as you can find. We recall that Dijkstra commented that
- we program for an unbounded machine
- if our program runs on a sufficiently large machine, we get the right answers
- if our program runs on an insufficiently large machine, we get wrong answers
  (and we may get them fast, but what’s the point of that?)
- if our program runs on a HOPEFULLY sufficiently large machine, we either
  get the right answers or we get an apology from the machine
so we really want our machines to respond to integer overflow with exceptions.
(I briefly had the pleasure of programming in C on a MIPS machine where integer overflow did raise an exception, and it was a real joy to be able to trust the answers.)
We must also recognise that there is no shortage of problems, from trigonometry to (probability) significance testing, where integers and rationals are NOT the right tool for the job; and that while it may be possible to tackle them using fixed point arithmetic with manual scaling, that is unspeakably difficult to get right and to maintain. There have been many proposals for rivals to floating-point, of which I’d say that Gustafson’s unums and posits (see the book “The End of Error”) are to me the most impressive, but in terms of what we can buy today, aut punctum volans aut nihil: floating point or nothing.
It’s also worth digging into the meaning of the word “infinite”. It doesn’t necessarily mean a specific value, it simply means “not bounded”. A bit pattern that means “whatever this value is, it is definitely too big for me to put a finite bound on it” is etymologically entitled to the term “infinite”. If I compute 1.0e200 * 1.0e200, the fact that the computer cannot represent it as an IEEE/IEC double means that “definitely too big for me to put a finite bound on it” is an entirely accurate description of the result. “isinfinite(X)” is unfortunately named, but once you realise that it just means “istoobig(X)” it’s quite unobjectionable.
Now the thing is that IEEE/IEC arithmetic is specified. It cannot be specified to be consistent with real number arithmetic. Even if a computer had infinitely many bits of storage ($\aleph_0$) that would still be infinitely too few for almost all real numbers. So there are two questions. Making sense: is IEEE/IEC arithmetic consistent in its own terms? Usefulness: can the IEEE/IEC operations give us a sufficiently good approximation of real number arithmetic, for a sufficiently wide range of problems, for it to be worth vendors providing it and some programmers learning how to use it carefully?
The answers are yes, and yes.
BUT butchering the standard without having a thorough understanding of why the standard is the way it is means that doing floating point calculations in Erlang means having to follow different rules from floating-point in modern hardware and popular modern programming languages. As I wrote earlier, it feels like having to program in the 1970s.
Of course it could be worse. In a Lisp system I sometimes use, whether floating-point overflow triggers an exception or returns an infinity depends on a remotely set flag (worse still, a flag set in the debugger!).
You may legitimately say that IEEE/IEC special values don’t make sense to you.
That’s a paraphrase of “I don’t understand them” or “I don’t like them.”
They make sense. I’m not sure that I care for them myself.
But they are in the standard, they were designed and approved by floating-point experts,
and they’ve been implemented over and over, so they clearly make sense to some people who ought to know.
There is a hardware cost to providing the special values. Xerox Lisp Machines punted:
they copied the IEEE formats but not the IEEE semantics. Early SPARCs didn’t implement the special values in hardware; when one came up, the machine trapped to software emulation. I remember showing a student how to make his neural network code substantially faster by changing x += y*y to if (y > 1.0e-16) x += y*y. Modern hardware doesn’t have this performance bug.
So now consider some contrived code:
z = …;
y = 1/(z*z);
x += y;
And let’s imagine that z could get very big.
WITH infinities, this code is fine.
WITHOUT infinities, this has to be written as
z = …;
if (z < THRESH) {
    y = 1/(z*z);
    x += y;
}
And I have to look at EVERY SINGLE operation to see if it might overflow or underflow (yes, there are systems where underflow raises an exception, and yes, I’ve used them, and yes, it is no less of an error than overflow, but MY WORD it is painful).
Oh, let’s go back to ‘not continuing the computation’.
Suppose I have
sum += 1.0/ddot(N, xs, 1, xs, 1);
Or in Erlang terms,
dot(Xs, Ys) ->
    dot(Xs, Ys, 0.0).

dot([X|Xs], [Y|Ys], S) ->
    dot(Xs, Ys, S + X*Y);
dot([], [], S) ->
    S.
…
Sum1 = Sum0 + 1.0/dot(Xs, Xs).
Suppose there is a floating-point overflow somewhere in the calculation of <Xs,Xs>.
I don’t care where exactly it is and I don’t care if the loop completes before
telling me. In fact, continuing the calculation past the overflow will give me the
right answer: 1/<Xs,Xs> will be too small to represent and won’t make a difference.
I call this useful. If I want to do this, my code will be simpler due to the IEEE/IEC rules.