Rationals are if anything *harder* to implement than fixed

precision, but there is less disagreement about what they mean.

There are *some* semantic issues nonetheless.

Is 1/1 *exactly identical* to 1

or is it just numerically equivalent to it?

(Think of the fun you can have if you associate something

with 1/1 in a map, go looking for it with 1, and don’t find it!)
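For comparison, Python's fractions module answers both questions one particular way: it normalizes on construction (so 2/4 *is* 1/2) and makes numeric equality and hashing agree across types, so the map accident above cannot happen there — though 1/1 is still only equivalent to 1, not identical:

```python
from fractions import Fraction

# Fraction normalizes on construction: 2/4 is stored as 1/2.
assert Fraction(2, 4) == Fraction(1, 2)
assert Fraction(2, 4).numerator == 1 and Fraction(2, 4).denominator == 2

# 1/1 is numerically equal to the integer 1 and hashes the same,
# so a map entry stored under Fraction(1, 1) is found with 1...
d = {Fraction(1, 1): "found"}
assert d[1] == "found"

# ...even though the two values are not *identical* (types differ).
assert not isinstance(Fraction(1, 1), int)
```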

Is 2/4 *exactly identical* to 1/2

or are they just numerically equivalent?

Imagine the fun you can have when N and D are positive

integers and float(N/D) works but float((N*100)/(D*100))

doesn’t, thanks to floating-point overflow.
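Python's unlimited-precision integers make the point concrete (the magnitudes here are just illustrative): dividing the integers first gives the right float, but converting numerator and denominator to float separately overflows once they are scaled past the float range, even though the ratio itself is tiny.

```python
N, D = 10**400, 10**399     # the ratio is exactly 10, but N and D
                            # each exceed the double range (~1.8e308)

# Divide first: Python's int / int computes this correctly.
assert N / D == 10.0

# Convert first: float(N) overflows before the division happens.
try:
    float(N) / float(D)
    raise AssertionError("expected OverflowError")
except OverflowError:
    pass
```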

Semantic questions.

Should introducing rationals change the semantics

of N/D in expressions from its present “answer a float”

to the nicer “answer an integer or rational”?

Semantic questions.

As for integrating them with the built-in operations,

consider the present situation.

$(X)

where $ is a unary operation and X an expression.

There are four cases:

X is an immediate integer

X is a floating-point number

X is a bignum

X is something else.

A naive implementation allocates a new floating-point value in

the heap for every operation with a floating-point result.

HiPE used to work quite hard to avoid this.

Another approach, which works nicely on 64-bit machines, was

developed by Andres Valloud and yields very pleasant speedups

indeed: use 64-bit floating-point immediates, where you steal

a couple of bits out of the exponent for tagging, so that

extremely large numbers are boxed but most numbers are not.

Either way, avoiding float boxing as long as you can is a

good idea. Let’s ignore that for now.
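As a rough illustration of the exponent-stealing idea (a simplified model, not Valloud's actual encoding, and it ignores how the freed bits must be shared with pointer tags): rotate the IEEE-754 bit pattern left by 4, so the sign bit and the top three exponent bits land in the low nibble. Doubles whose top three exponent bits are 011 or 100 — magnitudes roughly between 2^-255 and 2^257, i.e. almost everything you meet in practice — stay immediate; everything else gets boxed.

```python
import struct

def bits(x):
    """Raw 64-bit IEEE-754 pattern of a double."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

MASK64 = 0xFFFFFFFFFFFFFFFF

def rotl4(w):
    return ((w << 4) | (w >> 60)) & MASK64

def rotr4(w):
    return ((w >> 4) | (w << 60)) & MASK64

def encode(x):
    w = rotl4(bits(x))
    # The low 3 bits now hold the top 3 exponent bits; 3 (=0b011)
    # and 4 (=0b100) are the exponents nearest the bias, i.e.
    # "ordinary" magnitudes.  Anything else must be boxed.
    return w if (w & 7) in (3, 4) else None

def decode(w):
    return struct.unpack("<d", struct.pack("<Q", rotr4(w)))[0]

assert decode(encode(3.14)) == 3.14      # ordinary doubles stay immediate
assert decode(encode(-1.0)) == -1.0      # sign doesn't matter
assert encode(1e300) is None             # extreme magnitudes get boxed
assert encode(0.0) is None               # so do zeros and denormals
```

Note that this sketch only shows the round trip; a real scheme also has to arrange that the immediate-float bit patterns never collide with the tags used for small integers and pointers.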

Are you going to do $(X) with in-line code when X is a bignum?

No, you aren’t. So your in-line code is going to look like

if X is a small integer

   compute Y from X

   if Y is too big, trap out

if X is a floating-point immediate

   compute Y from X

if X is anything else, trap out.

Here “trap out” might mean “call a support function using the

cheapest-available calling method” or “invoke a user trap

handler”, whatever works well on your architecture.

Now the trap handler redoes the case analysis, and since there

is only one copy of it, it’s no harder to add cases for ratios

and fixed precision than it was to add cases for bignums and

boxed floats.
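Here is a toy Python model of that shape — the fixnum range, the choice of negation as the operation, and Fraction standing in for ratios are all made up for illustration. The "in-line" part checks only the two common tags; everything else falls into one shared slow path, which is the only place that ever needs to learn about bignums and ratios:

```python
from fractions import Fraction

FIXNUM_MAX = 2**59 - 1          # assumed immediate-integer range
FIXNUM_MIN = -2**59

def trap_negate(x):
    # The one shared slow path: the only code that knows about
    # bignums, ratios, etc.  Adding a ratio case here costs the
    # fast path nothing.
    if isinstance(x, (int, Fraction)):
        return -x
    raise TypeError("bad argument to negate")

def negate(x):
    # "In-line" fast paths: small integer and float only.
    if type(x) is int and FIXNUM_MIN <= x <= FIXNUM_MAX:
        y = -x
        if not (FIXNUM_MIN <= y <= FIXNUM_MAX):
            return trap_negate(x)       # result too big: trap out
        return y
    if type(x) is float:
        return -x
    return trap_negate(x)               # anything else: trap out

assert negate(7) == -7
assert negate(2.5) == -2.5
assert negate(2**100) == -(2**100)                 # bignum: handled in the trap
assert negate(Fraction(1, 3)) == Fraction(-1, 3)   # ratio: one new case, one place
```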

Now consider @(X,Y), a binary operation on numbers.

You might do

if X and Y are small integers

   compute Z from X and Y

   if it’s too big, trap out

if X and Y are immediate floats

   compute Z from X and Y

   if it doesn’t fit, trap out

trap out

or you might do a 2x2 case analysis. Once again,

you are NOT going to place copies of bignum division

in-line. There are now (immediate integer, bignum,

ratio, fixed, immediate float, boxed float, other) = 7

cases for each operand, so 49 cases all up, but that

can be simplified and it’s in one place, so no big

problem.
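With actual tag bits the two-operand test can often be collapsed into one. Assuming a low-2-bit tag where 00 marks a small integer (an illustrative scheme, not any particular VM's), OR-ing the two words leaves the low bits 00 only when *both* operands are small integers:

```python
TAG_MASK = 3          # assumed: low 2 bits hold the tag
FIXNUM_TAG = 0        # assumed: 00 marks a small integer

def tag_fixnum(n):
    return (n << 2) | FIXNUM_TAG

def both_fixnums(xw, yw):
    # One test instead of two: the OR of the words has low
    # bits 00 only if both tags were 00.
    return (xw | yw) & TAG_MASK == FIXNUM_TAG

boxed_ptr = 0x1000 | 2    # pretend tag 10 marks a pointer to a box

assert both_fixnums(tag_fixnum(3), tag_fixnum(4))
assert not both_fixnums(tag_fixnum(3), boxed_ptr)
assert not both_fixnums(boxed_ptr, boxed_ptr)
```

If the test fails, control goes straight to the trap handler, where the full 7x7 analysis lives.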

The result is that adding ratios and fixed precision

does bloat up the case analysis in the trap handlers,

though not insuperably, but it *doesn’t* affect the size

of the in-line code or the time of the fast paths.

There are all sorts of fiddly details to worry about,

but it’s the kind of stuff I call “tedious rather than

difficult”. What’s needed, as always, is

- agreement on the SEMANTICS
- agreement on the UTILITY of the addition
- resources to do the work.