Re: infinities reformulated

On Sun, 19 Jun 2005, Aubrey Jaffer wrote:

> | It often happens in neural networks (read: my day job) that
> | being able to store a bunch of floats compactly (level-2
> | cache size) results in dramatic speedups, and in such cases
> | (in C) I use arrays of 32-bit floats rather than 64-bit
> | doubles.

> | But a couple of years ago, I had a (toy) project where I was
> | <clip>. And in that project, having 512-bit precise reals <clip>
> | was *NECESSARY*, since even with scaling, using "doubles" would
> | have lost crucial information in the underflow.

> Would weakening the "most precise" requirement to a recommendation
> improve Scheme as a platform for such arithmetics?

It's hard to know what to do.  No portable code relying on
particular float sizes can be written on the basis of R5RS.
The suggested change of weakening the requirement to a
recommendation would not enable such code, so the situation
for specialized calculations would not be improved.  But
I think maybe code like that *ought* to be the domain of
implementation-specific extensions rather than the scheme
standard.

Because I don't think that scheme ought to concern
itself overmuch with the underlying hardware representations,
I wouldn't like the specification of an exact floating
point representation to become part of the language
standard.  But I would like to be able to tell the
system what minimum precision I need and let it decide
what underlying representation it can use to most
economically and effectively meet that requirement.

It is, and ought to remain, an error for code to
*rely* on a particular roundoff or wraparound error
resulting from a hardware operation on a limited-precision
number, and therefore specifying an exact size rather
than a minimum size for inexact numbers is not "the
right thing."
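
(A contrived illustration of the kind of reliance I mean,
assuming IEEE 64-bit doubles: under that format (+ .1 .2)
rounds to exactly .30000000000000004, so

(= (+ .1 .2) .30000000000000004)          ==>  #t

but code that counts on getting that #t is counting on one
particular roundoff and will break under any other precision.)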

What I would *like* is to have a way to specify what
precision to use for inexact-number calculations in a
given (ideally dynamic, but given scheme's design more
properly lexical) scope.  I would like to be able to say

(with-precision 512 220 expr)

in order to let the compiler know that if at least 512 bits
of mantissa and 220 bits of exponent are retained for inexact
calculations, expr (whether a single number, or a function
call) will not result in an intolerably erroneous result.
The system, if capable, may allocate and use inexact numbers
of that precision or higher, or evaluate the expr using only
exact numbers; otherwise it must report a violation of an
implementation restriction.  And likewise, if I say

(with-precision 10 6 expr)

it would be a promise that 10 bits of mantissa and 6 of
exponent are enough to get results tolerable for my purposes
and the compiler could use 32-bit floats, or even 16-bit
floats of the suggested format, if the hardware and compiler
happen to support exactly that.  But if it happens to be
a martian architecture that uses words of 27 ternary trits
instead of 32 binary bits, that would be okay too as long
as it were capable of *at least* that precision.
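
Just to sketch what I am imagining (only a sketch: the
float-formats table, the precision-available? helper, and the
use of SRFI-23 error are things I am making up here, and a real
implementation would also have to make the chosen format
actually govern the inexact arithmetic inside expr),
with-precision could expand into a capability check roughly
like this:

;; Hypothetical table of the (mantissa-bits exponent-bits)
;; formats this implementation supports.
(define float-formats
  '((24 8)       ; e.g. 32-bit floats
    (53 11)      ; e.g. 64-bit doubles
    (113 15)))   ; e.g. 128-bit quads

;; #t if some supported format meets or exceeds both requirements.
(define (precision-available? mant-bits exp-bits)
  (let loop ((fs float-formats))
    (cond ((null? fs) #f)
          ((and (>= (car (car fs)) mant-bits)
                (>= (cadr (car fs)) exp-bits))
           #t)
          (else (loop (cdr fs))))))

(define-syntax with-precision
  (syntax-rules ()
    ((with-precision mant-bits exp-bits expr)
     (if (precision-available? mant-bits exp-bits)
         expr
         ;; error as in SRFI-23
         (error "implementation restriction: precision unavailable"
                mant-bits exp-bits)))))

With such a definition, (with-precision 512 220 expr) either
runs expr or reports an implementation restriction, as
described above; the hard part I have left out is making the
declared precision drive the representation chosen for the
inexact numbers inside expr.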

In this system we wouldn't have to worry about comparisons
between inexact numbers of different precision, because
being in the same scope, all inexact numbers would have
the same precision.  But you could still use the precision
you actually need for your calculations and not have the
system wasting resources with too much precision where it's
not needed.  And it insulates code from the hardware well
enough that future systems, no matter how strange or
unexpected, are required neither to simulate the roundoff
errors of older systems nor to use restricted representations
where doing so would slow them down.

That's what I'd like.  But is it reasonable to require it?
I dunno.  Maybe it's proper SRFI material, as long as
everything else under the sun in terms of numeric fixes is
being proposed.