
Re: infinities reformulated

On Thu, 16 Jun 2005, Aubrey Jaffer wrote:

> Can you give an example of a calculation where you expect
> that choosing a reduced precision will reap a large
> benefit?

Reducing precision below the smallest float size the
hardware supports is rarely useful, even for binary tricks,
but:

It often happens in neural networks (read: my day job) that
being able to store a bunch of floats compactly (level-2
cache size) results in dramatic speedups, and in such cases
(in C) I use arrays of 32-bit floats rather than 64-bit
doubles.  Since R5RS strongly recommends "precision equal to
or greater than the most precise flonum format supported by
the hardware," and further because in scheme I can't in
general rely on a particular hardware representation without
indirections, tag bits, and other encapsulating structures
which will blow the cache, I can't really do this in R5RS
scheme.  I can do it using implementation-specific
extensions in Chicken and Bigloo, and I can do it in Stalin,
another Lisp-1 dialect that's largely similar to scheme.
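
A minimal sketch of what that looks like, assuming an
implementation with SRFI-4 homogeneous numeric vectors
(which Chicken provides as an extension; the form used to
load it varies by implementation and version):

    ;; SRFI-4 f32vectors store unboxed 32-bit floats contiguously, so a
    ;; large weight table is half the size of a vector of doubles and is
    ;; much friendlier to the level-2 cache.
    ;; (e.g. (require-extension srfi-4) in older Chicken; systems differ.)

    (define weights (make-f32vector 1000000 0.0))  ; ~4 MB, not ~8 MB

    ;; Scale every weight in place, staying in the packed representation.
    (define (scale-weights! v factor)
      (do ((i 0 (+ i 1)))
          ((= i (f32vector-length v)))
        (f32vector-set! v i (* factor (f32vector-ref v i)))))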

But a couple of years ago, I had a (toy) project where I was
simulating orbits several centuries into the future in a
game where the objective was to get a hypothetical
spacecraft from L3/Earth to Pluto using only 100 m/s of
delta-vee plus orbital mechanics.  And in that project,
having reals with 512 bits of precision (thanks to Chicken,
which could be recompiled with an alternate real precision)
was *NECESSARY*, since even with scaling, using "doubles"
would have lost crucial information to underflow.  Of
course, it took a long, long time to find a good solution,
but search strategies were what the game was about.
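
A toy illustration of that kind of loss (not the actual
orbit code; the magnitudes are only plausible stand-ins): in
a 64-bit double, a contribution some seventeen orders of
magnitude smaller than the quantity it perturbs vanishes
outright when added.

    ;; Toy example: doubles carry roughly 16 significant decimal digits.
    (define position     1.4e12)   ; metres, roughly Saturn's orbital radius
    (define perturbation 1.0e-5)   ; a tiny accumulated nudge per step

    (= (+ position perturbation) position)  ; => #t -- the nudge is lost

    ;; With 512-bit reals the same sum keeps the perturbation, so small
    ;; delta-vee contributions can accumulate over centuries of steps.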

			Bear