This page is part of the web mail archives of SRFI 70 from before July 7th, 2015. The new archives for SRFI 70 contain all messages, not just those from before July 7th, 2015.
 | Date: Sun, 19 Jun 2005 22:46:33 -0700 (PDT)
 | From: bear <bear@xxxxxxxxx>
 |
 | On Sun, 19 Jun 2005, Aubrey Jaffer wrote:
 |
 | > | It often happens in neural networks (read: my day job) that
 | > | being able to store a bunch of floats compactly (level-2
 | > | cache size) results in dramatic speedups, and in such cases
 | > | (in C) I use arrays of 32-bit floats rather than 64-bit
 | > | doubles.
 |
 | > | But a couple of years ago, I had a (toy) project where I was
 | > | <clip>.  And in that project, having 512-bit precise reals <clip>
 | > | was *NECESSARY*, since even with scaling, using "doubles" would
 | > | have lost crucial information in the underflow.
 |
 | > Would weakening the "most precise" requirement to a recommendation
 | > improve Scheme as a platform for such arithmetics?
 |
 | It's hard to know what to do.  No portable code relying on
 | particular float sizes can be written on the basis of R5RS.

That sounds like the wrong goal!  What one wants for reduced-precision
floats is to write R5RS code which will run with reduced precision in
implementations which support it -- and run with R5RS (full) precision
in other implementations.  SRFI-63 provides this mechanism.

Calculating with 512-bit floats seems too esoteric to be a core
language feature; but it is reasonable that R6RS should not obstruct
implementations wishing to support it.

 | The suggested change of weakening the requirement to a
 | recommendation would not enable such code, so the situation
 | for specialized calculations would not be improved.

I am not understanding you here.  If "most precise" is a
recommendation (not a requirement), then an implementation could have
an extension where greater precision was available, but not used for
all calculations.

 | But I think maybe code like that *ought* to be the domain of
 | implementation-specific extensions rather than scheme itself.
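To illustrate the SRFI-63 mechanism alluded to above: SRFI-63 lets
code ask for a reduced-precision storage prototype (e.g. `A:floR32b'
for 32-bit IEEE reals), and an implementation that lacks that
precision substitutes the closest precision it does support, so the
same code runs everywhere.  A sketch, assuming a SRFI-63
implementation such as SLIB's (the variable names are illustrative):

```scheme
;; Ask for 512 reals stored in (at most) 32-bit IEEE format.
;; Implementations without 32-bit floats fall back to a wider
;; representation, and the code still runs unchanged.
(define weights (make-array (A:floR32b) 512))

(array-set! weights 0.125 0)   ; store 0.125 at index 0
(array-ref weights 0)          ; read it back
```

The point is that the precision request is advisory: the program
states what it can tolerate, and the implementation satisfies it as
exactly as it is able.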
PARI/GP (http://pari.math.u-bordeaux.fr/) has a numeric C-library
where the numeric precision is an argument to each transcendental
function.

 | Because I don't think that scheme ought to concern itself overmuch
 | with the underlying hardware representations, I wouldn't like the
 | specification of an exact floating point representation to become
 | part of the language standard.  But I would like to be able to tell
 | the system what minimum precision I need and let it decide what
 | underlying representation it can use to most economically and
 | effectively meet that requirement. ...

Declaring minimum precisions doesn't seem to play well with Scheme's
latent types.  What about latent precision?

"Precise-numbers" would have an accessible precision:

  (precision w)

which would return the precision of precise-number `w'.

Given precise-number arguments, numerical operations would return a
precise-number with precision dependent on the arguments, according
to rules which work most of the time (biased toward higher precision).

For those cases where the rules might not work as desired, there
would be a procedure:

  (in-precision w k)

which would return a precise-number having value near `w' and having
precision `k'.
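A toy model of the latent-precision idea, assuming SRFI-9 records;
everything here except `precision' and `in-precision' (the `pn+'
operation, the max-precision propagation rule, the record layout) is
an illustrative assumption, not part of any proposal:

```scheme
;; A precise-number carries its own precision (in bits of mantissa).
(define-record-type <precise-number>
  (make-precise value precision)
  precise-number?
  (value     pn-value)
  (precision precision))   ; (precision w) => precision of `w'

;; Operations propagate precision from their arguments; here the
;; rule is simply "take the higher precision", one plausible
;; instance of rules "biased toward higher precision".
(define (pn+ a b)
  (make-precise (+ (pn-value a) (pn-value b))
                (max (precision a) (precision b))))

;; (in-precision w k): a precise-number near `w' with precision `k'.
;; This sketch merely relabels the value; a real implementation
;; would round it to k bits.
(define (in-precision w k)
  (make-precise (pn-value w) k))
```

Under these rules, adding a 24-bit and a 53-bit precise-number yields
a 53-bit result, and `in-precision' is the escape hatch for the cases
where that default is not what the programmer wants.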