This page is part of the web mail archives of SRFI 70 from before July 7th, 2015. The new archives for SRFI 70 contain all messages, not just those from before July 7th, 2015.
 | Date: Sun, 19 Jun 2005 10:19:29 -0700 (PDT)
 | From: bear <bear@xxxxxxxxx>
 |
 | On Thu, 16 Jun 2005, Aubrey Jaffer wrote:
 |
 | > Can you give an example of a calculation where you expect
 | > that choosing a reduced precision will reap a large
 | > benefit?
 |
 | Reduction in precision beyond the level of a small float
 | size supported by the hardware is rarely useful, even when
 | performing binary tricks, but:
 |
 | It often happens in neural networks (read: my day job) that
 | being able to store a bunch of floats compactly (level-2
 | cache size) results in dramatic speedups, and in such cases
 | (in C) I use arrays of 32-bit floats rather than 64-bit
 | doubles.

Implementations supporting SRFI-47 or SRFI-63 can provide
arrays of 32-bit floats.

 | Since R5RS strongly recommends "precision equal to
 | or greater than the most precise flonum format supported by
 | the hardware," and further because in scheme I can't in
 | general rely on a particular hardware representation without
 | indirections, tag bits, and other encapsulating structures
 | which will blow the cache, I can't really do this in R5RS
 | scheme.

That is why SRFI-47 (and SRFI-63) was created.  A homogeneous
array can be stored as a contiguous block which has just one
tag identifying its type.  SCM and SLIB/Guile do this.

 | I can do it using implementation-specific extensions in
 | Chicken and Bigloo, and I can do it in Stalin, another
 | Lisp-1 dialect that's largely similar to scheme.

You could also do it with SCM or SLIB/Guile (using SRFI-47 or
SRFI-63).

 | But a couple of years ago, I had a (toy) project where I was
 | simulating orbits several centuries into the future in a
 | game where the objective was to get a hypothetical
 | spacecraft from L3/Earth to Pluto using only 100 m/s of
 | delta-vee plus orbital mechanics.
 | And in that project, having 512-bit precise reals (thanks to
 | Chicken which allowed itself to be recompiled with alternate
 | real precision) was *NECESSARY*, since even with scaling,
 | using "doubles" would have lost crucial information in the
 | underflow.  Of course, it took a long long time to find a
 | good solution, but search strategies for a good solution
 | were what the game was about.

Would weakening the "most precise" requirement to a
recommendation improve Scheme as a platform for such
arithmetics?