
Re: SRFI-77 with more than one flonum representation





On Jul 3, 2006, at 2:59 PM, William D Clinger wrote:

To put your argument into perspective, consider double
precision on the IA32.  That architecture's implementation
of double precision arithmetic is not quite correct

Will:

I presume you're talking about '387-style arithmetic with 80-bit extended-real registers. That architecture has no single-precision or double-precision operations at all; every operation acts on extended-precision (80-bit) values in extended-precision registers, so it has *no* "implementation of double precision arithmetic". The '387 is a correct implementation of IEEE 754 arithmetic, yet one that most programmers seem to be uncomfortable with.

I'm somewhat familiar with how gcc compiles floating-point code: the ia32 back end "lies" to the middle end that single- and double-precision registers and operations are available, and the various front ends "lie" to the programmer about what operations are available. I don't believe there is enough interest among the members of the gcc development community to "fix" this situation, especially since the SSE[23] floating-point computational models are faster on current CPUs and have a more "user-friendly" (as it were) conceptual model.
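
For what it's worth, here is a minimal C sketch of how this shows up at the source level (the variable names and the exact outcome are mine, not anything Will wrote): whether the comparison below succeeds depends on whether the compiler keeps the product at 80-bit precision in a register or rounds it to 53 bits by storing it, which in turn depends on the optimization level and on flags such as -ffloat-store or -mfpmath=sse.

    /* Illustrative only: with '387 code generation the product may be held at
     * 80-bit precision in a register while the stored double has been rounded
     * to 53 bits; whether the mismatch appears depends on the compiler,
     * optimization level, and flags such as -ffloat-store or -mfpmath=sse. */
    #include <stdio.h>

    double third;  /* external so the compiler cannot fold everything away */

    int main(void)
    {
        third = 1.0 / 3.0;            /* rounded to double when stored       */
        double product = third * 3.0; /* may be held at 80-bit precision     */

        if (product == third * 3.0)   /* re-computation may stay in a register */
            printf("consistent: both sides rounded the same way\n");
        else
            printf("mismatch: excess precision in the x87 registers\n");
        return 0;
    }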

While
it is possible to perform IEEE-754 double precision arithmetic
correctly on the IA32 architecture by writing every intermediate
result to memory,

This is not sufficient to simulate double x double -> double operations: pre- and post-scalings, as well as setting the mantissa precision to 53 bits, are necessary to eliminate double rounding and to compute possible denormal results correctly. Storing intermediate results to memory is sufficient to simulate single x single -> single operations.
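
Concretely, and only as a sketch (I'm assuming glibc's x86-specific <fpu_control.h> here, which is my choice of environment rather than anything in Will's message), setting the mantissa precision to 53 bits means rewriting the precision-control field of the '387 control word. Note that this narrows only the significand; the registers keep their extended exponent range, which is why the scalings are still needed to get denormal results right.

    #include <stdio.h>
    #include <fpu_control.h>

    /* Narrow the '387 precision-control field to 53-bit significands. */
    static void set_x87_double_precision(void)
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);        /* read the current control word         */
        cw &= ~_FPU_EXTENDED;  /* clear the two precision-control bits  */
        cw |= _FPU_DOUBLE;     /* select 53-bit (double) significands   */
        _FPU_SETCW(cw);        /* install the new control word          */
    }

    int main(void)
    {
        set_x87_double_precision();
        /* Significands are now rounded to 53 bits on every operation, but the
           registers still carry the extended exponent range, so denormal (and
           overflow) cases still need the pre- and post-scalings noted above. */
        printf("x87 precision control set to 53 bits\n");
        return 0;
    }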

Brad