
Re: Arithmetic issues




On Oct 28, 2005, at 2:36 PM, Alan Watson wrote:

* Should a minimum precision be required for fixnums or flonums?

In an implementation written in C (i.e., without access to the carry flag) running on a 32-bit processor, it might make sense to use 16-bit fixnums to make it easy to check for overflow and the need for bignums.

This is not really necessary. See "Hacker's Delight" by Henry S. Warren, who gives algorithms for detecting overflow in the basic arithmetic operations, for both signed and unsigned integers. In fact, this information is in chapter 2, which is online at

http://www.hackersdelight.org/

I encourage anyone who has ever needed to twiddle a bit or two to consider buying the book.

One caveat about the algorithms for signed overflow: In C, unsigned integer arithmetic that overflows is defined to wrap around modulo two to some power; the behavior of signed integer arithmetic overflow is undefined. (At the same time, all the computer architectures that I know of use the same instructions for signed and unsigned arithmetic, so in *fact* signed arithmetic overflow wraps around just like unsigned overflow does.) Some compilers exploit the fact that signed integer overflow is undefined to enable certain optimizations. This is certainly true for gcc 4.0 and higher, for which you should give the -fwrapv flag to tell the compiler that signed overflow wraps around. The algorithms in "Hacker's Delight" assume that signed integer overflow wraps around in this way.

The latest Gambit-C 4.0 beta generates safe inline fixnum operations using these algorithms to detect fixnum overflow.

Of course, in a sense, you have access to the carry flag because "long long" is at least 64 bits.

I'm no C expert, but somehow I doubt that this is true. (I first wrote "strictly true", but I guess it's either true or false.)

Brad