
Re: Multiple precisions of floating-point arithmetic

This page is part of the web mail archives of SRFI 77 from before July 7th, 2015. The new archives for SRFI 77 contain all messages, not just those from before July 7th, 2015.

On Feb 26, 2006, at 2:00 PM, bear wrote:

On Sun, 26 Feb 2006, Bradley Lucier wrote:

Then Colin Percival published his paper "Rapid multiplication modulo the sum and difference of highly composite numbers", which gives new bounds for the error in FFTs implemented in floating-point arithmetic.  This allows you to use FFTs to implement bignum arithmetic with inputs of size 256 * (1024)^2 bits in 64-bit IEEE arithmetic with proven accuracy.

This is a very interesting potential implementation technique.
Is there a URL for this article that someone who is not a member
of the American Mathematical Society can access?  Or a publication
we can find at a local print library?


Sorry, I didn't realize that the link was restricted to AMS members.  You can also get it at


The citation is

Colin Percival, Rapid multiplication modulo the sum and difference of highly composite numbers, Mathematics of Computation, Volume 72, Number 241, Pages 387-395, 2002.

The bignum implementation of Gambit-C uses this technique (hopefully correctly ;-).
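Gambit-C's actual implementation is in C and Scheme and uses Percival's error bounds to choose safe digit sizes; the following is only a toy Python sketch of the underlying idea (all function names here are my own, not Gambit-C's).  It multiplies two bignums by convolving their base-256 digit vectors with a floating-point FFT and rounding the convolution back to integers — the step whose correctness Percival's bounds guarantee, provided the inputs are not too large for the chosen digit size and precision.

```python
# Toy sketch of FFT-based bignum multiplication in 64-bit floating point.
# NOT Gambit-C's code; digit size and input sizes here are far below the
# limits that Percival's error analysis would actually permit.
import cmath

def fft(a, invert=False):
    """Iterative radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    a = list(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly passes.
    length = 2
    while length <= n:
        w_len = cmath.exp((2j if invert else -2j) * cmath.pi / length)
        for start in range(0, n, length):
            w = 1.0
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * w
                a[k] = u + v
                a[k + length // 2] = u - v
                w *= w_len
        length <<= 1
    if invert:
        a = [x / n for x in a]
    return a

def fft_multiply(x, y):
    """Multiply nonnegative integers x and y via a floating-point FFT."""
    # Split each number into base-256 digits, least significant first.
    dx = [(x >> (8 * i)) & 0xFF for i in range((x.bit_length() + 7) // 8 or 1)]
    dy = [(y >> (8 * i)) & 0xFF for i in range((y.bit_length() + 7) // 8 or 1)]
    n = 1
    while n < len(dx) + len(dy):
        n <<= 1
    fx = fft(dx + [0.0] * (n - len(dx)))
    fy = fft(dy + [0.0] * (n - len(dy)))
    conv = fft([u * v for u, v in zip(fx, fy)], invert=True)
    # Rounding back to integers: this is the step that error bounds like
    # Percival's must protect -- each entry has to be within 1/2 of the
    # exact convolution value for the result to be provably correct.
    digits = [int(round(c.real)) for c in conv]
    result = 0
    for i, d in enumerate(digits):
        result += d << (8 * i)   # exact integer arithmetic absorbs carries
    return result

a, b = 12345678901234567890, 98765432109876543210
assert fft_multiply(a, b) == a * b
```

For inputs this small the rounding is exact with huge margin; the point of Percival's paper is to prove exactly how large the operands can get (here, up to 256 * (1024)^2 bits with a suitable digit size) before the accumulated floating-point error can reach 1/2 and corrupt a digit.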