
Re: arithmetic issues




It's been an interesting discussion.  I started with some
well-defined prejudices and several barely-connected ideas,
and I've been refining them - discovering what I think
is the "right thing" - by agreeing and disagreeing with points
people have made.

First of all, I want to say explicitly that I agree with the
idea that Scheme (or any good lisp) ought to express algorithms,
not hardware implementation.  So I reject the idea that anything
in numerics should depend on a particular representation, IEEE
sanctioned or not.

But that doesn't mean we have infinite memory available to run
programs on, and there is a vital purpose to serve (a necessary
feature) in knowing how much computation to expend and space to
allocate for a particular operation or a particular result.

I was surprised by (and agree completely with) the suggestion that
there should be multiple different functions for addition (and other
operations) depending on what behavior you want.  Is it more important
to preserve exactness given exact arguments, or more important to
produce a result of a known and finite size?  Should the size depend
on the size of the arguments, or on the degree to which the arguments'
precision is known, or just be equal to some constant?  Would you
prefer modular arithmetic with some user-definable modulus that
happens to be ridiculously fast if the user chooses 2^32?  And so on.
These are all different functions.  We should probably give them names
and standard semantics and build specialized math libraries around
them.
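As a rough sketch of what I mean - the names here are hypothetical,
not a proposal - each contract could be a distinct procedure:

```scheme
;; exact+ : preserves exactness; the result may grow without
;; bound (bignums), so size depends on the arguments.
(define (exact+ a b) (+ a b))

;; make-mod+ : modular addition with a user-chosen modulus.
;; An implementation could make (make-mod+ (expt 2 32)) compile
;; down to a single machine add.
(define (make-mod+ m)
  (lambda (a b) (modulo (+ a b) m)))

(define add-mod-100 (make-mod+ 100))
;; (add-mod-100 70 45) => 15
```

Different names, different semantics, and a library could be built
around each family.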

We can even implement these math libraries in portable R5RS Scheme
code (assuming the existence of bignums, of course), noting drily that
the implementor can of course provide a much faster solution on most
current architectures.  This was like a light going on; it's just plain
obvious in retrospect.
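For instance, here is one possible portable sketch of 32-bit wrapping
signed addition built on bignums (the names are mine, not standard);
a native implementation would of course be a single instruction:

```scheme
;; Reduce an integer into the signed 32-bit range
;; [-2^31, 2^31 - 1], using bignum arithmetic underneath.
(define (wrap32 n)
  (let ((m (modulo n (expt 2 32))))
    (if (>= m (expt 2 31))
        (- m (expt 2 32))
        m)))

(define (add32 a b) (wrap32 (+ a b)))
```

The portable version defines the semantics; the fast version is an
implementation detail.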

					Bear