Re: straw-man [was Re: arithmetic issues]

On Sat, 21 Jan 2006, Paul Schlie wrote:

>- Upon more reflection, given that it's likely unreasonable to presume
>  that an <exact> implementation must (or even could) reliably support
>  infinitely precise calculations/representations, it must then support
>  finite-precision calculations, thereby necessitating a definition of
>  its overflow semantics, basically leaving a choice between modular
>  and saturating semantics.  While either may be considered reasonable,
>  it seems undisputed that modular semantics tend to be the simplest
>  and most natural default in most machine and/or SW implementations,
>  and they do not preclude the throwing of a recoverable overflow
>  exception if supported by the base implementation.
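
(For concreteness, the two overflow semantics being contrasted can be
sketched in portable Scheme.  The 16-bit width below is arbitrary, and
nothing here is anything SRFI 77 actually specifies:)

  (define width 16)
  (define modulus (expt 2 width))              ; 65536
  (define max-val (- (expt 2 (- width 1)) 1))  ; 32767
  (define min-val (- (expt 2 (- width 1))))    ; -32768

  ;; Modular semantics: reduce mod 2^width into the signed range.
  (define (wrap n)
    (let ((m (modulo n modulus)))
      (if (> m max-val) (- m modulus) m)))

  ;; Saturating semantics: clamp to the nearest representable bound.
  (define (saturate n)
    (cond ((> n max-val) max-val)
          ((< n min-val) min-val)
          (else n)))

  ;; (wrap 40000)     => -25536   (wraps around)
  ;; (saturate 40000) => 32767    (pins at the maximum)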

The thing is, I don't ever want it to be considered "wrong" for
someone to add #e1.23456 and #e6.54321 and get exactly #e7.77777.
I mean, that's what the numbers add up to, right?  And if the finite
representation chosen by an implementor uses powers of ten rather
than (or in addition to) powers of two, that's exactly what the
answer will be.  If someone is relying on this answer being inexact,
or on its exhibiting a particular numeric error based on a presumed
binary representation (i.e., relying on the operation expressed by
'+' to be some particular approximation of addition rather than
addition itself), then while they might be right for a lot of
particular implementations, they are wrong on first principles.
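
To be concrete, this is just the standard behavior of exact literals;
in any Scheme with exact rationals:

  ;; '#e' forces an exact reading, so '+' performs true rational
  ;; addition with no rounding anywhere:
  (+ #e1.23456 #e6.54321)
  ;; => 777777/100000, which is exactly #e7.77777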

Similarly, many of the computable reals have a finite, exact, and
reasonably short representation, although it may be a "tree"
representation involving logarithms, exponents, square roots, factors
of irrational constants like e and pi, etc.  Such representations are
used heavily by specialized mathematical applications like Macsyma,
although implementing the basic functions over them is something of a
pain, because people want correct answers.  No matter what
representation scheme you pick, there will be numbers you can't
represent exactly - but as a matter of principle, I never want to
forbid the "generic" operations on exact numbers from returning exact
results, period.

If an implementor has gone out of his or her way to build a system in
which, say, the result of log2(327) is exact, I say more power to
them, and I don't want to see a bunch of requirements that can *only*
be implemented effectively for IEEE-float-style representations.
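
Purely for illustration, such a "tree" representation might look like
tagged lists; the constructors below are invented names, not part of
any standard or of Macsyma's actual internals:

  ;; Hypothetical tagged-tree exact reals; all names are invented.
  (define (exact-log base n) (list 'log base n))  ; log_base(n), kept exact
  (define (exact-sqrt n)     (list 'sqrt n))      ; sqrt(n), kept exact

  ;; (exact-log 2 327) => (log 2 327): a finite, exact, and short
  ;; representation of log2(327), even though that value has no finite
  ;; binary or decimal expansion.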

I'd much rather see explicit operations like (ieee53 x), which
returns the closest inexact number to x among the numbers
representable as an IEEE float with a 53-bit mantissa, or
(ieee53! x), which mutates x, forcing it to be that number.  If an
implementation is concerned with speed, it is already using some such
format for all its inexact numbers, so these become the identity
function and a no-op respectively, get optimized out, and do not
interfere with speed.  If an implementation is more concerned with
correctness and uses Macsyma-like numbers, then at least ordinary
code is not subject to numerical errors caused by the choice of
format unless the programmer explicitly requests that choice, and
correctness does not suffer.
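
As a sketch of how cheap this can be, under the assumption that an
implementation's inexact numbers are already IEEE doubles (ieee53
itself is the proposal above, not an existing procedure):

  ;; Assuming inexact numbers are IEEE doubles with a 53-bit mantissa,
  ;; exact->inexact already rounds to the nearest representable double.
  (define (ieee53 x)
    (if (exact? x)
        (exact->inexact x)  ; round the exact value to the nearest double
        x))                 ; already a double, so this is the identity

  ;; (ieee53 1/3) => 0.3333333333333333 (printed form varies by system)

A Macsyma-style implementation would instead do real rounding work in
ieee53, and the mutating (ieee53! x) has no portable expression at
all, since standard Scheme numbers are immutable; both would have to
be implementation-internal.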

The abstract of SRFI-77 talks about the need for less variety and
freedom in numeric implementations.  Aside from the thought that the
numeric tower short of polar-complex numbers should be required
rather than recommended, I simply do not agree.  I see inexact
formats, especially where exact results are possible and
representable, as a source of mathematical errors, and I think it is
the implementor's responsibility, insofar as cleverness allows and
insofar as s/he cares about correctness, to produce a system that
limits mathematical errors to exactly those explicitly requested by
the programmer.

				Bear