Re: inexactness vs. exactness

On Mon, 8 Aug 2005, Paul Schlie wrote:

> Thanks, I guess my point/question was predominantly related to the
> observation that there seems often little need for truly "exact"
> values beyond theoretical geometry and/or combinatorial mathematics,
> which often themselves only require a determinable finite precision;

This is a point on which you're going to lose.  Combinatorial
mathematics has developed subfields called cryptography,
compression, and error-correcting codes, which are fundamental to
modern networking.  If you're doing any of those and you
round anything off, you lose.
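
For a concrete sketch (the numbers are just illustrative, and it
assumes an implementation with exact bignum integers): an RSA-style
computation needs every low-order bit of a large exact integer, while
a double keeps only about 53 significant bits:

  (define m (expt 3 100))              ; exact: a ~159-bit integer
  (define rounded (exact->inexact m))  ; inexact: roughly 5.15e47
  (= m (inexact->exact rounded))       ; => #f -- the low-order bits are gone
  (modulo m 97)                        ; exact and correct; the rounded value
                                       ; would almost certainly give a
                                       ; wrong residue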

> while simultaneously observing there's often broader need for more
> precise potentially "inexact" values than typically supported by
> double precision floating point implementations; so it just seemed
> that in practice that it may be more useful to define that "exact"
> values need only be as precise as necessary to support the exact
> representation of the integer values bounded by the dynamic range of
> the implementation's "inexact" implementation, and their
> corresponding reciprocal values in practice (as you've implied);

NACK!  If you have limited precision, and the limited precision
affects the answer, then the answer is inexact.  PERIOD.  There is no
such thing as "exact numbers limited in precision" to *ANY* limit of
precision.  Once you go beyond a limit of precision and round
something, you aren't talking about exact numbers anymore.  Exact
numbers are, by definition, *infinitely* precise.  You may be talking
about limiting the representation size of exact numbers, thereby
decreasing the size of the set of exact numbers you can represent; but
that's not the same thing.
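
The distinction is visible in Scheme itself (a small sketch; the
printed digits depend on the implementation, the exactness does not):

  (exact? 1/3)                   ; => #t   an exact rational
  (exact? (exact->inexact 1/3))  ; => #f   rounded to machine precision
  (exact->inexact 1/3)           ; => 0.3333333333333333 or similar --
                                 ;    a nearby number, no longer 1/3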

Infinite precision in finite memory arises when the number happens to
match our representational scheme very well: integers and ratios of
integers are infinitely precise values we can represent in finite
memory - but the finiteness of our memory means that, in any fixed
amount of space, we can represent only an infinitesimal fraction of
them.  Things work in practice because our usual calculations tend to
give results that are in the set of things we can represent; and when
they don't, we can either throw an error, if the last bit is critical,
or return an inexact number, if it isn't.
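
In Scheme terms, assuming a full numeric tower with exact rationals
(and noting that an exact result from (sqrt 4) is an implementation
courtesy rather than a guarantee):

  (/ 1 3)     ; => 1/3   exact: a ratio of integers is representable
  (* 1/3 3)   ; => 1     exact: the result stays in the representable set
  (sqrt 2)    ; => 1.4142135623730951 or so -- inexact, since no finite
              ;    exact representation of the square root of 2 exists
  (sqrt 4)    ; => 2 in implementations that notice the exact result,
              ;    2.0 in those that don't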

> thereby both providing a likely reasonably efficient "inexact" (aka
> double) and a likely reasonably precise corresponding "exact"
> representation,

I will say it again.  Exact numbers aren't "reasonably" precise.  They
are *exact*, which is to say "infinitely" precise.  You are arguing
for extended-precision inexact numbers, and I agree with you that
these are needed and useful - but to call them exact is to confuse the
issue and does not help.
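
Compare the familiar double-precision symptom with genuinely exact
arithmetic (printed results vary slightly across implementations):

  (+ 0.1 0.2)    ; => 0.30000000000000004  -- inexact, 53-bit doubles
  (+ 1/10 2/10)  ; => 3/10                 -- exact, no precision limit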

				Bear