Re: inexactness vs. exactness

On Sun, 24 Jul 2005, Alex Shinn wrote:

>If you take the idea that inexacts represent single real values,
>then all equations have to be qualified with "so long as the values
>and intermediate results remain within the precision the system
>provides."

Yes.  That's exactly what the "inexact" status described in
the standard means, and exactly how it's used.
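
For instance (my example, not from the earlier message, assuming an
implementation with exact rationals and binary floating point):

(= (+ 1/10 2/10) 3/10) => #t
;; exact: the identity holds unconditionally
(= (+ 0.1 0.2) 0.3) => #f
;; inexact: 0.1, 0.2, and 0.3 all round to nearby binary values,
;; and the rounding errors don't cancel

The second equation holds only "so long as the values and
intermediate results remain within the precision the system
provides" -- and here they don't.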

> If instead you assume that inexacts represent ranges, then the
> qualification instead becomes "all values within the range are
> indistinguishable."  In the above contradiction, x and y when
> represented on the computer are indistinguishable from 1.0,
> so without any steps at all you can conclude x = y = 1.0.
> This goes for any real number, not just 1.0.

But that's exactly what the above contradiction means.
x = y = 1.0; therefore there is no range here, period.
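
To make that concrete with a made-up pair (mine, assuming binary
doubles; the earlier message's x and y could be any values in the
same situation):

(define x (+ 1.0 1e-20))
(define y 1.0)
(= x y 1.0) => #t
;; 1e-20 is far below half an ulp of 1.0, so the addition rounds
;; straight back to 1.0; x and y are the same machine number, and
;; there is no interval of distinct values left to reason about.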

>    It is important to distinguish between the mathematical numbers,
>    the Scheme numbers that attempt to model them, the machine
>    representations used to implement the Scheme numbers, and
>    notations used to write numbers.
>
> Perhaps it is best to leave it this way and let individual people
> (and implementations) apply interpretations to those numbers
> as suits them.

Perhaps.  But there is a fundamental point to be made here: the
combination of base-10 external notation and base-2 internal
representation makes for some strange numeric properties.  Assuming
for a moment that =, >, and < compare strictly numeric values, and
also assuming that S and L denote base-2 representations of
different lengths (short and long floats), we get:

(= 1.1L1 1.1S1) => #f
(< 1.1L1 1.1S1) => #f
(> 1.1L1 1.1S1) => #t
;; 1.1 isn't representable in base 2, so extending it to the long
;; representation gives a different number on systems where the
;; underlying representation is base 2.
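
One way to see the rounding directly (my example, assuming an
implementation whose inexact->exact returns the exact value of the
underlying binary representation):

(inexact->exact 1.1) => 2476979795053773/2251799813685248
;; not 11/10: the nearest double to 1.1 is already slightly above
;; 11/10, and the nearest short float is a different approximation
;; again.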


(= 1.25L1 1.25S1) => #t
(< 1.25L1 1.25S1) => #f
(> 1.25L1 1.25S1) => #f
;; But 1.25 is representable, so in both the long and the short
;; representation these are literally the same numeric value.
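
For contrast (under the same assumption about inexact->exact):

(inexact->exact 1.25) => 5/4
;; 1.25 is a dyadic rational (1 + 1/4), so the short and the long
;; binary formats both hold exactly the same value.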

On the other hand, if the system uses BCD, rationals, or some
equivalent to store both the L and S formats, you get consistency:

(= 1.25L1 1.25S1) => #t   (= 1.1L1 1.1S1) => #t
(< 1.25L1 1.25S1) => #f   (< 1.1L1 1.1S1) => #f
(> 1.25L1 1.25S1) => #f   (> 1.1L1 1.1S1) => #f
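
In effect such a system compares the exact values the external
notation denotes, something like this (my sketch, not anything the
SRFI specifies):

(= 11/10 11/10) => #t
(< 11/10 11/10) => #f
(> 11/10 11/10) => #f
;; Both literals denote the same exact rational, so the predicates
;; must agree regardless of how long the internal format is.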

So the behavior of the comparison predicates depends on the internal
representation, on how well it matches the external notation, and on
what kind of rounding was done to represent the result of a
calculation.

I think the idea of "neighborhoods" was mainly an excuse to give these
operations semantics that somebody thought were desirable when comparing
numbers of different internal representations.  But I don't think that
covering up numeric differences that are artifacts of the representation
system is actually desirable.  The problem is that it will also cover up
numeric differences which are not artifacts of the representation
system, including differences which will lead to different results in
subsequent calculations.
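
To make that last problem concrete, here is a hypothetical fuzzy
comparison (fuzzy=? and its tolerance are my invention, not anything
proposed in the SRFI) together with a follow-on calculation where the
hidden difference matters:

;; Treat numbers closer than some tolerance as "the same".
(define (fuzzy=? a b) (< (abs (- a b)) 1e-9))

(define x 1.0)
(define y (+ 1.0 1e-12))

(fuzzy=? x y) => #t
;; the difference is covered up
(= x y) => #f
;; but it is a genuine numeric difference
(/ 1.0 (- y x))
;; => roughly 1e12, not an error: the "invisible" difference between
;; x and y completely determines the result of the later calculation.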

The current standard states that comparison operations are not reliable
on inexact numbers.  This is demonstrably true.  Until someone has a
truly brilliant idea about how to deal with inexact control flow, I
think we should probably leave it at that, because there is a precipice
here; the need to return a sensible value from a comparison predicate
on inexact numbers creates a compelling case for inexact booleans.
But the semantics of control structures like if and cond when dealing
with inexact booleans are at best murky, until that truly brilliant
idea comes along.

				Bear