Re: inexactness vs. exactness



> From: Alex Shinn <alexshinn@xxxxxxxxx>
> William D Clinger <will@xxxxxxxxxxx> wrote:
>> Suppose (for a contradiction) that inexact numbers do denote
>> neighborhoods.  Then let [x, y] be the neighborhood denoted
>> by the inexact number 1.0.  If 0 < x <= y, then the inexact
>> number (* 1.0 1.0) denotes [x*x, y*y].  If (* 1.0 1.0)
>> evaluates to 1.0, then 1.0 denotes both [x, y] and [x*x, y*y],
>> hence x = x*x and y = y*y.  Therefore x = 1.0 = y, so under
>> our assumptions, the inexact number 1.0 really denotes only
>> itself.
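
Clinger's fixed-point step can be checked directly in IEEE-754 double
arithmetic; a small Python sketch (the helper name is illustrative):

```python
import math

# On the positive reals the only fixed points of squaring are 0 and 1,
# so an interval [x, y] with [x*x, y*y] == [x, y] and 0 < x must
# collapse to [1.0, 1.0]; the helper below is illustrative.

def is_squaring_fixed_point(x):
    """True when x * x == x in IEEE-754 double arithmetic."""
    return x * x == x

assert 1.0 * 1.0 == 1.0                 # (* 1.0 1.0) evaluates to 1.0
assert is_squaring_fixed_point(1.0)
below = math.nextafter(1.0, 0.0)        # largest double below 1.0
above = math.nextafter(1.0, 2.0)        # smallest double above 1.0
assert not is_squaring_fixed_point(below)
assert not is_squaring_fixed_point(above)
```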
> 
> Does it even make sense to make this kind of comparison?  It
> seems you're knocking down the straw man that is limited precision
> floating point.  One can demonstrate all kinds of contradictions if
> you compare computer math with real math, regardless of what
> theoretical basis you're using.
> 
> If you take the idea that inexacts represent single real values,
> then all equations have to be qualified with "so long as the values
> and intermediate results remain within the precision the system
> provides."
> 
> If instead you assume that inexacts represent ranges, then the
> qualification instead becomes "all values within the range are
> indistinguishable."  In the above contradiction, x and y when
> represented on the computer are indistinguishable from 1.0,
> so without any steps at all you can conclude x = y = 1.0.
> This goes for any real number, not just 1.0.
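
The collapse Alex describes can be illustrated with exact rationals
rounded to doubles (Python's Fraction/float standing in for exact and
inexact):

```python
from fractions import Fraction
import math

# Exact rationals strictly inside the neighborhood of 1.0 round to the
# double 1.0: on the machine they are indistinguishable from 1.0 itself.
half_ulp = Fraction(math.ulp(1.0)) / 2            # exactly 2**-53
assert float(Fraction(1) + half_ulp / 2) == 1.0   # just above 1.0
assert float(Fraction(1) - half_ulp / 4) == 1.0   # just below 1.0
```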
> 
> As the introduction to numbers in R5RS states
> 
>     It is important to distinguish between the mathematical numbers,
>     the Scheme numbers that attempt to model them, the machine
>     representations used to implement the Scheme numbers, and
>     notations used to write numbers.
> 
> Perhaps it is best to leave it this way and let individual people
> (and implementations) apply interpretations to those numbers
> as suits them.

Given that in practice both exact and inexact values are presumed to
represent finite values to some bounded practical precision, might it
be reasonable to roughly define all numerical values as "abstractly
infinitely precise" values "rounded" to the precision defined by their
respective implementations?  (It is presumed here that the retained
precision of exact values will be at least as great as that of inexact
values, and ideally that both will have equivalent bounds on their
representable magnitudes, so that +- inf may represent the bounds of
either representation equivalently.)

Thereby in effect, abstractly:

 <exact-value> :: (precise->exact <some-value>)

 <inexact-value> :: (precise->inexact <some-value>)

and correspondingly:

 (<function> <some-value>) => <exact-value> ::
 (precise->exact (<precise-function> <some-value>)) => <exact-value>

 (<function> <some-value>) => <inexact-value> ::
 (precise->inexact (<precise-function> <some-value>)) => <inexact-value>
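
A rough Python model of this abstraction, taking arbitrary-precision
rationals as the "precise" values; the helper names mirror the post's
precise->exact / precise->inexact and are hypothetical:

```python
from fractions import Fraction

# "Precise" values are modeled as arbitrary-precision rationals.  An
# implementation's exact type keeps them as-is (or rounds at some much
# finer precision), while the inexact type rounds them to a double.

def precise_to_exact(v: Fraction) -> Fraction:
    return v                        # here: exact retains full precision

def precise_to_inexact(v: Fraction) -> float:
    return float(v)                 # round to nearest IEEE-754 double

def precise_square(v: Fraction) -> Fraction:
    return v * v                    # stands in for <precise-function>

v = Fraction(1, 3)
# (<function> v) => <exact-value>
assert precise_to_exact(precise_square(v)) == Fraction(1, 9)
# (<function> v) => <inexact-value>
assert precise_to_inexact(precise_square(v)) == float(Fraction(1, 9))
```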

thereby again abstractly:

 (<= (- <some-value> (exact-interval-precision- <some-value>))
     <some-value>
     (+ <some-value> (exact-interval-precision+ <some-value>)))
  => #t

where the exact-interval-precision for any value at or beyond an
implementation's magnitude bounds is itself infinite; thereby all
abstractly precise values of greater magnitude are considered
equivalent, regardless of whether they are represented as exact or
inexact values.
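
A Python sketch of this bracketing, with half a ULP standing in for
exact-interval-precision-/+ on finite doubles and an infinite interval
at the magnitude bounds (the helper is illustrative):

```python
import math
from fractions import Fraction

# For a finite double v, [v - ulp/2, v + ulp/2] contains every real
# that rounds to v; at or beyond the magnitude bounds the precision
# interval becomes infinite, as described above.

def precision_interval(v: float):
    if math.isinf(v):
        return (float("-inf"), float("inf"))    # infinite at the bounds
    h = Fraction(math.ulp(v)) / 2
    return (Fraction(v) - h, Fraction(v) + h)

lo, hi = precision_interval(1.0)
assert lo <= Fraction(1) <= hi
assert precision_interval(float("inf")) == (float("-inf"), float("inf"))
```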

  exact:     #e-1/0  ..    #e-0/1 0 #e+0/1   ..  #e+1/0
                |||||||||||||||||||||||||||||||||||||
precise: -1/0        ..      -0/1 0 +0/1     ..        +1/0
                |  |  |  |  |  |  |  |  |  |  |  |  |
inexact:     #i-1/0  ..    #i-0/1 0 #i+0/1   ..  #i+1/0

(where I understand that the notion of an exact value being
 potentially imprecise is a bit of a paradox, but personally
 I'd rather be able to define pi, for example, to a potentially
 much greater, albeit finite, precision as an exact value than
 be limited to the precision of an inexact floating-point
 representation.)
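
For instance, with Python's exact rationals standing in for Scheme
exacts, pi can be carried to far more digits than a double retains;
the 50-digit rational below is an illustrative constant:

```python
from fractions import Fraction
import math

# A double carries roughly 16 significant digits of pi; an exact
# rational can be made as precise as desired.  pi_50 below is pi
# correctly rounded to 50 significant digits (illustrative constant).

pi_double = Fraction(math.pi)        # the exact value of the double
pi_50 = Fraction(
    31415926535897932384626433832795028841971693993751,
    10 ** 49)
# The double agrees with pi_50 only to within its own ULP, while
# pi_50 is far closer to the true pi than any double can be.
assert abs(pi_double - pi_50) < Fraction(math.ulp(math.pi))
```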