This page is part of the web mail archives of SRFI 70 from before July 7th, 2015.
On 7/24/05, William D Clinger <will@xxxxxxxxxxx> wrote:

> Suppose (for a contradiction) that inexact numbers do denote
> neighborhoods. Then let [x, y] be the neighborhood denoted
> by the inexact number 1.0. If 0 < x <= y, then the inexact
> number (* 1.0 1.0) denotes [x*x, y*y]. If (* 1.0 1.0)
> evaluates to 1.0, then 1.0 denotes both [x, y] and [x*x, y*y],
> hence x = x*x and y = y*y. Therefore x = 1.0 = y, so under
> our assumptions, the inexact number 1.0 really denotes only
> itself.

Does it even make sense to make this kind of comparison? It seems
you're knocking down the straw man that is limited-precision floating
point. One can demonstrate all kinds of contradictions by comparing
computer math with real math, regardless of which theoretical basis
you use.

If you take the view that inexacts represent single real values, then
every equation has to be qualified with "so long as the values and
intermediate results remain within the precision the system provides."
If instead you assume that inexacts represent ranges, then the
qualification becomes "all values within the range are
indistinguishable." In the contradiction above, x and y, as
represented on the computer, are indistinguishable from 1.0, so
without any steps at all you can conclude x = y = 1.0. This goes for
any real number, not just 1.0.

As the introduction to numbers in R5RS states:

    It is important to distinguish between the mathematical numbers,
    the Scheme numbers that attempt to model them, the machine
    representations used to implement the Scheme numbers, and
    notations used to write numbers.

Perhaps it is best to leave it this way and let individual people (and
implementations) apply whatever interpretation of those numbers suits
them.

-- 
Alex
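To make the range reading concrete, here is a sketch using exact rationals (the specific epsilon 10^-17 is my own illustrative choice; the expected results assume IEEE double-precision inexacts, where a half-ulp at 1.0 is 2^-54 below and 2^-53 above):

```scheme
;; Pick exact reals x and y slightly below and above 1, both inside
;; the set of reals that round to the double 1.0.
(define eps 1/100000000000000000)   ; 10^-17, smaller than a half-ulp at 1.0
(define x (- 1 eps))
(define y (+ 1 eps))

;; Both endpoints are indistinguishable from 1.0 once coerced to an
;; inexact representation...
(exact->inexact x)        ; => 1.0
(exact->inexact y)        ; => 1.0

;; ...and so are their exact squares, so (* 1.0 1.0) => 1.0 poses no
;; contradiction: [x, y] and [x*x, y*y] collapse to the same inexact.
(exact->inexact (* x x))  ; => 1.0
(exact->inexact (* y y))  ; => 1.0
```

That is, the step "hence x = x*x and y = y*y" only follows if the machine representation could tell those products apart, which it cannot.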