# Re: comparison operators and *typos

This page is part of the web mail archives of SRFI 73 from before July 7th, 2015. The new archives for SRFI 73 contain all messages, not just those from before July 7th, 2015.

```
> From: Aubrey Jaffer <agj@xxxxxxxxxxxx>
>  | Date: Mon, 27 Jun 2005 02:29:12 -0400
>  | From: Paul Schlie <schlie@xxxxxxxxxxx>
>  |
>  | I wonder if the following may represent a reasonable balance between
>  | existing assumptions/practice/code and the benefits of a virtually
>  | bounded reciprocal real number system:
>  |
>  |   1/0    ==  inf   ; exact sign-less 0 and corresponding reciprocal.
>  |   1/0.0  ==  inf.0 ; inexact sign-less 0.0 and corresponding reciprocal.
>  |   1/-0   == -inf   ; exact signed 0, and corresponding reciprocal.
>  |   1/-0.0 == -inf.0 ; inexact signed 0, and corresponding reciprocal.
>  |   1/+0   == +inf   ; exact signed 0, and corresponding reciprocal.
>  |   1/+0.0 == +inf.0 ; inexact signed 0, and corresponding reciprocal.
>  |
>  |   (where sign-less infinities ~ nan's as their sign is ambiguous)
>  |
>  | And realize I've taken liberties designating values without decimal points
>  | as being exact, but only did so to enable their symbolic designation if
>  | desired to preserve the correspondence between exact and inexact
>  | designations. (as if -0 is considered exact, then so presumably must -1/0)
>  |
>  | Thereby one could define that an unsigned 0 compares = to signed 0's to
>  | preserve existing code practices which typically compare a value against
>  | a sign-less 0. i.e.:
>  |
>  |  (= 0 0.0 -0 -0.0) => #t
>  |  (= 0 0.0 +0 +0.0) => #t
>  |
>  |  (= -0 -0.0 +0 +0.0) => #f
>
> The `=' you propose is not transitive, which is a requirement of R5RS.

- then alternatively one could define:

(= -0 -0.0 0 0.0 +0 +0.0) => #t

while retaining the remaining ordering relationships, as it seems
that the = and < relations need not be mutually exclusive?
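; Spelling out why the original proposal's = fails transitivity (a
; sketch of the proposed semantics, not of any existing implementation):
;
;   (= 0 -0)  => #t   ; unsigned 0 equals each signed 0
;   (= 0 +0)  => #t
;   (= -0 +0) => #f   ; yet the signed zeros differ: not transitive
;
; whereas under the alternative above all six zeros form one = class.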

>  | While preserving the ability to define a relative relationship between
>  | the respective 0 values:
>  |
>  |  (< 1/-0 -0 +0 1/+0) => #t
>  |
>  |  (<= 1/-0 1/-0.0 -0 -0.0 0 +0 +0.0 1/+0 1/+0.0) => #t
>  |
>  |  (= 1/0 1/0.0) => #t ; essentially nan's
>  |  (= 1/0 1/+0)  => #f ; as inf (aka nan) != +inf
>  |
>  | Correspondingly, it seems desirable, although apparently contentious:
>  |
>  |  1/0 == inf :: 1/inf == 0 :: 0/0 == 1/1 == inf/inf == 1
>
> Are you saying that (/ 0 0) ==> 1 or that (= 0/0 1)?

- sorry, actually meant:

1/+0 == +inf :: 1/+inf == +0 :: +0/+0 == 1/1 == +inf/+inf == 1

Where +-0 represent an infinitesimal 1/inf deviation about a pure 0;
where although the magnitude of inf is undefined, (= inf inf) => #t,
thereby all inf values are considered to be equally distant from 0;
and correspondingly the value of a multivariable function (f x y ...)
is equivalent to (f (+ x 1/inf) (+ y 1/inf) ...), i.e. the value of
the function where its variables are considered to be converging
equidistantly about their values; as related to the below:
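; The convergence model above can be sketched in ordinary Scheme by
; substituting a small epsilon for the hypothetical exact 1/inf (the
; names f and eps are illustrative only, not part of the proposal):

(define eps 1e-12)                        ; stand-in for 1/inf
(define (f x y) (/ (+ x eps) (+ y eps)))  ; f evaluated 'about' (x, y)

; (f 0 0) => 1.0, matching +0/+0 == 1 as defined above.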

> Mathematical division by 0 is undefined; if you return 1, then code
> receiving that value can't detect that a boundary case occurred.

- yes, as above; and corrected below for unsigned 0's and 0.0's:

1/0 == inf :: 1/inf == 0 :: 0/0 == inf/inf == ~1

where although inf is equivalent in magnitude to +/-inf,
its sign is undefined, thereby similar to nan; with
the exception that if one were to introduce the convention
that '~' may designate an ambiguous sign, then the result of
any division by inf or 0 may be considered to yield only
an ambiguous sign, although not necessarily an ambiguous
magnitude, in lieu of considering the value undefined, i.e.

inf => ~inf               ; either +inf or -inf
(* 3 (/ 0 0)) => ~3       ; either   -3 or   +3, thereby:
(abs (* 3 (/ 0 0))) => +3

(as this is how an implementation would behave if it considered
+-inf and +-0 its greatest and smallest representable but
non-accumulating values; which effectively enables calculations
to lose precision more gracefully, rather than falling off the
edge of the value system and potentially resulting in a run-time fault.)
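; The '~' convention itself could be modeled as a value tagged with an
; unknown sign; make-~, ~?, and ~abs are hypothetical names, purely a
; sketch of the convention described above:

(define (make-~ mag) (cons 'ambiguous mag))    ; ~mag: sign unknown
(define (~? v) (and (pair? v) (eq? (car v) 'ambiguous)))
(define (~abs v) (if (~? v) (cdr v) (abs v)))  ; abs resolves the sign

; (~abs (make-~ 3)) => 3, mirroring (abs (* 3 (/ 0 0))) => +3 above.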

>  | and (although most likely more relevant to SRFI 70):
>  |
>  |   x^y == 1
>  |
>  | As lim{|x|==|y|->0} x^y :: lim{|x|==|y|->0} (exp (* x (log y))) = 1
>  |
>  | As it seems that the expression should converge to 1 about the
>  | limit of 0; as although it may be argued that (log 0) -> -inf,
>  | it does so at an exponentially slower rate than its operand,
>  | therefore: lim{|x|==|y|->0} (* x (log y)) = 0, and lim{|x|==|y|->0}
>  | (exp (* x (log y))) = (exp 0) = 1; and although it can be argued that
>  | it depends on its operands' trajectories and rates, I see no valid
>  | argument to assume that its operands will not approach that limit
>  | at equivalent rates from equidistances,
>
> That would mean that the program was computing some variety of x^x.
>
> Let's look at some real examples.  FreeSnell is a program which
> computes optical properties of multilayer thin-film coatings.
> It has three occurrences of EXPT:
>
>   opticolr.scm:152: (let ((thk (* (expt ratio-thk (/ (+ -1 ydx) (+ -1 cnt-thk)))
>   opticolr.scm:173: (let ((thk (* (expt ratio-thk (/ (+ -1 ydx) (+ -1 cnt-thk)))
>
> These two are computing a geometric sequence of thicknesses.  It is an
> error if either argument to EXPT is 0.
>
>   opticompute.scm:131: (let ((winc (expt (/ wmax wmin) (/ (+ -1 samples)))))
>
> This one computes a ratio for a geometric sequence of wavelengths.  It
> is an error if either argument to EXPT is 0.
>
> There is also one occurrence of EXP, which computes the phase
> difference between reflected and/or transmitted paths:
>
>   fresneleq.scm:82:  (define phase (exp (/ (* +2i pi h_j n_j (cos th_j)) w)))
>
> Nearly all of the SLIB occurrences of EXPT have at least one literal
> constant argument.  In these cases, (expt 0 0) signaling an error
> would catch coding errors.  MODULAR:EXPT tests for a zero base (and
> returns 0) before calling EXPT.

- ??? The responsibility of an implementation's arithmetic system
is to be generically as correct and consistent as reasonably possible.
If slib chooses to optionally signal a runtime error for some arbitrary
set of argument values, that's its prerogative; but that should have
nothing to do with what the arithmetic value of (expt 0 0), or of any
other function, is most consistently defined as being. (All arithmetic
functions should always return values.)

Arguably, it may be just as significant an error for an application
to unexpectedly yield a negative result from the addition of two
numbers; it is the application's responsibility to check for that,
not to presume that + should signal an error upon producing a
negative result.

>  | which will also typically yield the most useful result, and tend
>  | not to introduce otherwise useless value discontinuities and/or
>  | ambiguities.
>
> Grepping through a large body of Scheme code found no use of EXPT
> where the two arguments are related.

- which has nothing to do with anything; functions should be considered
to be evaluated about static points:

i.e. (f x y) == (f (+ x ~1/inf) (+ y ~1/inf))

there's nothing special about 0, as any function may impose relative
trajectories on its arguments:

(define (f x y) (/ x (* y y y (- y 1))))

As such, the only consistent thing that an implementation can warrant
is that all primitive arithmetic expressions are evaluated equivalently
about the static values passed to them, independently of whether or not
those values have begun to lose precision due to the limited dynamic
range of the implementation's number system. Thereby, at least as a
function's arguments begin to lose precision, the function degrades in
precision correspondingly and consistently, rather than, after already
yielding relatively inaccurate results, deciding it doesn't know the
answer at all, or returning a value which is inconsistent with its
previous results. (Admittedly, in my opinion.)
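; For inexact arguments, implementations whose inexact numbers follow
; IEEE 754 already behave this way around the pole of the f defined
; above: precision degrades smoothly instead of faulting (the values
; shown are approximate):
;
; (f 1.0 1e-100) => about -1e300    ; huge but still finite
; (f 1.0 1e-200) => -inf.0          ; y^3 underflows to 0.0, so the
;                                   ; result overflows with its sign
;                                   ; preserved, rather than trapping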

> (expt 0 0) ==> 1 is one of the possibilities for SRFI-70.  But I am
> leaning toward the "0/0 or signal an error" choice to catch the rare
> coding error.

- Again, in just my opinion, I'd rather a function return the most
likely useful static value as a function of its arguments, than have
it try to pretend it knows something about the arguments passed to it
and potentially generate a runtime fault.
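; The limit claim is easy to check numerically with ordinary inexact
; expt (digits shown are approximate, per the usual rounding):
;
; (expt 0.1   0.1)   => ~0.794
; (expt 0.01  0.01)  => ~0.955
; (expt 0.001 0.001) => ~0.993
;
; i.e. x^x climbs toward 1 as x -> 0+; and indeed many implementations
; already return 1 for (expt 0 0).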

However, it does seem potentially useful to be optionally warned
whenever the precision of a primitive calculation drops below some
minimal precision; i.e. it's likely much more useful to know when a
floating point value is denormalized (as it means that the value no
longer has a representable reciprocal), or when an argument to an
addition is smaller than the represented precision of the other
operand, as these are the kinds of circumstances which produce
inaccuracies. By the time a value may underflow to 0, or overflow to
inf, in the hope that it gets trapped by some misguided function
implementation (which should have simply returned the correct value
based upon the arguments it was given, leaving the application to
check for what it believes is correct), it's already much too late;
regardless of whether some discontinuity in the implementation's
arithmetic system was tripped, the results of the calculation are
already compromised.
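; The denormalized case, in IEEE double precision (the constants below
; are standard IEEE 754 values, nothing specific to this thread):
;
; (/ 1.0 4.9e-324)  => +inf.0   ; smallest denormal: its reciprocal
;                               ; is no longer representable
; (/ 1.0 2.3e-308)  => about 4.3e307, still finite for a normal value
;
; which is exactly the point at which such an early warning would help.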

>  | Where I understand that all inf's are not strictly equivalent, but
>  | when expressed as inexact values it seems more ideal to consider
>  | +-inf.0 to be equivalent to the bounds of the inexact
>  | representation number system, thereby +-inf.0 are simply treated as
>  | the greatest, and +-0.0 the smallest representable inexact value;
>
> <http://srfi.schemers.org/srfi-70/srfi-70.html#6.2.2x> shows that
> inexact real numbers correspond to intervals of the real number line.
> Infinities corresponding to the remaining half-lines gives very clean
> semantics for inexact real numbers.  Infinitesimals (+-0.0) are a
> solution in search of a problem.

- only if it's not considered important that inexact infinities have
corresponding reciprocals; which seems clearly desirable, as otherwise
any expression which may overflow the dynamic range of the number
system can't preserve the sign of its corresponding infinitesimal
value. And if that's not considered important, then there's no reason
to have signed infinities either, etc.?
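; For comparison, IEEE-style inexact arithmetic does preserve this
; reciprocal correspondence between signed zeros and infinities:
;
; (/ 1.0 +inf.0) => +0.0      (/ 1.0 +0.0) => +inf.0
; (/ 1.0 -inf.0) => -0.0      (/ 1.0 -0.0) => -inf.0
;
; dropping +-0.0 would indeed forfeit this sign symmetry.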

>  | as +-1/0 and +-0 may be considered abstractions of exact infinite
>  | precision values if desired.
>  |
>  | However as it's not strictly compatible with many existing floating
>  | point implementations, efficiency may be a problem? (but do like
>  | it's simplifying symmetry).
>  |
>  |

```