
Re: comparison operators and *typos



> From: Aubrey Jaffer <agj@xxxxxxxxxxxx>
>  | Date: Mon, 27 Jun 2005 18:09:04 -0400
>  | From: Paul Schlie <schlie@xxxxxxxxxxx>
>  | 
>  | > From: Aubrey Jaffer <agj@xxxxxxxxxxxx>
>  | >  | Date: Mon, 27 Jun 2005 02:29:12 -0400
>  | >  | From: Paul Schlie <schlie@xxxxxxxxxxx>
>  | >  | ...
>  | >  | Thereby one could define that an unsigned 0 compares = to signed 0's
>  | >  | to preserve existing code practices which typically compare a value
>  | >  | against a sign-less 0. i.e.:
>  | >  | 
>  | >  |  (= 0 0.0 -0 -0.0) => #t
>  | >  |  (= 0 0.0 +0 +0.0) => #t
>  | >  | 
>  | >  |  (= -0 -0.0 +0 +0.0) => #f
>  | > 
>  | > The `=' you propose is not transitive, which is a requirement of R5RS.
>  | 
>  | - then alternatively one could define:
>  | 
>  |   (= -0 -0.0 0 0.0 +0 +0.0) => #t
>  | 
>  |   while retaining the remaining relationships, as it seems
>  |   that = and < relationships need not be mutually exclusive?
> 
> R5RS says:
> 
>   -- procedure: = z1 z2 z3 ...
>   -- procedure: < x1 x2 x3 ...
>   -- procedure: > x1 x2 x3 ...
>   -- procedure: <= x1 x2 x3 ...
>   -- procedure: >= x1 x2 x3 ...
>       These procedures return #t if their arguments are (respectively):
>       equal, monotonically increasing, monotonically decreasing,
>       monotonically nondecreasing, or monotonically nonincreasing.
> 
>       These predicates are required to be transitive.
> 
> Equal cannot be monotonically increasing.

- why not? I was under the impression that the transitivity requirement
  applies within a particular predicate, but does not necessarily apply
  across distinct predicates; the ordering which may exist for = need
  not apply to <=, nor are the ordered members of <= necessarily valid
  for =. (However, the context of the thought was to enable the
  comparison of 0 against other zeros as being = as a possible means of
  preserving existing code practices while allowing the introduction of
  alternate forms of 0 with more specific meanings.)
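
  For comparison, IEEE-754 arithmetic (sketched here with Python floats,
  purely as an illustration) takes exactly this route: all zeros compare
  equal, so = stays transitive, yet the sign information survives:

```python
import math

# all IEEE-754 zeros compare equal to one another, so a chained
# (= -0.0 0.0 +0.0) remains transitive:
zeros = [-0.0, 0.0, +0.0]
assert all(a == b for a in zeros for b in zeros)

# yet the sign is not lost; it can still be recovered when wanted:
print([math.copysign(1.0, z) for z in zeros])  # [-1.0, 1.0, 1.0]
```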

>  ...
>  | 
>  | > Mathematical division by 0 is undefined; if you return 1, then code
>  | > receiving that value can't detect that a boundary case occurred.
>  | 
>  | - yes, as above; and corrected below for unsigned 0's and 0.0's:
>  | 
>  |   1/0 == inf :: 1/inf == 0 :: 0/0 == inf/inf == ~1
>  | 
>  |   where although inf is equivalent in magnitude to +/-inf,
>  |   its sign is undefined, thereby similar to nan, with
>  |   the exception that if one were to introduce the convention
>  |   that '~' may designate an ambiguous sign, then the result of
>  |   any division by inf or 0 may be considered to only yield
>  |   an ambiguous sign although not necessarily magnitude, in
>  |   lieu of considering the value as undefined, i.e.
>  | 
>  |   inf => ~inf               ; either +inf or -inf
>  |   (* 3 (/ 0 0)) => ~3       ; either   -3 or   +3, thereby:
>  |   (abs (* 3 (/ 0 0))) => +3
> 
> So ~ generates an algebraic field extension attaching the roots of
> x^2=1.  Note that ~ is not a real number because it doesn't fit in the
> total ordering.

- yes, essentially ~ designates a value's sign-less magnitude, and
  arguably may be thought of as a set of numbers with equivalent
  magnitudes and differing signs; which I suspect could be extended to
  complex values such that the roots of x^2=-1 are 0~1i. (It could also
  be defined to have an ordering relative to other such sets, as well as
  within the set of real numbers: if considered to be a set of values,
  (< -4 ~2) for example would certainly be true, whereas (< -1 ~2) would
  likely not be, as (< -1 -2 +2) would not be valid.)
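
  A toy model of this reading (Python; the class and method names are
  invented for illustration) treats ~x as the set {-x, +x}, with an
  ordering claim holding only when it holds for every member of the set:

```python
# purely illustrative model of an ambiguous-sign magnitude ~x:
class AmbiguousSign:
    def __init__(self, magnitude):
        self.members = {-abs(magnitude), +abs(magnitude)}

    def certainly_greater_than(self, x):
        # true only if x is below every member of the set
        return all(x < m for m in self.members)

print(AmbiguousSign(2).certainly_greater_than(-4))  # True:  (< -4 -2 +2) holds
print(AmbiguousSign(2).certainly_greater_than(-1))  # False: (< -1 -2) fails
```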

>  |   (as this is how an implementation would behave if it considered
>  |    +-inf and +-0 its greatest and smallest represent-able but
>  |    non-accumulating values; which effectively enables calculations
>  |    to lose precision more gracefully, rather than falling off the
>  |    edge of the value system, potentially resulting in a run-time fault.)
> 
> Section 6.2.2 Exactness says:
> 
>   If two implementations produce exact results for a computation that
>   did not involve inexact intermediate results, the two ultimate
>   results will be mathematically equivalent.
> 
> So loss of precision must not be platform dependent; thresholds of
> "greatest and smallest represent-able" values can not affect
> precision.  Losing precision in calculation is an attribute of inexact
> numbers.

- yes, and I apologize for not following my own conventions again; the
  above should have referred to "+-inf.0 and +-0.0" as inexact values.

  (where +-inf would be an exact reciprocal of +-0, although having
   an undefined magnitude; just as the reciprocal of an exact 0 could
   be thought of as being equivalent to ~0, and having a reciprocal
   value of ~1/0 = ~inf)
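
  This matches how IEEE-754 behaves for the directions it defines
  (illustrated in Python; note Python itself refuses float division by
  zero, so only the inf-to-zero direction is shown directly):

```python
import math

inf = float("inf")

# the reciprocal of each signed infinity is the correspondingly
# signed zero, so the sign survives the trip down:
print(1.0 / inf)                       # 0.0
print(1.0 / -inf)                      # -0.0
print(math.copysign(1.0, 1.0 / -inf))  # -1.0

# Python raises ZeroDivisionError for 1.0 / -0.0, although IEEE-754
# hardware itself would complete the round trip and yield -inf.
```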

>  | > ...
>  | > Nearly all of the SLIB occurrences of EXPT have at least one
>  | > literal constant argument.  In these cases, (expt 0 0) signaling
>  | > an error would catch coding errors.  MODULAR:EXPT tests for a
>  | > zero base (and returns 0) before calling EXPT.
>  | 
>  | - ??? The responsibility of an implementation's arithmetic is to
>  |   be generically as correct and consistent as reasonably possible.
>  |   If slib chooses to optionally signal a runtime error for any
>  |   arbitrary set of argument values, that's its prerogative; but
>  |   that should have nothing to do with what the arithmetic value of
>  |   (expt 0 0) or any other function is most consistently defined as
>  |   being.
> 
> My point is that (expt 0 0) is unlikely to occur when EXPT is being
> used as a continuous function; its occurrences will be exponentiating
> integers.  In the integer context, arguments about limits of
> continuous functions are irrelevant.

- Typically it seems more broadly accepted that (expt 0 0) == 1,
  particularly for integers, where the subject of multivariate
  trajectories is irrelevant.

  Although I believe I do understand your point that most legitimate uses
  of (expt x y) will tend to have non-zero valued arguments, and see nothing
  wrong with the creation of libraries which may try to assist the debugging
  of code by optionally monitoring its arguments; I don't believe it's a
  good idea to presume any arithmetic function should yield an error in lieu
  of an arithmetic result by default.
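
  For what it's worth, the broad acceptance of (expt 0 0) == 1 is
  reflected in most systems exposing a power operation (Python shown
  here; C's pow is specified the same way in Annex F of its standard):

```python
# 0^0 yields 1 for exact and inexact arguments alike:
print(0 ** 0)      # 1
print(0.0 ** 0.0)  # 1.0
print(pow(0, 0))   # 1
```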

>  |   (all arithmetic functions should always return values).
> 
> 6.2.3 Implementation restrictions:
> 
>   If one of these procedures is unable to deliver an exact result when
>   given exact arguments, then it may either report a violation of an
>   implementation restriction or it may silently coerce its result to
>   an inexact number.
> 
> Always returning a value is a stronger requirement than R5RS or
> SRFI-70, which gives the implementation a choice between returning 0/0
> and signaling an error for (/ 0.0 0.0). Can you justify that mandate?

- only on the basis that errors/exceptions are by their nature disruptive
  to both the expressed (and presumably intended) code and the control
  flow of the program. Therefore all standard functions should be
  defined to yield values, which may be checked explicitly as desired
  against application-specific expectations within the application's
  code itself, or encapsulated in corresponding application-specific
  wrappers which may then throw exceptions and/or shape the results to
  the application's more specific needs and/or expectations. (But as
  noted above, I think it's reasonable to define a mechanism by which an
  implementation may optionally enable functions to report and/or throw
  warnings/errors, though not in lieu of their returning values by
  default.)
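
  The wrapper arrangement described above might be sketched as follows
  (Python, with all names invented): the primitive always returns a
  value, and an application-specific wrapper turns out-of-range results
  into exceptions only where the application asks for that behavior:

```python
import math

def checked(f):
    """Wrap f so that non-finite results raise instead of propagating."""
    def wrapper(*args):
        result = f(*args)
        if math.isnan(result) or math.isinf(result):
            raise ArithmeticError(f"{f.__name__}{args} -> {result}")
        return result
    return wrapper

def square(x):
    return x * x  # silently overflows to inf for very large x

safe_square = checked(square)
print(safe_square(3.0))  # 9.0
# safe_square(1e200) raises ArithmeticError, since 1e200 * 1e200
# overflows to inf, rather than silently returning inf
```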
  
> Do you consider QUOTIENT, MODULO, and REMAINDER arithmetic?

- yes, and I believe all should return values by default (even for
  invalid operands, where by default it could be defined that a function
  may return a <void> type or something similar, or signal an error when
  explicitly enabled to do so; which I know is a bit different)

  i.e. (car 3) => <void>
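
  A sketch of that default (Python; the signal_errors flag and the use
  of None for <void> are invented illustrations). Note that Scheme's
  quotient truncates toward zero, so Python's flooring // is adjusted:

```python
def quotient(n, d, signal_errors=False):
    """Truncating integer quotient that returns a value by default."""
    if d == 0:
        if signal_errors:  # error signaling only when explicitly enabled
            raise ZeroDivisionError("quotient: zero divisor")
        return None        # stands in for <void>
    q = abs(n) // abs(d)
    return q if (n < 0) == (d < 0) else -q

print(quotient(7, 2))   # 3
print(quotient(-7, 2))  # -3 (truncated toward zero, like Scheme's quotient)
print(quotient(7, 0))   # None
```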

>  | > Grepping through a large body of Scheme code found no use of EXPT
>  | > where the two arguments are related.
>  | 
>  | - which has nothing to do with anything, functions should be considered
>  |   to be evaluated about static points:
>  | 
>  |   i.e. (f x y) == (f (+ x ~1/inf) (+ y ~1/inf))
> 
> The integer uses for EXPT should also be considered.

- yes, but there's no ambiguity there, as (expt 0 0) == 1 clearly for
  integers, it would seem.

>  |   there's nothing special about 0, as any function may impose
>  |   relative trajectories for their arguments:
>  | 
>  |   (define (f x y) (/ x (* y y y (- y 1))))
>  | 
>  |   as such the only consistent thing that an implementation can
>  |   warrant is that all primitive arithmetic expressions are
>  |   evaluated equivalently about the static values passed to them,
>  |   independently of whether or not the values passed to them have
>  |   begun to lose precision due to the limited dynamic range of an
>  |   implementation's number system. Thereby as a function's
>  |   arguments begin to lose precision, the function degrades in
>  |   precision correspondingly and consistently, rather than, after
>  |   already yielding relatively inaccurate results, deciding it
>  |   doesn't know the answer at all, or choosing to return a value
>  |   which is inconsistent with its previous results. (admittedly in
>  |   my opinion)
> 
> SRFI-73 is about exact numbers.  EXPT will only return exact numbers
> for exact arguments.  Loss of precision means inexact numbers.

- yes, I apologize for going off on a tangent; I was considering inexact
  values. (However, in the context of exact values, it seems that the
  only way to generate +-inf or +-0 would be directly from the literal
  use of such an abstract value in an expression itself, as it would
  seem impossible to generate an exact infinite value as a function of
  finite-valued expressions?)
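
  Indeed, exact rational arithmetic (Python's Fraction standing in for
  Scheme's exact rationals here) never overflows: finite operands always
  give finite results, so an exact infinity could only enter a
  computation as a literal:

```python
from fractions import Fraction

tiny = Fraction(1, 10**400)  # far below any float's dynamic range
print(tiny != 0)             # True: still exact, still nonzero
print(tiny * tiny != 0)      # True: no underflow, ever
print(float(tiny))           # 0.0: only coercion to inexact loses it
```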

>  | > (expt 0 0) ==> 1 is one of the possibilities for SRFI-70.  But I
>  | > am leaning toward the "0/0 or signal an error" choice to catch
>  | > the rare coding error.
>  | 
>  | - Again, in just my opinion, I'd rather a function return the most
>  |   likely useful static value as a function of its arguments, rather
>  |   than it trying to pretend it knows something about the arguments
>  |   passed to it and potentially generating a runtime fault.
>  | 
>  |   However it does seem potentially useful to be optionally warned
>  |   whenever the precision of a primitive calculation drops below
>  |   some minimal precision; i.e. it's likely much more useful to know
>  |   when a floating point value is denormalized (as it means that the
>  |   value no longer has a represent-able reciprocal), or when an
>  |   argument to an addition is less than the represented precision of
>  |   the other operand, as these are the type of circumstances which
>  |   result in inaccuracies. By the time one may underflow to 0, or
>  |   overflow to inf, and hope it gets trapped by some misguided
>  |   function implementation (which should have simply returned the
>  |   correct value based upon the arguments it was given, letting the
>  |   application check for what it believes is correct), it's already
>  |   much too late: regardless of whether some implementation's
>  |   arithmetic system discontinuity was tripped, the results of the
>  |   calculation are at best already suspect.
> 
> Bear@xxxxxxxxx is also interested in specifying precision.  See
> <http://srfi.schemers.org/srfi-70/mail-archive/msg00088.html> about an
> idea for latent precisions.

- thank you.
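
  Incidentally, the denormalization threshold mentioned above, beyond
  which a value loses its representable reciprocal, is directly
  observable with IEEE-754 doubles (Python shown):

```python
import sys

tiny = 5e-324         # smallest positive (fully denormalized) double
print(tiny > 0.0)     # True
print(1.0 / tiny)     # inf: the reciprocal is not representable
print(1.0 / sys.float_info.min < float("inf"))  # True: normals still are
```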

>  | >  | Where I understand that all inf's are not strictly equivalent,
>  | >  | but when expressed as inexact values it seems more ideal to
>  | >  | consider +-inf.0 to be equivalent to the bounds of the inexact
>  | >  | representation number system, thereby +-inf.0 are simply
>  | >  | treated as the greatest, and +-0.0 the smallest representable
>  | >  | inexact value;
>  | > 
>  | > <http://srfi.schemers.org/srfi-70/srfi-70.html#6.2.2x> shows that
>  | > inexact real numbers correspond to intervals of the real number line.
>  | > Infinities corresponding to the remaining half-lines gives very clean
>  | > semantics for inexact real numbers.  Infinitesimals (+-0.0) are a
>  | > solution in search of a problem.
>  | 
>  | - only if it's not considered important that inexact infinities have
>  |   corresponding reciprocals;
> 
> Inexact infinities have reciprocals: zero.  Their reciprocals are not
> unique, but that is already the case with IEEE-754 floating-point
> representations:

- yes, among other idiosyncrasies.

>   179.76931348623151e306  ==> 179.76931348623151e306
>   179.76931348623157e306  ==> 179.76931348623157e306
>   (/ 179.76931348623151e306)  ==> 5.562684646268003e-309
>   (/ 179.76931348623157e306)  ==> 5.562684646268003e-309
> 
>  |   which seems clearly desirable, as otherwise any expression which
>  |   may overflow the dynamic range of the number system can't
>  |   preserve the sign of its corresponding infinitesimal value; and
>  |   if that's not considered important, there's no reason to have
>  |   signed infinities either, etc.?
> 
> #i+1/0 is the half-line beyond the largest floating-point value.  The
> projection of that interval through / is a small open interval
> bordering 0.0.  That interval overlaps the interval of floating-point
> numbers closer to 0.0 than to any other.  Thus the reciprocal of
> #i+1/0 is 0.0.

- but the problem seems to be the reciprocal of #i-1/0, and its
  reciprocal, which should be where one began?

  (where if one introduces -0.0, then 0.0 is implied as being +0.0,
  leaving one with either a + or - 0, but nothing which is neither;
  unless one introduces yet another unsigned 0 (or ~0 hypothetically,
  which then implies ~inf)?)
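
  With a signed zero in the system, the round trip asked about above
  does work out under IEEE-754 (Python shown; Python refuses the
  zero-to-inf direction itself, but the sign needed to recover -inf is
  demonstrably preserved):

```python
import math

r = 1.0 / float("-inf")
print(r == 0.0)               # True: it compares equal to 0.0
print(math.copysign(1.0, r))  # -1.0: but it is -0.0, so the sign
                              # needed to get back to -inf survives
```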