Re: straw-man [was Re: arithmetic issues]

> From: bear <bear@xxxxxxxxx>
>> On Sat, 21 Jan 2006, Paul Schlie wrote:
>> - Upon more reflection, given that it's likely unreasonable to presume
>>  that an <exact> implementation must (or even could) reliably support
>>  infinitely precise calculations/representations, it must then support
>>  finite precision calculations, thereby necessitating its definition
>>  of overflow semantics, basically leaving the choice of either modular
>>  or saturating semantics; where either may be considered reasonable,
>>  it seems undisputed that modular semantics tend to be the simplest
>>  and most natural default of most machine and/or SW implementations,
>>  and does not preclude the throwing of a recoverable overflow exception
>>  if supported by the base implementation.
> 
> The thing is, I don't ever want it to be considered "wrong" for
> someone to add #e1.23456 and #e6.54321 and get exactly #e7.77777.
> I mean, that's what the numbers add up to, right?  And if the finite
> representation chosen by an implementor uses powers of ten rather
> than (or in addition to) powers of two, that's exactly what the
> answer will be.  If someone is relying on this answer to be inexact,
> or to exhibit a particular numeric error based on a presumed binary
> representation (i.e., relying on the operation expressed by '+' to
> be some particular approximation of addition rather than addition
> itself) then while they might be right for a lot of particular
> implementations, they are wrong in first principles.
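
  (Indeed, for illustration: in any Scheme whose reader maps exact
  decimal literals onto exact rationals, that addition is exact; a
  minimal sketch, assuming nothing beyond standard exact arithmetic:)

    ;; #e1.23456 reads as the exact rational 123456/100000, so the
    ;; sum below is computed without any rounding at all.
    (+ #e1.23456 #e6.54321)
    ;; => 777777/100000, which is exactly #e7.77777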

- The only point I was attempting to make is this: if it's accepted
  that no implementation can reliably support indefinitely precise
  calculations or representations of their resulting values, then a
  finite practical precision limit must be presumed, and an
  overflow/underflow condition may therefore result from an arbitrary
  calculation.  That condition should ideally be defined regardless
  of what the precision bound happens to be for a given
  implementation.  So although the result of a calculation may not be
  "arithmetically expected" due to an implementation's limited
  precision, its result is deterministic at the bounds of its capable
  precision in whatever base number system the implementation chooses
  (which further implies that the base, along with the fractional and
  integer precision, should be determinable).
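
  (As a sketch of the modular versus saturating semantics raised in
  the quoted text, here is a hypothetical 32-bit fixed-precision
  addition in Scheme; the names add-modular and add-saturating are
  illustrative only:)

    (define lo (- (expt 2 31)))        ; smallest representable value
    (define hi (- (expt 2 31) 1))      ; largest representable value

    (define (add-modular a b)
      ;; wrap the true sum back into [lo, hi] modulo 2^32
      (- (modulo (+ a b (expt 2 31)) (expt 2 32)) (expt 2 31)))

    (define (add-saturating a b)
      ;; clamp the true sum to the nearest representable bound
      (min hi (max lo (+ a b))))

  Either way the result at the bound is deterministic:
  (add-modular hi 1) => lo, while (add-saturating hi 1) => hi.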

> Similarly, most of the computable reals have a finite, exact and
> reasonably short representation, although it may be a "tree"
> representation involving logarithms, exponents, square roots, factors
> of irrational constants like e and pi, etc.  Such representations
> are something used a lot by specialized mathematical applications like
> Macsyma, although they're something of a pain to implement the basic
> functions for, because people want correct answers.  No matter what
> representation scheme you pick, there'll be numbers you can't represent
> exactly - but in principle, I don't want to ever forbid the "generic"
> operations on exact numbers from returning exact results, period.
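
  (A toy sketch of such a "tree" representation: (sqrt n) is kept
  symbolic unless n is a perfect square, so that squaring recovers
  the exact value; real systems like Macsyma are of course far more
  elaborate:)

    (define (exact-sqrt n)
      ;; exact integer result for perfect squares, otherwise a
      ;; symbolic node (sqrt n) left unevaluated
      (let loop ((r 0))
        (cond ((= (* r r) n) r)
              ((> (* r r) n) (list 'sqrt n))
              (else (loop (+ r 1))))))

    (define (square x)
      (if (and (pair? x) (eq? (car x) 'sqrt))
          (cadr x)                  ; (sqrt n) squared is exactly n
          (* x x)))

    ;; (square (exact-sqrt 2)) => 2, exactly; no rounding occurred.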

- Agreed; an exact implementation based on integer multiples of
  fractional divisions of pi, e, etc. may be very appropriate for
  various applications.  But that does not preclude the fact that
  values exist, including those not rational within that number
  system, which may overflow the representational precision supported
  by its implementation; the presumption of an infinitely precise
  implementation is not likely reasonable regardless of the numerical
  base chosen (it would seem?).

> If an implementor has gone out of his or her way to build a system
> in which say, the result of log2(327) is exact, I say more power to
> them and I don't want to see a bunch of requirements that can *only*
> be implemented effectively for ieee-float style representations.
> 
> I'd much rather see explicit operations like (ieee53 x) which
> returns the closest inexact number to x that is a member of the set
> of numbers representable as an ieee float with a 53-bit mantissa,

- which would seem to have the odd consequence of potentially
  introducing enormously large errors for otherwise "nearly exact"
  values, if they are constrained to the potentially much smaller
  dynamic range and/or precision of an "inexact" representation;
  that seems rather counterproductive for values which were chosen
  to be represented more exactly in the first place?

  (Which would seem to raise the question: what is the true purpose
  of an "exact" representation, if not to enable the representation
  of values with greater precision than may otherwise be represented
  as an "inexact" value?  The true purpose of an "inexact"
  representation seems in fact to rest on the premise that
  computational efficiency may be improved at the expense of
  numerical precision; thereby an "exact" implementation should
  perhaps more correctly be viewed as simply "more precise" when its
  precision and/or dynamic range exceeds that of an "inexact"
  implementation, but not necessarily "exact".)

> or (ieee53! x) which mutates x forcing it to be that number.
> If an implementation is concerned with speed, it's already using
> some format like this for all its inexact numbers, and these become
> the identity function and a no-op, respectively, get optimized out,
> and do not interfere with speed.  If an implementation is more
> concerned with correctness and uses macsyma-like numbers, then at
> least ordinary code is not subject to numerical errors caused by
> the choice of format unless the programmer explicitly requires
> that choice, and correctness does not suffer.
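
  (A minimal sketch of the proposed ieee53, assuming the host
  Scheme's inexact numbers are already IEEE doubles with a 53-bit
  mantissa, as is typical; under that assumption it reduces to
  exact->inexact:)

    (define (ieee53 x)
      ;; round x to the nearest number representable as an IEEE
      ;; double; when the host's inexact numbers are IEEE doubles,
      ;; the standard conversion performs exactly this rounding.
      (exact->inexact x))

    ;; (ieee53 1/3) => the nearest 53-bit value to 1/3,
    ;; printed as something like .3333333333333333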

- It seems fairly simple to presume that values and/or calculations
  based on "inexact" values are meant to be performed "inexactly",
  and correspondingly that those based on "exact" values are intended
  to be performed "more precisely", as otherwise they would have been
  specified as "inexact".

> The abstract of SRFI-77 talks about the need for less variety and
> freedom in numeric implementation. Aside from the thought that the
> numeric tower short of polar-complex numbers should be required
> rather than recommended, I simply do not agree.  I see inexact
> formats, especially where exact results are possible and representable,
> as a source of mathematical errors, and I think that it is the
> implementor's responsibility, insofar as cleverness allows and
> insofar as s/he cares about correctness, to produce a system
> that limits mathematical errors to exactly those explicitly
> requested by the programmer.

- Although maybe I'm alone in this, it would seem that, ideally,
  "exact" values should simply be viewed as merely "precise" within
  the constraints specified by their implementation (numerical base,
  fractional precision in that base, and maximum integer multiple of
  that fractional precision), of which multiple such types may be
  specified and used; while "inexact" values are likewise specified
  by their implementation's parameters (numerical base, fractional
  precision in that base, and exponential-multiple precision in that
  base).

  Thereby an "inexact" value utilizes a window of precision within
  the dynamic range over which an "exact" value would otherwise be
  fully precise.  (It is thus capable of representing only a subset
  of the values precisely representable within a corresponding
  "exact" implementation covering the same dynamic range; which may
  justify concluding that an overflow/underflow of that dynamic range
  should yield equivalent saturated values?)
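
  (To make that parameterization concrete, a hypothetical predicate
  testing whether an exact rational fits a format described by a
  base, a fractional precision, and a maximum magnitude; the names
  below are illustrative only:)

    (define (representable? q base frac-digits max-magnitude)
      ;; q fits if scaling by base^frac-digits leaves an integer
      ;; (no rounding needed) and |q| is within the magnitude bound
      (and (integer? (* q (expt base frac-digits)))
           (<= (abs q) max-magnitude)))

    ;; (representable? #e7.77777 10 5 1000000) => #t
    ;; (representable? 1/3       10 5 1000000) => #f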