
Re: reading NaNs

Okay...

On rereading SRFI-77, I want to point out a couple of things.

First, the new external representation with a suffixed bar and a
decimal number indicating the bits of precision:  this is a good and
necessary extension to the information carried in inexact constants,
and I applaud it.

But we've got what is essentially one piece of information all over
the place now.  The inexact prefix, the exponent marker, and this new
suffix all signify inexactness and information about inexactness.
Since a lot of implementations already support #<decimal>R prefixes
for radix, could we consider #<decimal>I prefixes for inexactness?

This would eliminate the need for the suffix entirely, and put the primary
inexactness information in one place.  The exponent markers are still
somewhat useful as ways to specify the use of indeterminate amounts of
precision that happen to be efficient on the current hardware.  So somebody
who needed only a little precision and wanted whatever the system found
"easy" or "efficient" could write 6.0F0, and somebody who knows that his
algorithm won't work if he has less than 16 bits of precision can write
#16i6.0.
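
Under this (purely hypothetical) prefix, the two spellings would carry
the same information:

 #16i6.0  ; the proposed prefix: inexact 6.0 at 16 bits of precision
 6.0|16   ; the draft's suffix form of the same constant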

I think you should specify that an implementation may use more bits of
precision than requested, and introduce some way to request that an
error be signalled if the implementation must use less.
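
One possible shape for that request, sketched with a hypothetical
precision accessor (neither float-precision nor assert-precision exists
in the draft; both names are purely illustrative):

 ;; hypothetical: float-precision returns the bits of precision an
 ;; inexact number actually carries
 (define (assert-precision bits x)
   (if (< (float-precision x) bits)
       (error "implementation cannot supply requested precision" x bits)
       x))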

Regarding "safe" and "unsafe" mode;  I think that "unsafe" mode should
_allow_ implementations to skip checks, not _require_ them to skip
checks.  Code that is incorrect in safe mode is still incorrect in unsafe
mode, and we should not provide a canonical way to run unsafe or incorrect
code.  Additionally, it is a lower initial bar for implementors; they can
get the system working correctly (in safe mode) and provide a trivial
unsafe mode (equal to their safe mode) to start with.  Finally, we can't
really tell in advance which checks can or ought to be done or skipped;
as compiler technology advances or as hardware advances some checks may
become "free" either at compile-time or in hardware at runtime.
Theoretically, some checks may even acquire negative cost if the hardware
uses them as "cue" indicators for heuristic branch predictions or prefetching.
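
If "unsafe" merely _allows_ skipping checks, the bootstrap is literally
trivial; the unsafe operators can start life as aliases for the safe
ones (the unsafe- names here are illustrative, not the SRFI's):

 ;; a conforming "unsafe" mode that skips nothing: each unchecked
 ;; operator is just the checked operator under another name
 (define unsafe-vector-ref vector-ref)
 (define unsafe-car        car)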

Your redefinition of eqv? makes it the case that:

(eqv? 6.0L0 6.0S0) => #t on implementations that use a single
floating-point representation size, and #f on implementations that use
multiple floating-point representation sizes (because procedures such as
+ can produce results differing in precision depending on the precision
of their arguments).

So we have a situation where, first, the results of eqv? are specified
but implementation-dependent, and second, two numbers can be = without
being eqv?.  This is consistent with Lucier's proposal, which you
mention at the end of the SRFI, but you don't highlight the difference
anywhere.  I think I agree with your rationale, with the results, and
with the respecification of eqv?; but the ramifications for numerically
equal numbers of different precisions are not immediately obvious, so I
wanted to point them out.
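
Concretely, on an implementation with distinct short and long flonums I
would expect something like:

 (= 6.0S0 6.0L0)    ; => #t -- numerically equal
 (eqv? 6.0S0 6.0L0) ; => #f -- but different precisions, hence not
                    ;    operationally equivalent under the draft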

I strongly disagree with the idea of mixed exactness in the real and
imaginary parts of a complex number.  5.0+3i and 5+3.0i are the same
inexact number and should not be treated differently.
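
That is, on the reading I am arguing for, both spellings denote one and
the same wholly inexact object:

 (exact? 5.0+3i)       ; => #f -- one inexact part makes the number inexact
 (eqv? 5.0+3i 5+3.0i)  ; => #t -- both should read as 5.0+3.0i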

I would suggest requiring an error to be signalled if inexact->exact
gets an argument greater than or less than any exact number representable
by the implementation.  Likewise if exact->inexact receives an argument
outside the range representable as inexact numbers in the implementation.
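
Under that requirement, conversions that leave the representable range
would signal rather than silently overflow (the thresholds are of
course implementation-dependent):

 (inexact->exact +inf.0)         ; no exact number represents this: error
 (exact->inexact (expt 10 1000)) ; error if 10^1000 exceeds the largest
                                 ; representable inexact number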

Your formulation permits a fixnum range [0,1] and then states that
the modular mathematics primitives perform math modulo hi-lo+1.  I
first thought, somewhat obtusely, 'What does it mean to perform
mathematics modulo zero??' After a fractional second's thought, it
became clear that you meant (hi-lo)+1, not hi-(lo+1).  Consider using
fully parenthesized notation to avoid misinterpretation.  Remember
to also use it in your description of the functions that do modular
operations.
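
To spell out the two readings against that [0,1] range:

 (hi - lo) + 1  =  (1 - 0) + 1  =  2   ; arithmetic modulo 2, as intended
 hi - (lo + 1)  =  1 - (0 + 1)  =  0   ; the nonsensical "modulo 0" reading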

I still think that bitwise operations on numbers are incorrect.  Bitwise
operations should operate on bitvectors, not on numbers.  You are not
using them as numbers when you do bit operations on them, and their
identity as numbers does not give the length of the bitvector you're
using them for. This is faint and fuzzy thinking inherited from C and
confuses bit representation with semantics.
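
The missing length shows up in something as simple as complementing
zero (bitwise-not stands in for whatever the draft's operator is named;
the bitvector reading is my gloss):

 (bitwise-not 0) ; as a number: -1, an unbounded two's-complement run of
                 ; 1s; as an 8-bit bitvector: #b11111111 -- the integer 0
                 ; carries no width to choose between the two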

Absolute value is just a special case of complex magnitude, restricted
to the various ranges. In fact, (define abs magnitude) works just fine.
It didn't seem redundant in R5RS because complex numbers (and therefore
the magnitude accessor) weren't required there.  In a situation where
the whole numeric tower is required, a separate abs seems redundant.
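
A full-tower implementation bears this out directly (whether the
complex case returns an exact result may vary by implementation):

 (define abs magnitude)
 (abs -5)          ; => 5    -- exact real
 (abs -5.0)        ; => 5.0  -- inexact real
 (magnitude 3+4i)  ; => 5, possibly exact: the general case abs restricts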

According to your definition of real? ...
 (real? z) <==> (let ((im (imag-part z))) (and (zero? im) (exact? im)))
but simultaneously,
 (real? +nan.0) ==> #t.
This implies that (imag-part +nan.0) is an exact zero, which seems wrong,
although consistent with your declaration of NaN as a real number of
indeterminate value.  Are complex operations constrained to return some
different kind of NaN, or do their results get coerced to the real line
in the event of an error?
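
For instance, I do not know what the draft intends for something like:

 (imag-part (* +nan.0 0+1.0i)) ; NaN propagates into the imaginary part --
                               ; is the result still "real" per the draft,
                               ; or some different kind of NaN entirely?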

				Bear