Some preliminary comments

I thank the authors of this proposal for their work.

I'm not entirely sure that I understand the thrust of this proposal, or fully comprehend what it encompasses, but I would like to go through it carefully and raise some questions and comments.



> Should the R5RS procedures for generic arithmetic (e.g. +) remain in R6RS? Here are five possible answers, phrased in terms of the + procedure:
>
>       1. + is not defined in R6RS.
>       2. + is defined to be a synonym for the ex+, so its domain is restricted to exact arguments, and always returns an exact result.
>       3. + is defined as the union of the ex+ and in+ procedures, so all of its arguments are required to have the same exactness, and the exactness of its result is the same as the exactness of its arguments.
>       4. + is defined as in R5RS, but with the increased portability provided by requiring the full numeric tower. This alternative is described in the section R5RS-style Generic Arithmetic.
>       5. + is defined to return an exact result in all cases, even if one or more of its arguments is inexact. This alternative is described in the section Generic Exact Arithmetic.
>
>    Will Clinger prefers the 4th possibility, Mike Sperber the 5th.

I have real difficulties with 5, which I hope to expand on at a later time. Part of the problem is that 5 assumes that inexact->exact gives reasonable results on inexact arguments, and I can't see how this can be true if, for example, inexacts are implemented as the computable reals.
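
For concreteness, here is a minimal sketch of what alternative 5 seems to require (my reading, not necessarily the authors'): inexact arguments are converted to exact numbers before the operation is performed.

(define (exact+ . args)          ; hypothetical name, not in the SRFI
  (apply + (map inexact->exact args)))

;; Well-defined for IEEE doubles, since every finite double is a
;; rational with a power-of-two denominator:
;;   (exact+ 0.5 0.5)  => 1      ; an exact integer
;; But if the inexacts are computable reals, there is no sensible
;; finite value for inexact->exact to return.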

> The real?, rational?, and integer? predicates must return false for complex numbers with an imaginary part of inexact zero, as non-realness is now contagious. This causes possibly unexpected behavior: `(zero? 0+0.0i)' returns true despite `(integer? 0+0.0i)' returning false. Possibly, new predicates realistic?, rationalistic?, and integral? should be added to say that a number can be coerced to the specified type (and back) without loss. (See the Design Rationale.)

I don't like the word contagious; I cannot think of a rigorous definition in many contexts (including this one).

> Most Scheme implementations represent an inexact complex number as a pair of two inexact reals, representing the real and imaginary parts of the number, respectively. Should R6RS mandate the presence of such a representation (while allowing additional alternative representations), thus allowing it to more meaningfully discuss semantic issues such as branch cuts?

Yes.

> Should `(floor +inf.0)' return +inf.0 or signal an error instead? (Similarly for ceiling, flfloor, infloor, etc.)

It should *be* an error; whether it should *signal* an error is another matter.

> The bitwise operations operate on exact integers only. Should they live in the section on exact arithmetic?

I don't see this as an important issue.

>    Should they carry ex prefixes?

No.

>     Or should they be extended to work on inexact integers as well?

Just exact integers.

> +nan.0 represents the result of (/ 0.0 0.0), and may represent other NaNs as well. (This SRFI does not require read/write invariance for NaNs.)

OK, but writing a NaN should give whatever info is in the significand. There seems to be no way to get that information using Scheme procedures described in this document.

Basically, I believe that the programming language "Scheme", which we can specify, should "play well with others", where "others" includes hardware and OS runtime libraries, which are specified either by hardware companies or by other standards bodies. The hardware and runtime libraries of different systems generate different NaNs, and if we're going to go to this much trouble to add what I hope will be useful flonum arithmetic to Scheme, some way should be provided in Scheme to distinguish NaNs that differ (in their precision, range, or mantissa, i.e., according to the IEEE standard) and to determine whether two NaNs are the same.
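
Something along the following lines would suffice; every name here is hypothetical, a sketch of the kind of interface I have in mind, not anything in this SRFI:

;; Hypothetical inquiry procedure (NOT in the SRFI):
;;   (flonum->nan-payload fl) => the sign, precision, and significand
;;                               bits of a NaN, packed into an exact
;;                               integer
(define (nan-same? fl1 fl2)      ; hypothetical
  (and (flnan? fl1)
       (flnan? fl2)
       (= (flonum->nan-payload fl1)
          (flonum->nan-payload fl2))))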

> Implementations that use binary floating point representations of real numbers should represent x|p using a p-bit significand if practical, or by a greater precision if a p-bit significand is not practical, or by the largest available precision if p or more bits of significand is not practical within the implementation.
>
> Note: The precision of a significand should not be confused with the number of bits used to represent the significand. In the IEEE floating point standards, for example, the significand's most significant bit is implicit in single and double precision but is explicit in extended precision. Whether that bit is implicit or explicit does not affect the mathematical precision. In implementations that use binary floating point, the default precision can be calculated by calling the following procedure:
>
>        (define (precision)
>          (do ((n 0 (+ n 1))
>               (x 1.0 (/ x 2.0)))
>            ((= 1.0 (+ 1.0 x)) n)))
>
> Note: When the underlying floating-point representation is IEEE double precision, the |p suffix should not be omitted in all cases: Denormalized numbers have diminished precision, and therefore should carry a |p suffix with the actual width of the significand.

It seems ironic to me that this program itself assumes generic arithmetic.
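
For instance, the loop can be restated with the SRFI's own typed operations (a sketch, assuming the fixnum and flonum procedures defined elsewhere in this draft):

(define (fl-precision)
  (do ((n 0 (fx+ n 1))
       (x 1.0 (fl/ x 2.0)))
      ((fl= 1.0 (fl+ 1.0 x)) n)))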

It is not clear to me how, under this proposal, a Scheme will implement floating-point arithmetic with more than one precision. This is an important part of some numerical algorithms (iterative refinement in solving linear systems, for example).

The first paragraph seems to make the |p notation meaningless; it seems to specify nothing about the relationship between p and the precision of the resulting flonum.
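
To make the complaint concrete: as far as I can tell, an implementation that has only IEEE double precision may read both of the following as the same 53-bit flonum, since each escape clause ("if practical", "greater precision", "largest available precision") is satisfied:

1.0|24   ; requests a 24-bit significand, but only "if practical"
1.0|53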

> Implementations are not required to use floating-point representations, but every implementation is required to designate a subset of its inexact reals as flonums, and to convert certain external representations into flonums.

What does this mean? In particular, can this subset be empty? Is that what "Implementations are not required to use floating-point representations" means?

> If a <decimal 10> does not contain a non-empty <mantissa width> and does not contain one of the exponent markers s, f, d, or l, but does contain a decimal point or the exponent marker e, then it is an external representation for a flonum. Furthermore inf.0, +inf.0, -inf.0, nan.0, +nan.0, and -nan.0 are external representations for flonums. Some or all of the other external representations for inexact reals may also represent flonums, but that is not required by this SRFI.
>
> If a <decimal 10> contains a non-empty <mantissa width> or one of the exponent markers s, f, d, or l, then it represents an inexact number, but does not necessarily represent a flonum.

Wow; what does this mean? The difference between flonums and inexacts is confusing to me. (Sorry for "surfer dude" mode here.)

>    Safe and Unsafe Mode

> This SRFI allows a Scheme implementation to run code in one of two global modes: "safe" and "unsafe" mode. These affect the fixnum and the flonum operations. In safe mode, these operations must check that their arguments are actually fixnums or flonums respectively, or perform possible additional checking as required by the specifications of the operations. In unsafe mode, these operations must provide no such checking. This distinction allows an implementation to generate efficient numerical code at the cost of avoiding run-time checking. The R6RS should require a Scheme implementation to provide the safe mode.

I presume a Scheme "implementation" may include a compiler. I can't imagine that the requirement that "In unsafe mode, these operations must provide no such checking" means that a compiler cannot use dataflow and control flow information to signal an error in code, rather than compiling it to nonsense and deliberately crashing a program.


>    Equivalence of Objects
>
> The R6RS specification of eqv? for numbers should be changed as follows.
>
>    The eqv? procedure returns #t if:
>
> * obj1 and obj2 are both exact numbers, and are numerically equal (see =, section 6.2 Numbers).
> * obj1 and obj2 are both inexact numbers, are numerically equal (see =, section 6.2 Numbers), and yield the same results (in the sense of eqv?) when passed as arguments to any other procedure that can be defined as a finite composition of Scheme's standard arithmetic procedures.
>
>    The eqv? procedure returns #f if:
>
> * one of obj1 and obj2 is an exact number but the other is an inexact number.
> * obj1 and obj2 are numbers for which the = procedure returns #f.
> * obj1 and obj2 yield different results (in the sense of eqv?) when passed as arguments to any other procedure that can be defined as a finite composition of Scheme's standard arithmetic procedures.

First, I presume that ", or" should be added to the end of the first item of the first list and to the ends of the first and second items of the second list.

OK, I just went through the whole document and checked for every instance of = and could not find out whether

(let ((x (/ 0. 0.)))
  (= x x))

returns #t or #f. It is not useful for it to return #t if IEEE arithmetic is the underlying inexact representation and "=" means "IEEE arithmetic =". So a definition of eqv? in terms of = seems incomplete. (Or does = mean fl= here?)
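
Concretely, my worry:

(let ((x (/ 0. 0.)))
  (eqv? x x))   ; => #f by the second list above, if = is IEEE =

and an eqv? that is not reflexive breaks memv, assv, and case on NaNs.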

The definition of eqv? is recursive for inexact numbers, and I don't see where the recursion is based.

If we're going to go to all this trouble to insert flonums (and I presume the main motivation is to have IEEE flonums) into Scheme, we should add the inquiry functions of the IEEE standard (which I can't look up at the moment, because it's not online) that return the various properties of the parts of an IEEE flonum.
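
What little can be expressed with the procedures given is a crude classifier (a sketch; the point is that the sign, exponent, and significand inquiries cannot be written at all):

(define (fl-class fl)
  (cond ((flnan? fl) 'nan)
        ((or (fl= fl +inf.0) (fl= fl -inf.0)) 'infinite)
        ((fl= fl 0.0) 'zero)     ; even this is shaky: (fl= -0.0 0.0)
        (else 'finite)))         ; is unspecified above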

>    Numerical Type Predicates
>    ...
> Note: The behavior of these type predicates on inexact numbers is unreliable, because any inaccuracy may affect the result.

Could you explain this statement?  Or give an example?

>    Integer Division
>    ...

Why div+mod for reals and quotient+remainder for exact integers?

I'm losing it a bit; are the procedures in this section defined for inexact numbers? If so, then the requirement for div+mod that

x1 = nd * x2 + xm.

may not be realizable due to precision issues; perhaps a proof is needed here?

The requirement that the sign of the second argument acts as a selector to decide what function div+mod computes seems to give extra meaning to (negate y) that should not be there.
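
As I read the draft, the invariant is x1 = nd * x2 + xm with 0 <= xm < |x2|, so nd is floor(x1/x2) when x2 > 0 and ceiling(x1/x2) when x2 < 0; the sign of x2 does nothing but select between the two. A sketch, for exact integers only (my names, not the SRFI's):

(define (my-div+mod x1 x2)
  (let* ((nd (if (> x2 0)
                 (floor (/ x1 x2))
                 (ceiling (/ x1 x2))))
         (xm (- x1 (* nd x2))))
    (values nd xm)))

;; (my-div+mod  7  2)  =>  3 1
;; (my-div+mod -7  2)  => -4 1
;; (my-div+mod  7 -2)  => -3 1
;; (my-div+mod -7 -2)  =>  4 1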

>    Fixnums
>    ...
> Every implementation must define its fixnum range as a closed interval [lo, hi] such that lo and hi are (mathematical) integers with lo <= 0 < 1 <= hi. Every mathematical integer within an implementation's fixnum range must correspond to an exact integer that is representable within the implementation. The fixnum operations of an implementation will perform arithmetic modulo hi-lo+1.

In mathematics, "arithmetic modulo hi-lo+1" means that results go from 0 to hi-lo; I can't imagine that this is what is wanted here.
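
Presumably what is wanted is wrapping into [lo, hi] rather than into [0, hi-lo]; a sketch with a toy 4-bit range (my guess at the intent):

(define lo -8)                   ; toy range, for illustration
(define hi 7)
(define width (+ (- hi lo) 1))   ; hi-lo+1 = 16
(define (wrap n)                 ; map any integer into [lo, hi]
  (+ lo (modulo (- n lo) width)))

;; (wrap (+ hi 1))  => -8 = lo   ; overflow wraps around
;; (wrap (- lo 1))  =>  7 = hi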

As a practical matter, signed integer arithmetic on all modern processors wraps, so that

hardware-signed-+ (2^31-1) 1  =>  -2^31,

yet the C standard says that such behavior is undefined. The C standard does require that *unsigned* arithmetic wraps:

hardware-unsigned-+ (2^32-1) 1 => 0

which is probably the reason why hardware signed arithmetic wraps the way it does (so the processor can use a single + operation for both signed and unsigned arithmetic).

> <definition of many operations that naturally can take multiple arguments to take only two arguments>

Let's let the interpreter or compiler worry about doing the folding of these operations; I don't see any reason to require the programmer to do it by hand.
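
The folding being demanded of the programmer looks like this (a sketch over the draft's two-argument fl+); a compiler can unfold a known-arity call to such a procedure into the same binary operations:

(define (fl-sum fl . rest)       ; hypothetical n-ary sum
  (if (null? rest)
      fl
      (apply fl-sum (fl+ fl (car rest)) (cdr rest))))

;; (fl-sum 1.0 2.0 3.0)  =>  (fl+ (fl+ 1.0 2.0) 3.0)  =  6.0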

>    library procedure: fxabs fx

> This procedure returns `(fx- 0 fx)' if fx is negative, fx otherwise.

So, do you want

(fxabs least-fixnum)  ==> least-fixnum

which is what this specifies? (In a two's-complement range, where lo = -(hi+1), the value (fx- 0 least-fixnum) = hi+1 is not representable and wraps modulo hi-lo+1 back to least-fixnum.)

>    procedure: fxdiv+mod fx1 fx2
>    library procedure: fxdiv fx1 fx2
>    library procedure: fxmod fx1 fx2
>    library procedure: fxquotient fx1 fx2
>    library procedure: fxmodulo+remainder fx1 fx2
>    library procedure: fxmodulo fx1 fx2
>    library procedure: fxremainder fx1 fx2
>
> These procedures implement number-theoretic integer division modulo hi-lo+1. See the section on Integer Division.

This seems to be a laundry list of ad-hoc, incompletely specified procedures. I really don't know what you mean by these.

>    procedure: fxbitwise-not fx

> Returns the fixnum which is the one's-complement of its argument. congruent mod hi-lo+1.

I believe it should be "ones-complement" and that the "congruent mod hi-lo+1" is unnecessary.
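
In support of that: for the usual two's-complement ranges, with lo = -(hi+1), the ones-complement of fx is -1-fx, which lies in [lo, hi] whenever fx does, so no wrapping can ever occur. A sketch:

(define (my-fxbitwise-not fx)    ; hypothetical name
  (fx- -1 fx))                   ; always within [lo, hi] when
                                 ; lo = -(hi+1); no wrap possible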

>    procedure: fxarithmetic-shift fx1 fx2

> This procedure conceptually shifts the two's complement representation of fx1 fx2 bits left when fx2 > 0, and -fx2 bits right when fx2 < 0, extending the sign. (It returns fx1 when fx2 = 0.) It returns the result of that shift congruent mod hi-lo+1.

Do you mean "(eq? fx1 (fxarithmetic-shift fx1 0))"?

>    procedure: fl= fl1 fl2
>    procedure: fl< fl1 fl2
>    procedure: fl<= fl1 fl2
>    procedure: fl> fl1 fl2
>    procedure: fl>= fl1 fl2
>
> These procedures return #t if their arguments are (respectively): equal, monotonically increasing, monotonically nondecreasing, monotonically decreasing, or monotonically nonincreasing, #f otherwise. These predicates are required to be transitive.
>
>    (fl= +inf.0 +inf.0)           ==>  #t
>    (fl= -inf.0 +inf.0)           ==>  #f
>    (fl= -inf.0 -inf.0)           ==>  #t
>    (fl= 0.0 -0.0)                ==>  unspecified
>    (fl< 0.0 -0.0)                ==>  unspecified
>    (fl= +nan.0 fl)               ==>  unspecified
>    (fl< +nan.0 fl)               ==>  unspecified
>
> The following behavior is strongly recommended but not required by R6RS:
>    (fl= 0.0 -0.0)                 ==>  #t
>    (fl< -0.0 0.0)                 ==>  #f
>    (fl= +nan.0 +nan.0)            ==>  #f
>
>    procedure: flinteger? fl
>    procedure: flzero? fl
>    procedure: flpositive? fl
>    procedure: flnegative? fl
>    procedure: flodd? ifl
>    procedure: fleven? ifl
>    procedure: flnan? fl

> These numerical predicates test a flonum for a particular property, returning #t or #f. Flinteger? tests if the number is an integer, flzero? tests if it is fl= to zero, flpositive? tests if it is greater than zero, flnegative? tests if it is less than zero, flodd? tests if it is odd, fleven? tests if it is even, and flnan? tests if it is a NaN.
>
>    (flnegative? -0.0) ==> #f
>
> Note: `(flnegative? -0.0)' must return #f, else it would lose the correspondence with (fl< -0.0 0.0), which is #f according to the IEEE standards.
>
>    procedure: flmax fl1 fl2
>    procedure: flmin fl1 fl2
>
>    These procedures return the maximum or minimum of their arguments.
>
>    procedure: fl+ fl1 fl2
>    procedure: fl- fl1 fl2
>    procedure: fl* fl1 fl2
>    procedure: fl/ fl1 fl2
>
> These procedures return the flonum sum, difference, product, or quotient of their flonum arguments. In general, they should return the flonum that best approximates the mathematical sum, difference, product, or quotient of their arguments. (For implementations that represent flonums as IEEE binary floating point numbers, the meaning of "best" is reasonably well-defined by the IEEE standards.)
>
> For undefined quotients, fl/ behaves as specified by the IEEE standards:
>
>    (fl/ 1.0 0.0)  ==> +inf.0
>    (fl/ -1.0 0.0) ==> -inf.0
>    (fl/ 0.0 0.0)  ==> +nan.0

I realize that "A foolish consistency is the hobgoblin of little minds", but why in heaven's name do you justify some requirements by saying "we must follow IEEE arithmetic" while totally disregarding other IEEE arithmetic requirements? Either implement the IEEE spec for floating-point arithmetic in Scheme flonum arithmetic or forget it.

>    library procedure: flnumerator fl
>    library procedure: fldenominator fl
>
> These procedures return the numerator or denominator of their argument as a flonum; the result is computed as if the argument was represented as a fraction in lowest terms. The denominator is always positive. The denominator of 0 is defined to be 1.
>
>    (flnumerator +inf.0)           ==>  +inf.0
>    (flnumerator -inf.0)           ==>  -inf.0
>    (fldenominator +inf.0)         ==>  1.0
>    (fldenominator -inf.0)         ==>  1.0
>    (flnumerator 0.75)             ==>  3.0 ; example
>    (fldenominator 0.75)           ==>  4.0 ; example
>
> The following behavior is strongly recommended but not required by R6RS:
>
>    (flnumerator -0.0)             ==> -0.0

If +inf.0 and +nan.0 are not rational, I don't think it is reasonable for them to be in the domain of flnumerator and fldenominator.

>    procedure: flsqrt fl
>
> Returns the principal square root of z. For a negative argument, the result may be +nan.0, or may be some meaningless flonum. With flexp and fllog defined as above, the value of `(flsqrt fl)' is defined as `(flexp (fl/ (fllog fl) 2.0))'.

Aren't the last two statements contradictory?
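
Concretely, for a negative argument (my reading of the two sentences):

;; (fllog -4.0)                      => presumably +nan.0
;; (fl/ (fllog -4.0) 2.0)            => +nan.0
;; (flexp (fl/ (fllog -4.0) 2.0))    => +nan.0

so the defining identity pins the result down to +nan.0, while the first sentence also allows "some meaningless flonum".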

As a general question---why define the flonum versions of the elementary functions to possibly return complex results rather than to have restricted ranges or to return NaNs for arguments that would not give a real result? If the goal of adding flonum arithmetic is speed, why define these functions in such a way that the built-in hardware or OS runtime library routines cannot be used directly?

>    procedure: flexpt fl1 fl2
>
>    Returns fl1 raised to the power fl2. For nonzero fl1
>
>    fl1^fl2 = e^(z2 log z1)
>
> 0^fl is 1 if z = 0, and 0 if fl is positive. Otherwise, the result may be +nan.0, or may be some meaningless flonum.

I take it z2 should be fl2 and z1 should be fl1, and z should be fl.

Does the "0" here mean "0.0"?   Does the "1" here refer to "1.0"?

(Similar questions arise for insqrt.)

Enough for now.

The paper

[Egner et al. 2004] Sebastian Egner, Richard Kelsey, Michael Sperber. Cleaning up the tower: Numbers in Scheme. In Proceedings of the Fifth ACM SIGPLAN Workshop on Scheme and Functional Programming, pages 109--120, September 22, 2004, Snowbird, Utah. Technical report TR600, Computer Science Department, Indiana University

is cited as justification for several parts of this proposal. Unfortunately, I believe this paper to be seriously flawed.

I'm bringing this up for the following reason: I don't believe that this paper can be used, by itself, as justification for any aspect of this proposal. It may be used as an *argument* for certain aspects of the proposal, but as there has been no general discussion of the merits of the paper, either in this forum or elsewhere, I don't believe that its conclusions can be treated as settled decisions. So, if the authors of this SRFI wish to use arguments from that paper as justification for certain aspects of *this* proposal, they should repeat those arguments here, where they can be discussed and debated as usual.

Brad