
Re: Arithmetic issues

This page is part of the web mail archives of SRFI 77 from before July 7th, 2015. The new archives for SRFI 77 contain all messages, not just those from before July 7th, 2015.





On Mon, 17 Oct 2005, Michael Sperber wrote:


> Now, the Issues section in the SRFI is pretty long.  We were hoping
> to get some feedback on where people stand on these issues, so it'd
> be great if you could see it as some kind of questionnaire and just
> fire off your position on the issues where you have one.  You don't
> have to bother with a rationale.  (But of course rationales are
> always appreciated.)

Okay.  Now I can do the rest...

> If R6RS does not adopt a R5RS-style model for the generic
> arithmetic, should it still provide more R5RS-compatible generic
> arithmetic as a library?

I wouldn't have a problem with that, but I guess the questions are how
much to provide natively and how much to leave to libraries, and how
to treat or signal errors.  Those are tough questions.

If we intend for scheme systems to maintain the ability to be small
and simple, then basic fixnum/flonum operations seem like the right
thing to leave in the core language.  But that will give errors
(either signalled errors, or "returned errors" like infinities, or
"silent" errors like wraparounds) for a lot of operations that are no
longer errors when the various math libraries (e.g., bignums,
extended floats) get loaded.  How exactly should we treat those errors
in the base language?

Similarly, a series expansion written for the base language's flonums
could easily give an "out-of-memory" error or make the system
cripplingly slow once math libraries (e.g., infinite-precision
rationals) are loaded and the same code runs on exact rationals.
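To make the cost concrete, here's a sketch (names made up) of the kind
of loop that is cheap with flonums but balloons once generic arithmetic
starts producing exact rationals:

```scheme
;; Partial sums of 1/k^2.  Pass 1 for `one' to get exact rationals,
;; 1.0 to get flonums.  The exact version keeps every digit, so the
;; denominator of the running sum grows without bound; the flonum
;; version runs in constant space.
(define (series-sum n one)
  (let loop ((k 1) (sum 0))
    (if (> k n)
        sum
        (loop (+ k 1) (+ sum (/ one (* k k)))))))

;; (series-sum 1000 1)   => one enormous exact fraction
;; (series-sum 1000 1.0) => a single flonum near 1.6439
```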

> The external representations of 0.0, -0.0, infinities and NaNs must
> be specified. The notations used here are used by several other
> languages, and have been adopted by several implementations of
> Scheme, but other notations are possible

It is more important that R6RS pick something than it is what exactly
they pick.  Seriously, just make a decision that's not grotesquely
verbose (nothing over, say, ten characters long) and doesn't get
confused with an identifier, and I won't object.

> The fixnum, flonum, and inexact arithmetic come with a full deck of
> operations, including some that are defined in terms of integers
> (such as quotient+remainder, gcd and lcm), and others that are
> easily abused (such as fxabs). Should these be pruned?

No need for that, but they should be placed in libraries as extensions.

> The R5RS says this:
> Rationale: Magnitude is the same as abs for a real argument, but abs
> must be present in all implementations, whereas magnitude need only
> be present in implementations that support general complex numbers.

> Given that this SRFI suggests requiring all implementations to
> support the general complex numbers, should abs (and exabs and
> inabs) be removed?

I don't think that is reasonable.  I'd move them (and magnitude) to
libraries, but not remove them.

> The real?, rational?, and integer? predicates must return false for
> complex numbers with an imaginary part of inexact zero, as
> non-realness is now contagious. This causes possibly unexpected
> behavior: `(zero? 0+0.0i)' returns true despite `(integer? 0+0.0i)'
> returning false. Possibly, new predicates realistic?,
> rationalistic?, and integral? should be added to say that a number
> can be coerced to the specified type (and back) without loss. (See
> the Design Rationale.)

If something can be coerced to real, rational, or integer without loss
of information, I think those predicates should return true.  A
complex number with an inexact part is simply an inexact number, and,
if its imaginary part is zero, it can be coerced to an inexact number
of some other type without loss of information.
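In concrete terms, the proposal's contagion rule versus the position
argued above:

```scheme
;; Under the SRFI's proposed contagion rule:
(zero?    0+0.0i)   ; => #t  -- numerically equal to zero
(integer? 0+0.0i)   ; => #f  -- the inexact imaginary part makes it
                    ;          "non-real", hence non-integral
;; Under the position argued above, coercing 0+0.0i to an integer
;; loses no information, so (integer? 0+0.0i) would be #t.
```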

> Most Scheme implementations represent an inexact complex number as a
> pair of two inexact reals, representing the real and imaginary parts
> of the number, respectively. Should R6RS mandate the presence of
> such a representation (while allowing additional alternative
> representations), thus allowing it to more meaningfully discuss
> semantic issues such as branch cuts?

I think branch cuts can be meaningfully discussed without mandating
the presence of a cartesian representation.  To take the most common
alternative: a polar representation would facilitate the discussion
of branch cuts just as much.  So I don't
agree with the proposed rationale for mandating the presence of a
cartesian representation.

That's not to say it's a bad idea; but if we want a solid rationale
for cartesian as opposed to polar representation we should be talking
to a number theorist about the propagation of errors through
complex-number calculations and see if there's a reason to believe
that cartesian (or polar) representation generally gives more accurate
results.

> The x|53 default for the mantissa width discriminates against
> implementations that default to unusually good representations, such
> as IEEE extended precision. Are there any such implementations? Do
> we expect such implementations in the near future?

I think that it is *always* a bad idea to standardize in a way that
discriminates against the best thing you can imagine an implementor
doing.  I recall that Chicken and a few other Schemes can be compiled
with support for extended floating-point formats for people who want,
e.g., 512-bit mantissas and 128-bit exponents.  And I think that's a
good thing, and the standard certainly should not forbid or
discriminate against it.

> Should `(floor +inf.0)' return +inf.0 or signal an error instead?
> (Similarly for ceiling, flfloor, infloor, etc.)

Hmmm.  Intuitively, I'd make an analogy from the result of 'floor' to
the result of the subtraction of some fraction less than one.  If we
don't return an error from the latter, we shouldn't return an error
from the former.  So I'd treat infinities as fixed points of floor,
ceiling, round, truncate, etc.  You would probably want to
signal an error if some function tries to coerce an infinity to an
exact number, though, and this usually happens with the results of
these operations.
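A sketch of that convention (hypothetical, precisely because this is
the question being decided):

```scheme
(floor   +inf.0)   ; => +inf.0  -- the infinity passes through
(ceiling -inf.0)   ; => -inf.0
;; ...but coercing such a result to an exact number has no sensible
;; answer, so that is where the error belongs:
(inexact->exact (floor +inf.0))   ; error: no exact representation
```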

> The bitwise operations operate on exact integers only. Should they
> live in the section on exact arithmetic? Should they carry ex
> prefixes? Or should they be extended to work on inexact integers as
> well?

I would say that having them operate on exact integers in the first
place is questionable; these are operations on bit vectors, not
operations on numbers, and their semantics require information (the
vector length) which is not expressed by the numbers.  To say that
they are defined on numbers is to confuse the number with a particular
representation.

I'd put them in a separate library for bitfield operations, and, if
possible, I'd have them operate on bit vectors, datums entirely
distinct from integers (but with their own external syntax and easy
conversion).  If that's too radical, I'd at least require someone to
specify the bit vector length s/he intends to use when that library of
bit vector operations on "integers" is loaded.
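A minimal sketch of what I mean, with illustrative names only (nothing
here comes from the SRFI), modeling a bit vector as a length paired
with an integer payload, so that operations like NOT are well-defined
without guessing a word size:

```scheme
;; A "bits" datum carries its own length.
(define (make-bits len n)
  (cons len (modulo n (expt 2 len))))
(define (bits-length b)   (car b))
(define (bits->integer b) (cdr b))

;; NOT needs the length to know how many bits to flip.
(define (bits-not b)
  (let ((len (bits-length b)))
    (make-bits len (- (expt 2 len) 1 (bits->integer b)))))

;; (bits->integer (bits-not (make-bits 8 #b00001111))) => 240
```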

				Bear