
Re: Exactness

This page is part of the web mail archives of SRFI 77 from before July 7th, 2015. The new archives for SRFI 77 contain all messages, not just those from before July 7th, 2015.




On Sat, 22 Oct 2005, Marcin 'Qrczak' Kowalczyk wrote:

>> At least, I can see no advantage to such a mandate.

> It standardizes the status quo, so programs which rely on inexact
> numbers being floats are not only working in practice but also
> formally portable.

Instead of saying "rely on inexact numbers being floats", can
you enumerate the qualities of floats that you find important?
Because I think the right thing is to discuss (and possibly
standardize on) the qualities, rather than a particular method
of achieving them. IEEE floats are just one way of achieving
those goals.

I can think of a number of different ways of implementing
numbers that have several of the "important" qualities of floats,
including finite computing time and known-size representation,
that are likely to be better matches for some jobs than floats
are, in terms of ability to return correct (exact) results more
often and represent more of the most common computable numbers.

As a general principle, I think it's a bad idea for a program to
*rely* on the result of any operation on exact numbers (except for
exact->inexact) being inexact.  That includes trig, hyperbolic
functions, roots, etc.  Such programs are inherently, and quite
properly, nonportable, and it isn't the standard's fault, because
what they're doing in making that assumption is an error.
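To make the point concrete, here is a hypothetical sketch (in Python
standing in for Scheme; `careful_sqrt` is an invented name, not any
existing API): an implementation is free to return an exact result for
a root of an exact number, so a program that assumes the result is
inexact is simply wrong.

```python
import math

def careful_sqrt(n):
    """Square root of a nonnegative integer that returns an exact
    integer when n is a perfect square, and an inexact float
    otherwise -- behaviour a standard-conforming implementation
    is permitted to exhibit."""
    r = math.isqrt(n)
    if r * r == n:
        return r          # exact result: the answer is representable
    return math.sqrt(n)   # inexact fallback

print(careful_sqrt(4))   # 2 (exact), not 2.0
print(careful_sqrt(2))   # 1.4142135623730951 (inexact)
```

A program that branches on the result being a float breaks on such an
implementation, and that is the program's error, not the standard's.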

> The concept of floating point was designed a long time ago. It's not
> some fancy new idea which might become fashionable for a while and
> then go away. Hardware is converging to it, not diverging.

I think that the problem here is that a lot of people (myself
included) do *NOT* wish to discourage implementors from doing
better (producing exact results more often) than the floating-
point format would allow.

I think that having more exactly representable numbers is good,
so I might make a numeric format where each word gave a digit,
base 2677114440 (the most divisible multiple of the product of
the first 9 primes below 2^32).  Then all rational numbers whose
denominators were products of powers of the first 9 primes
(including all decimal fractions, since their denominators are
products of 2 and 5) would be directly representable in a few
words, without approximation.  The standard allows that.  I
think that's good.
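A quick check of that radix, sketched in Python (the `digits` helper
is invented for illustration): the base factors as 2^3 * 3^2 * 5 * 7 *
11 * 13 * 17 * 19 * 23, so any fraction whose denominator is built
from those primes has a terminating expansion.

```python
from math import prod

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23]
BASE = 2677114440
assert BASE == 2**3 * 3**2 * prod(PRIMES[2:])   # 2^3 * 3^2 * 5*7*...*23
assert BASE < 2**32

def digits(num, den, base=BASE, limit=8):
    """Expand num/den (with 0 < num/den < 1) in the given base;
    return the digit list if the expansion terminates within
    `limit` digits, else None."""
    out = []
    while num and len(out) < limit:
        num *= base
        out.append(num // den)
        num %= den
    return out if num == 0 else None

# 1/10 terminates in a single digit, because 10 divides the base:
print(digits(1, 10))   # [267711444]
# 1/29 cannot terminate: 29 is not among the first 9 primes.
print(digits(1, 29))   # None
```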

If profiling showed that the vast majority of calculation time was
going into float multiplication and division, and I wanted more speed,
I might implement a simple logarithmic numeric type, so that all
multiplication and division could be coded directly as addition and
subtraction (and exponentiation as multiplication, etc).  The system
could autoconvert numbers in loops where the profiler and program code
analysis said it would help a lot and the conversion didn't violate
the program's desired precision.  The standard allows that too, and
that is also good (although it's kind of short on ways for the program
to document its precision requirements).
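A minimal sketch of such a logarithmic type (in Python; `LogNum` is
invented for illustration, and results are approximate because of the
log/exp round trips):

```python
import math

class LogNum:
    """Positive reals stored as their natural logarithm, so that
    multiplication and division become addition and subtraction
    (a hypothetical alternative inexact representation)."""
    def __init__(self, x=None, log=None):
        self.log = math.log(x) if log is None else log
    def __mul__(self, other):
        return LogNum(log=self.log + other.log)   # x*y  -> log x + log y
    def __truediv__(self, other):
        return LogNum(log=self.log - other.log)   # x/y  -> log x - log y
    def __pow__(self, k):
        return LogNum(log=self.log * k)           # x^k  -> k * log x
    def value(self):
        return math.exp(self.log)

a, b = LogNum(6.0), LogNum(3.0)
print((a * b).value())    # ~18.0
print((a / b).value())    # ~2.0
print((a ** 4).value())   # ~1296.0
```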

				Bear






>I'm open to libraries which provide other kinds of inexact numbers.
>Just not make them the default, because programs relying on them would
>be much less portable than programs relying on flonums.
>
>> I do not think that any particular combination should be mandated,
>> much less mandating all possible combinations. Those which do not
>> support them (or some combination of them) should be required to
>document their behavior and behave sensibly when they cannot
>> implement something.
>
>So programs can only give hints, they can't rely on availability of
>any particular properties? Well, in this area I prefer programming
>based on confidence than hope.
>
>While I dislike lots of things about Java, one thing was successful:
>lots of core operations have well-defined meaning and work the same
>way everywhere.
>
>It's bad to standardize on something which is a temporary limitation of
>current computers and is likely to change in future. Floating point
>doesn't look like this.
>
>> Trig functions most certainly can be computed exactly if you have a
>> clever enough implementation.  You aren't thinking correctly here,
>> because you are wedded to implementation issues.  It is perfectly
>> possible to implement exacts reals, and make the trig functions
>> compute exact answers.  (Have you ever played with maxima?)
>
>Scheme is not a CAS. Fancy representations are OK as long as they are
>not the default. Programming should be predictable. Is there any
>existing Scheme implementation with exact irrational numbers?
>
>Anyway, you can ignore the "(e.g. trigonometric)" part of my question.
>The rest of the question stands.
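For concreteness, the kind of "exact when possible" trigonometry the
quoted claim envisions can be sketched in Python (the table and the
`sin_of_pi_times` name are invented, exact only for a few special
angles; a real CAS goes much further):

```python
import math
from fractions import Fraction

# Exact sines for a few rational multiples of pi; anything else
# falls back to an inexact float.
EXACT_SIN = {
    Fraction(0): Fraction(0),
    Fraction(1, 6): Fraction(1, 2),   # sin(pi/6) = 1/2
    Fraction(1, 2): Fraction(1),      # sin(pi/2) = 1
}

def sin_of_pi_times(q):
    """sin(q * pi) for rational q, exact where a closed form is known."""
    q = Fraction(q)
    if q in EXACT_SIN:
        return EXACT_SIN[q]              # exact rational result
    return math.sin(float(q) * math.pi)  # inexact fallback

print(sin_of_pi_times(Fraction(1, 6)))   # 1/2, exactly
print(sin_of_pi_times(Fraction(1, 7)))   # inexact float
```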
>
>> Perhaps we could insist on neither, and give programmers the option of
>> specifying the precision, and then implementations which can comply
>> will do so, and the others will either signal an error or (at user
>> preference) do the best they can.
>
>Again programming based on hope...
>
>> How to print a number is an entirely separate question from its
>> representation.
>
>Agreed.
>
>> For example, on a system with exact reals, the system might know
>that the value of some computation is 2π; what should happen when
>> it's printed?
>
>Well, if it printed this as the result of (* 8 (atan 1)), then any
>other program reading its data would be surprised. I would prefer that
>a given Scheme implementation doesn't use non-standard output notation
>by default.
>
>>> 4. What should happen when the number being computed is getting too
>>>    large to be conveniently represented? Return "infinity"? Signal an
>>>    error? How an infinity is formatted into text?
>>
>> As Alan Watson pointed out, this is simply a normal case of memory
>> exhaustion and does not need to be discussed separately.
>
>It's a case of memory exhaustion only in the context of exact numbers.
>
>For inexact numbers you might argue that such interpretation is right
>too, but users will say that it's broken and switch to other
>implementations.
>
>>> 9. Should the implementation try to track inexactness of real and
>>>    imaginary part separately, or we don't care? If the imaginary part
>>>    comes very close to 0, should the result be indistinguishable from
>>>    a real number, or we care about being sure whether it's real or not?
>>
>> This is an ongoing difficulty for some people.  The answer is
>> certainly "no", it should not be tracked differently, but that's
>> because I think of complex numbers as numbers, not as pairs of
>> numbers.
>
>This philosophical standpoint doesn't explain the practical issue.
>It's a false dichotomy: I treat complex numbers as numbers which are
>isomorphic to pairs of real numbers. Both interpretations are true at
>the same time.
>
>Consider these numbers:
>   -5.0
>   -5.0+0i
>   -5.0-0i
>   -5.0+0.0i
>   -5.0-0.0i
>Which of them should be indistinguishable (eqv?)?
>
>FP experts would be upset if the last two were the same.
>(angle -5.0+0.0i) is pi, (angle -5.0-0.0i) is -pi.
>
>In my interpretation the first three are the same; the other two are
>different from the first three and from each other.
>
>In the strict reading of R5RS all are eqv?, even though some of them
>can be distinguishable by arithmetic operations. This is bad, eqv?
>should not unify distinguishable values (this doesn't apply to
>distinguishing with eq?); the error is in the definition of eqv? in
>terms of = and exactness. It breaks also for 0.0 vs. -0.0 and
>3s0 vs. 3L0.
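That signed-zero behaviour is easy to demonstrate (a Python
illustration, since the same IEEE semantics apply there):

```python
import cmath

# IEEE signed zeros make -5.0+0.0i and -5.0-0.0i behave differently:
z_plus  = complex(-5.0, 0.0)
z_minus = complex(-5.0, -0.0)

print(cmath.phase(z_plus))    # pi
print(cmath.phase(z_minus))   # -pi

# ...yet = (and hence an eqv? defined through =) cannot tell them
# apart, because 0.0 == -0.0:
print(z_plus == z_minus)      # True
```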
>
>>> 10. When we ask whether the number is even, and it happens to be
>>>    inexact, should the implementation try to answer hoping that
>>>    inexactness did not change the value, or we prefer an error to be
>>>    signalled?
>>
>> Perhaps we need two functions!
>
>I would be happy with making this an error. Currently the
>implementation is required to answer if the number looks like an
>integer (inexactly), even though whether it looks like an integer
>depends on the inaccuracy introduced during the computation.
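A Python illustration of that dependence: the same mathematical value,
computed along two paths with different rounding histories, does and
does not "look like an integer":

```python
a = 0.3 * 10        # one rounding step
b = 0.1 * 3 * 10    # two rounding steps; mathematically also 3

print(a, a.is_integer())   # 3.0 True
print(b, b.is_integer())   # 3.0000000000000004 False
```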
>
>It's bad when the validity of a procedure call depends on floating
>point accuracy. That's why IEEE fp produces infinities and NaNs by
>default; they don't always result from arguments truly outside the
>domain, but from arguments which have slipped outside after rounding
>to fp precision.
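The IEEE defaults in question, shown in Python: overflow saturates to
infinity instead of signalling an error, and indeterminate operations
yield NaN instead of stopping the program.

```python
import math

big = 1e308
overflow = big * 10            # exceeds the double range

print(overflow)                         # inf: overflow, no error
print(overflow - overflow)              # nan: inf - inf is indeterminate
print(math.isnan(overflow - overflow))  # True
```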
>
>
>Aubrey Jaffer <agj@xxxxxxxxxxxx> writes:
>
>> Were the finiteness predicates left out of srfi-77 accidentally?
>
>I don't know. It's clear however that the semantics of R5RS predicates
>is unclear in the presence of special fp numbers. Are +inf.0 and
>+nan.0 real? Are they allowed as real and imaginary parts of complex
>numbers? Since the latter should be true (operations on complex number
>can easily produce them), I think the former should be true as well.
>They should not be rational? though.
>
>The questions about domains are intrinsically poorly defined for
>inexact numbers. That's why thinking in terms of representations
>is often more practical.
>
>> But mixed exactness spawns many senseless combinations. What does
>> 1.23+5i represent?
>
>What is senseless about it? It's a number whose real part is not known
>exactly (the computation has determined that it's about 1.23) but its
>imaginary part is 5 for sure.
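One way such a mixed-exactness number could be represented (a Python
sketch only; `MixedComplex` is invented, not any existing API):

```python
from fractions import Fraction

class MixedComplex:
    """A complex number whose parts carry their own exactness:
    a Fraction part is exact, a float part is inexact."""
    def __init__(self, re, im):
        self.re, self.im = re, im
    def exact_im(self):
        return isinstance(self.im, Fraction)

# 1.23+5i: real part known only approximately, imaginary part exact.
z = MixedComplex(1.23, Fraction(5))
print(z.exact_im())             # True
print(isinstance(z.re, float))  # True
```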
>
>--
>   __("<         Marcin Kowalczyk
>   \__/       qrczak@xxxxxxxxxx
>    ^^     http://qrnik.knm.org.pl/~qrczak/
>
>