
This page is part of the web mail archives of SRFI 70 from before July 7th, 2015. The new archives for SRFI 70 contain all messages, not just those from before July 7th, 2015.

*To*: bear <bear@xxxxxxxxx>
*Subject*: Re: inexactness vs. exactness
*From*: Paul Schlie <schlie@xxxxxxxxxxx>
*Date*: Mon, 08 Aug 2005 15:27:47 -0400
*Cc*: Aubrey Jaffer <agj@xxxxxxxxxxxx>, <will@xxxxxxxxxxx>, <srfi-70@xxxxxxxxxxxxxxxxx>
*Delivered-to*: srfi-70@xxxxxxxxxxxxxxxxx
*In-reply-to*: <Pine.LNX.4.58.0508072301310.19258@xxxxxxxxxxxxxx>
*User-agent*: Microsoft-Entourage/11.1.0.040913

> From: bear <bear@xxxxxxxxx>
>
> On Sun, 7 Aug 2005, Paul Schlie wrote:
>
>> I pre-apologize if this is a dumb question, but it seems that exact
>> values are only interesting for integer ratios, which it seems
>> reasonable to expect to have some practical, limited representational
>> precision, which in turn implies the necessity of depicting
>> over/underflowed values (i.e. +/- infinity and their reciprocals).
>> So why is it perceived as necessary and/or appropriate to presume
>> it's the responsibility of the programmer to limit computed values to
>> reasonable-precision rationals, as opposed to more simply defining
>> that exacts are merely more exact than inexacts, to some definable
>> precision? Beyond the academic definition of "exact", pretending
>> that an arbitrary exact implementation supports infinitely exact
>> computations seems both naive and impractical.
>
> Well, exact answers are just that -- exact. That's all it means,
> really; the computer is telling you whether this number is *exactly*
> right, not making any statements about how it's represented. If it
> wasn't able to produce an exact answer due to the numeric
> representation being limited to some particular precision where the
> exact answer requires more precision to express, then usually I want
> at least an inexact answer.
>
> It is not unreasonable for an implementation to limit the dynamic
> range or precision of its numbers. If the implementation has, then
> rather than return an exact number outside the dynamic range, we must
> either report a violation of an implementation restriction, or return
> an inexact number within the dynamic range. If the implementation has
> not imposed such a limit, then inevitably the hardware does anyway.
> In that case we must either return an inexact number, or report an
> out-of-memory error.
>
> Typically the dynamic range or precision of inexact numbers is much
> less than that of exact numbers. But how much less, and in what ways,
> is an interesting question in dialect design. My own opinion is that
> for consistent mathematics, exact and inexact numbers should have
> similar dynamic ranges - meaning more limited "exact" representations
> than most implementors use, and greatly extended "inexact"
> representations of about the same size as the "exact" representations.
> But most people feel that "inexact" should be a codeword for
> "hardware-supported floating-point format," and for some calculations
> and applications, that's actually better.
>
> There's a lot of room for interpretation in this part of the standard.
> There's also a lot of room for extensions and variations in behavior.
> It's hard to say what is "The Right Thing," really - and it frequently
> depends on your particular application.

Thanks. My point/question was predominantly related to the observation
that there is often little need for truly "exact" values beyond
theoretical geometry and combinatorial mathematics, which themselves
often require only a determinable, finite precision; while there is
often a broader need for more precise, potentially "inexact", values
than double-precision floating-point implementations typically support.

So it seemed that, in practice, it may be more useful to define that
"exact" values need only be precise enough to exactly represent the
integers bounded by the dynamic range of the implementation's "inexact"
representation, together with their reciprocals (as you've implied).
That would provide both a likely reasonably efficient "inexact" (i.e.
double) representation and a likely reasonably precise corresponding
"exact" representation, seemingly satisfying a large majority of both
theoretical and computational mathematical applications, without placing
the burden on the programmer to manually limit the precision of "exact"
values that exceed the typical precision of an "inexact" representation.
Doing so would avoid the potentially catastrophic consequences of
specifying an "exact" calculation that exhausts the practical resources
available to the program, as well as the potentially equally
catastrophic consequences of limiting a calculation to the likely
substantially less precise "inexact" floating-point representation.

(The few applications that need greater precision than the native
"exact" implementation warrants could then implement it as an
application-specific library package on top of it. Just a thought.)
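[Editorial illustration, not part of the original thread.] The resource-exhaustion concern above is easy to demonstrate with exact rationals in any language. The sketch below uses Python's `fractions.Fraction` (which behaves like Scheme's exact rationals) to run Newton's iteration for the square root of 2: the exact result's denominator roughly doubles in size every step, while a float stays a fixed 64 bits at the cost of exactness. The function name and step count are illustrative choices, not anything from the thread.

```python
# Sketch: unbounded exact arithmetic vs. fixed-size inexact arithmetic.
# Newton's iteration x -> (x + 2/x) / 2 converges to sqrt(2), which is
# irrational, so no finite exact rational can ever be the true answer.
from fractions import Fraction

def newton_sqrt2(x, steps):
    # Works unchanged for Fraction (exact) and float (inexact) inputs.
    for _ in range(steps):
        x = (x + 2 / x) / 2
    return x

exact = newton_sqrt2(Fraction(1), 8)   # exact rational approximation
inexact = newton_sqrt2(1.0, 8)          # IEEE double approximation

# The exact denominator's bit length grows roughly exponentially with
# the number of steps (hundreds of bits after 8 steps), while the
# float converged to full double precision after only a few steps.
print(exact.denominator.bit_length())
print(inexact)
```

This is the trade-off the thread debates: the exact computation never lies about its result, but its representation grows without bound, so an implementation must eventually either cap the precision, coerce to inexact, or exhaust memory.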

**Follow-Ups**:
- **Re: inexactness vs. exactness**, *From:* bear

**References**:
- **Re: inexactness vs. exactness**, *From:* bear
