On Mon, Dec 31, 2007 at 09:14:27AM +0000, alexander baker wrote:
> Alexei,
>
> Could you provide an example to highlight the following statement?
>
> However, for arbitrary-precision calculation the accuracy can be
> arbitrarily high, so the interval could be arbitrarily big.
Consider \int_1^\infty dx/x^2. Of course, this integral is elementary (it equals 1), but for the sake of example let's compute it with some numeric quadrature method using IEEE double-precision floating-point (FP) numbers.

There are two kinds of errors: the errors of the numerical quadrature algorithm itself (cutoffs to replace the improper integral with a Riemann one, which in turn gets replaced by a finite sum, etc.) and the roundoff errors. The accuracy of the calculation cannot be better than the roundoff error. In this example that means the error is always greater than (or equal to) some \epsilon, the smallest FP number such that 1 + \epsilon \ne 1. Thus, it's fine to cut off the upper limit of integration at 1/\epsilon (since \int_{1/\epsilon}^\infty dx/x^2 = \epsilon), because that tail's contribution will be lost to roundoff errors anyway. And any request to calculate the integral with higher accuracy is just bogus.

Now, that \epsilon is basically a hardware- (platform-) dependent constant for IEEE doubles (and floats). Thus, the maximal cutoff does not change at run time. This is not the case for an arbitrary-precision calculation, since the size of the mantissa (and hence \epsilon) of the FP numbers is an (almost) arbitrary user-specified quantity. There an arbitrarily high target accuracy *does* make sense, so the interval of integration can be arbitrarily big.

Hope that helps,

Alexei

--
All science is either physics or stamp collecting.
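P.S. In case a concrete illustration helps, here is a minimal Python sketch of my own (sys.float_info.epsilon for the double-precision \epsilon, and the standard decimal module as a stand-in for whatever arbitrary-precision arithmetic one actually uses). It just shows that the cutoff 1/\epsilon is a fixed constant for IEEE doubles, but grows without bound as the user-requested precision goes up:

    import sys
    from decimal import Decimal, getcontext

    # IEEE double precision: epsilon is fixed by the hardware format.
    eps = sys.float_info.epsilon       # machine epsilon for doubles, 2**-52
    print(eps, 1.0 / eps)              # ~2.22e-16, cutoff ~4.5e15

    # The tail int_{1/eps}^infinity dx/x^2 equals eps, i.e. about one ulp of
    # the exact answer 1, so pushing the cutoff further (or asking for better
    # accuracy) buys nothing in double precision.

    # Arbitrary precision (decimal module as a stand-in): epsilon shrinks as
    # the user raises the working precision, so the sensible cutoff 1/epsilon
    # grows accordingly.
    for digits in (30, 100, 300):
        getcontext().prec = digits
        eps_ap = Decimal(10) ** (1 - digits)  # roughly the relative resolution
        print(digits, Decimal(1) / eps_ap)    # cutoff grows with precision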