Hello!

On Sat, Jan 19, 2008 at 11:34:24PM +0100, Richard B. Kreckel wrote:
> The second reason is that while performing the binary splitting, some intermediate integer results may become much larger than the result's precision warrants. As it turns out, that excess precision can simply be truncated by coercing the result into a cl_LF of appropriate length. Basically, this compresses the extra digits into the floating-point exponent.
Can you prove this won't result in an additional roundoff error?
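For reference, here is a minimal sketch of the kind of coercion described above, written against CLN's public cl_float()/float_format() interface rather than whatever internal helper the binary-splitting code actually calls; it only shows an oversized cl_I being squeezed into a float of bounded mantissa length:

    #include <iostream>
    #include <cln/integer.h>
    #include <cln/float.h>

    int main()
    {
        using namespace cln;

        // An oversized intermediate integer, as can arise during binary splitting.
        cl_I big = factorial(10000);

        // Coerce it into a long float carrying only ~1000 decimal digits of
        // mantissa; the remaining magnitude is absorbed by the exponent.
        float_format_t prec = float_format(1000);
        cl_F truncated = cl_float(big, prec);

        std::cout << "bits in the integer: " << integer_length(big) << std::endl;
        std::cout << "mantissa bits kept:  " << float_digits(truncated) << std::endl;
        std::cout << "binary exponent:     " << float_exponent(truncated) << std::endl;
    }

Comparing integer_length() of the operand with float_digits() of the coerced value makes the saving visible; whether the rounding performed by cl_float() stays below the target precision is exactly the question above.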
> With some rational series, the savings are dramatic. As an extreme example, attached is a picture of the memory footprint when computing one million decimal digits of Euler's gamma. The red curve corresponds to CLN 1.1.x and the blue one to CLN 1.2.0. Here, making the operands smaller even saves computing time.
What about the accuracy of the result?

Best regards,
Alexei

--
All science is either physics or stamp collecting.