Dear Alexei,

Alexei Sheplyakov wrote:
The second reason is that while performing the binary splitting, some intermediate integer results may become much larger than the result's precision warrants. As it turns out, that excess precision can simply be truncated by coercing the result into a cl_LF of appropriate length. Basically, this compresses the extra digits into the floating-point exponent.
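To make that concrete, here is a minimal standalone sketch of the coercion itself (not CLN's actual binary-splitting code); the integer value and the target precision are made up purely for illustration:

    #include <iostream>
    #include <cln/cln.h>

    using namespace cln;

    int main()
    {
        // A huge exact integer, standing in for an intermediate result of
        // the binary splitting (the value itself is arbitrary).
        cl_I huge = ash(cl_I(1), 400000) + 12345;   // 2^400000 + 12345, ~120000 digits

        // Suppose the final result only warrants about 1000 decimal digits.
        float_format_t target = float_format(1000);

        // Coerce to a float of that precision (a cl_LF internally at this
        // size).  The excess digits are dropped; their magnitude is kept
        // in the floating-point exponent.
        cl_F truncated = cl_float(huge, target);

        std::cout << truncated << std::endl;
        return 0;
    }

The point is that `truncated` occupies only about 1000 digits' worth of memory, while still carrying the full magnitude of `huge` in its exponent.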
Can you prove this won't result in an additional roundoff error?
No. This is a good point and I've been worrying about it, too. Certainly, chopping off precision early may poison the final result, although not by much, because it only occurs on the top few levels of the binary splitting tree! Still, for this reason I have not enabled the truncation by default in all binary splitting routines. It is optional and enabled only for certain rational series (Pi, Euler's constant, Catalan's constant, ...) where I paid special attention to the result's accuracy case by case.
With some rational series, the savings are dramatic. As an extreme example, attached is a picture of the memory footprint when computing one million decimal digits of Euler's gamma. The red curve corresponds to CLN 1.1.x and the blue one to CLN 1.2.0. Here, making the operands smaller even saves computing time.
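For reference, this is the computation the figure refers to; a sketch of how to trigger it from user code is below. It requests far fewer digits than one million so that it finishes quickly (the million-digit run takes a long time and a lot of memory):

    #include <iostream>
    #include <cln/cln.h>

    using namespace cln;

    int main()
    {
        // Euler's constant gamma at 10000 decimal digits; the benchmark in
        // the attached figure used one million.
        float_format_t prec = float_format(10000);

        // CLN evaluates this via binary splitting of a rational series.
        cl_F gamma = eulerconst(prec);

        std::cout << gamma << std::endl;
        return 0;
    }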
What about the accuracy of the result?
As I mentioned: I've tried thousands of operand sizes, ranging from thousands to billions of decimal digits, and never seen a difference in the computed result. It would have been easy to compensate, for instance by introducing additional guard digits, but that turned out to be completely unnecessary. What is better: proof or demonstration? :-) Well, I know it may not be completely satisfactory. If someone has a mathematical explanation of why this is so, please do let me know.

Cheers
  -richy.
--
Richard B. Kreckel
<http://www.ginac.de/~kreckel/>