Dear All,

Please find attached an arbitrary-precision implementation of AiryAi, AiryBi and their derivatives. The implementation is based on the following paper:

A. Gil, J. Segura, N.M. Temme, ``Computing complex Airy functions by numerical quadrature'', Report MAS-R0117, 2001,

which is also attached. This work was conducted as part of the European FP6 project "Photonic Integrated devices in Activated Amorphous and Crystalline Oxides" (PI-Oxide), http://pi-oxide.el.utwente.nl/, which granted permission to publish the code in the public domain.

We would be grateful if you would consider the code for inclusion in a future release.

Best regards,
Jan Bos

PhoeniX
Phone: +31 53 4836460
PO Box 545, 7500 AM Enschede, The Netherlands
Hello! On Sun, Dec 16, 2007 at 03:15:20PM +0100, Jan Bos wrote:
Please find attached an arbitrary-precision implementation of AiryAi, AiryBi and their derivatives.
First of all, thanks a lot for your contribution! I've got some questions and remarks.

First, I doubt that numerical integration is a good method for arbitrary-precision calculation. Although the paper claims the method can be used to compute Airy functions "to any desired accuracy... by modifying the step size", an actual analysis of the *roundoff* errors is absent (I'll re-read the paper once more, maybe I've missed it). I'd expect smaller step sizes to increase the roundoff errors in an uncontrollable manner. Something like an (asymptotic) series expansion would be a better method. That said, it's just my gut feeling; I'm not really an expert.

[skipped]
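For reference, the standard expansions meant here would be (Abramowitz & Stegun 10.4) the Maclaurin series

  Ai(z) = c_1 f(z) - c_2 g(z),
  f(z) = \sum_{k>=0} 3^k (1/3)_k z^{3k}/(3k)!,
  g(z) = \sum_{k>=0} 3^k (2/3)_k z^{3k+1}/(3k+1)!,
  c_1 = Ai(0) = 3^{-2/3}/\Gamma(2/3),  c_2 = -Ai'(0) = 3^{-1/3}/\Gamma(1/3)

for small |z|, and, for large |z| with |ph z| < \pi, the asymptotic series

  Ai(z) ~ \frac{e^{-\zeta}}{2\sqrt{\pi}\, z^{1/4}} \sum_{k>=0} (-1)^k c_k \zeta^{-k},
  \zeta = (2/3) z^{3/2},  c_0 = 1,  c_k = \frac{(6k-1)(6k-3)(6k-5)}{216\,k\,(2k-1)} c_{k-1}.

In both cases the truncation order, not a quadrature step size, controls the accuracy.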
static ex AiryAi_evalf(const ex& z)
A few minor coding-style comments:
- please avoid using caps in function names
- FunctionAndVariableNamesLikeThisOneReallyHurtEyesAndAreNotReallyWelcome
    if (is_a<numeric>(z)) {
        numeric zn = ex_to<numeric>(z);
        // Ai(z) -> conjugate(Ai(conjugate(z))) for Im(z) < 0
        if (imag(zn).is_negative()) {
            numeric ret = ex_to<numeric>(AiryAi(conjugate(zn)).evalf());
            return conjugate(ret);
        }
        // for ph(z) in <2Pi/3, Pi] use
        // [1] 10.4.7  Ai(z) = -( zp*Ai(zp*z) + zm*Ai(zm*z) ),
        // zm = exp(-2/3*Pi*I), zp = exp(2/3*Pi*I) so
        // zm*z and conjugate(zp*z) are in [0, 2Pi/3]
        if (!AiryInPrincipleSector(zn)) {
            numeric zm = ex_to<numeric>(AiryZm.evalf());
            numeric zp = ex_to<numeric>(AiryZp.evalf());
            return (-(zm*AiryAi(zm*zn) + zp*AiryAi(zp*zn))).evalf();
        }
Using GiNaC::numeric for numeric calculations incurs significant overhead (yes, I know the irony). In fact, GiNaC::numeric is not even intended for numerically intensive code. I suggest using the proper CLN types (i.e. cl_N for complex floating-point numbers) instead (that is, if the numerical integration method is really suitable for arbitrary-precision calculation at all).
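As an illustration only, here is a minimal sketch (not the submitted code; the integrand and names are made up, and it assumes GiNaC::numeric exposes to_cl_N() and a constructor taking a cl_N) of keeping the conversions at the boundary while the inner loop works on plain CLN types:

#include <cln/cln.h>
#include <ginac/ginac.h>

// hypothetical helper: sum a trapezoid-like series entirely in CLN types
GiNaC::numeric trapezoid_sum(const GiNaC::numeric& z0, int n)
{
	using namespace cln;
	const cl_N z = z0.to_cl_N();                          // GiNaC -> CLN once
	const float_format_t prec = float_format(long(GiNaC::Digits));
	const cl_F h = cl_float(cl_R(1), prec) / n;           // step size at current precision
	cl_N acc = 0;
	for (int j = 1; j < n; ++j)                           // plain int counter
		acc = acc + exp(-z*(j*h));                    // hypothetical integrand
	return GiNaC::numeric(acc*h);                         // CLN -> GiNaC once
}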
        // ph(z) in [0, 2Pi/3]: perform contour integration
        AiryContour* pcntr = createAiryContour(zn);
Pointers are evil. What is the reason for explicit allocation/deallocation?
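For illustration (a C++11 sketch with stand-in types, not the submitted code), letting a scope-bound smart pointer, or simply a stack object, own the contour removes the need for any explicit delete:

#include <ginac/ginac.h>
#include <memory>

struct AiryContour { /* stand-in for the contour type from the patch */ };

// hypothetical factory mirroring createAiryContour(); ownership is explicit
static std::unique_ptr<AiryContour> createAiryContour(const GiNaC::numeric&)
{
	return std::unique_ptr<AiryContour>(new AiryContour);
}

static void integrate_along_contour(const GiNaC::numeric& zn)
{
	std::unique_ptr<AiryContour> cntr = createAiryContour(zn);
	// ... use *cntr ...
}   // contour released automatically when cntr goes out of scope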
template <typename FUNC>
numeric ModifiedTrapezoidalRule(const FUNC& f, numeric h, numeric d)
GiNaC already has a numerical integration routine. Why re-invent the wheel?

@ developers: BTW, the `integral' class sounds like re-inventing the wheel too. There are a *lot* of good numerical libraries out there, and seamless integration with existing C/C++ code is one of the main reasons why GiNaC exists.
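For reference, a usage sketch of the built-in class (assuming the argument order integral(variable, lower, upper, integrand) and that evalf() triggers the numerical quadrature):

#include <ginac/ginac.h>
#include <iostream>
using namespace GiNaC;

int main()
{
	symbol x("x");
	// \int_0^1 exp(-x^3) dx, evaluated by the integral class's own quadrature
	ex I = integral(x, 0, 1, exp(-pow(x, 3)));
	std::cout << I.evalf() << std::endl;
	return 0;
}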
numeric answ=f(d);
    // small number used to prevent div by 0
    numeric TINY = ex_to<numeric>(pow(ex(10), ex(-(Digits+10))));
Please don't paper over the errors. Either handle singular points properly, or throw an exception if you can't (or don't want to) do it.
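A sketch of what I mean, with a made-up helper name (assuming GiNaC's pole_error is appropriate here; otherwise a plain std::domain_error would do):

#include <ginac/ginac.h>
using namespace GiNaC;

// refuse to divide through a vanishing denominator instead of masking it with TINY
static numeric checked_inverse(const numeric& den)
{
	if (den.is_zero())
		throw pole_error("AiryAi_evalf(): integrand is singular", 1);
	return den.inverse();
}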
    // approximation for smallest number so that 1+tol > 1
    numeric tol = ex_to<numeric>(3*pow(ex(10), ex(-(Digits-2))));
Not really.
numeric j0=1;
You'd better use an appropriate integer type for counters. Incrementing it is a single CPU instruction, while incrementing a GiNaC::numeric is *much* more expensive.

Best regards,
Alexei

--
All science is either physics or stamp collecting.
Alexei Sheplyakov wrote:
    // approximation for smallest number so that 1+tol > 1
    numeric tol = ex_to<numeric>(3*pow(ex(10), ex(-(Digits-2))));
Not really.
Let me just add that plain CLN does come with a better function for that. From the manual:

`cl_F float_epsilon (float_format_t f)'
     Returns the smallest floating point number e > 0 such that 1+e != 1.

-richy.

--
Richard B. Kreckel <http://www.ginac.de/~kreckel/>
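For illustration, a minimal usage sketch of that CLN function:

#include <cln/cln.h>
#include <iostream>

int main()
{
	// smallest e > 0 with 1 + e != 1 in a format of at least 100 decimal digits
	cln::float_format_t fmt = cln::float_format(100);
	cln::cl_F eps = cln::float_epsilon(fmt);
	std::cout << eps << std::endl;
	return 0;
}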
Alexei,

Could you provide an example to highlight the following statement?
However, for arbitrary-precision calculation the accuracy can be arbitrarily high, so the interval could be arbitrarily big.
Regards,

Alex Baker
--
Mobile: 07788 872118
Blog: www.alexfb.com
On Mon, Dec 31, 2007 at 09:14:27AM +0000, alexander baker wrote:
Alexei,
Could you provide an example to highlight the following statement?
However, for arbitrary-precision calculation the accuracy can be arbitrarily high, so the interval could be arbitrarily big.
Consider \int_1^\infty dx/x^2. Of course, this integral is elementary (it equals 1), but for the sake of example let's compute it with some numeric quadrature method using IEEE double precision floating point (FP) numbers.

There are two kinds of errors: the errors of the numerical quadrature algorithm itself (cutoffs to replace the improper integral with a Riemann one, which in turn gets replaced by a finite sum, etc.) and the roundoff errors. The accuracy of the calculation cannot be better than the roundoff error. In this example that means the error is always greater than (or equal to) some \epsilon, the smallest FP number such that 1 + \epsilon \ne 1. Thus, it's fine to cut off the upper limit of integration at 1/\epsilon (since \int_{1/\epsilon}^\infty dx/x^2 = \epsilon), as its contribution will be lost to roundoff errors anyway. And any request to calculate the integral with higher accuracy is just bogus.

Now, that \epsilon is basically a hardware- (platform-) dependent constant for IEEE doubles (and floats). Thus, the maximal cutoff does not change at runtime. This is not the case for arbitrary-precision calculation, since the size of the mantissa (and hence \epsilon) of the FP numbers is an (almost) arbitrary user-specified quantity. Now an arbitrarily high target accuracy *does* make sense, so the interval of integration can be arbitrarily big.

Hope that helps,
Alexei

--
All science is either physics or stamp collecting.
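To attach some numbers to the example (an illustration added here, not from the thread): for IEEE doubles \epsilon = 2^{-52} ~ 2.2*10^{-16}, so the cutoff 1/\epsilon ~ 4.5*10^{15} is a fixed constant and \int_{1/\epsilon}^\infty dx/x^2 ~ 2.2*10^{-16} is already below roundoff. With Digits = 100, however, \epsilon ~ 10^{-100} and the cutoff has to grow to ~ 10^{100}; the number of quadrature nodes, and the roundoff accumulated over them, grows along with it.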
participants (4)
- alexander baker
- Alexei Sheplyakov
- Jan Bos
- Richard B. Kreckel