I have now improved the package of my polynomial and rational function utilities on top of GiNaC (0.6.4) a little. It now provides gcd normalization, factor (square-free and rational roots), partfrac, and integrate (assuming factor works on the denominator). For example, integrate '1/(x^4-1)^5' returns

-1155/8192*log(1+x)-1155/4096*atan(x)+1155/8192*log(-1+x)+1/2048*(-893*x-1375*x^9+1755*x^5+385*x^13)*(-1+x^4)^(-4)
# Time for (integrate): 0.04

You can retrieve and test it at ftp://fourier.ujf-grenoble.fr/pub/hp48/giac.tgz

Bye,
Bernard Parisse
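The "rational roots" part of the factor step mentioned above can be sketched with the classical rational-root theorem: any rational root p/q of an integer polynomial, written in lowest terms, has p dividing the constant term and q dividing the leading coefficient. This is only an illustrative Python sketch (the names and code are the editor's, not Bernard's actual C++ code):

```python
# Sketch: rational roots of an integer polynomial via the rational-root
# theorem.  coeffs[i] is the integer coefficient of x**i, leading one nonzero.
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of sum(coeffs[i] * x**i)."""
    if coeffs[0] == 0:
        # x divides the polynomial: record the root 0 and strip the factor x
        return [Fraction(0)] + rational_roots(coeffs[1:])
    candidates = {Fraction(sign * p, q)
                  for p in divisors(coeffs[0])
                  for q in divisors(coeffs[-1])
                  for sign in (1, -1)}
    return sorted(r for r in candidates
                  if sum(c * r**i for i, c in enumerate(coeffs)) == 0)

# x^4 - 1: the rational roots are -1 and 1; the cofactor x^2 + 1 has none.
print(rational_roots([-1, 0, 0, 0, 1]))  # → [Fraction(-1, 1), Fraction(1, 1)]
```

Each rational root x = p/q found this way yields a linear factor q*x - p that can be divided out before any harder factoring is attempted.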
Hi Bernard,

On Fri, 11 Aug 2000, Parisse Bernard wrote:
> I have now improved the package of my polynomial and rational function utilities on top of GiNaC (0.6.4) a little. It now provides gcd normalization, factor (square-free and rational roots), partfrac, and integrate (assuming factor works on the denominator). For example, integrate '1/(x^4-1)^5' returns
>
> -1155/8192*log(1+x)-1155/4096*atan(x)+1155/8192*log(-1+x)+1/2048*(-893*x-1375*x^9+1755*x^5+385*x^13)*(-1+x^4)^(-4)
> # Time for (integrate): 0.04
Thanks for your contribution. We are quite interested in this. Just a remark: do you know about Victor Shoup's library NTL, which does highly efficient factorization of univariate polynomials? It was recently relicensed under the GPL, so it is possible to merge code with GiNaC. It is available from <http://www.shoup.net/ntl/>.

Your integrate is impressive, but it depends on factor() working on the denominator. This is IMHO not very satisfactory until factor is guaranteed to always work efficiently. (BTW: over what field? I hope no algebraic extensions are needed?)

Are you familiar with Horowitz's algorithm for integrating rational functions [1]? AFAICT it reduces the whole integration to linear algebra. This looks much more attractive to me, since all the facilities needed should already be present in GiNaC.

Regards
-richy.

[1] Ellis Horowitz: Algorithms for Partial Fraction Decomposition and Rational Function Integration. Proc. 2nd Symposium on Symbolic and Algebraic Manipulation, ACM Inc, 1971, pp. 441-457.

--
Richard B. Kreckel <Richard.Kreckel@Uni-Mainz.DE> <http://wwwthep.physik.uni-mainz.de/~kreckel/>
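The linear-algebra character of the Horowitz/Ostrogradsky reduction from [1] can be sketched as follows: with D = gcd(Q, Q') and E = Q/D, the ansatz integral(P/Q) = A/D + integral(B/E), with deg A < deg D and deg B < deg E, differentiates to P = A'·E − A·(E·D'/D) + B·D, which is linear in the unknown coefficients of A and B. This is an illustrative Python sketch with the editor's own names, not GiNaC code; polynomials are lists of Fractions, lowest degree first:

```python
# Sketch of the Horowitz/Ostrogradsky reduction: build and solve the linear
# system for the coefficients of A (rational part) and B (remaining integrand).
from fractions import Fraction

def pnorm(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def padd(a, b):
    n = max(len(a), len(b))
    return pnorm([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                  for i in range(n)])

def pmul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return pnorm(r)

def pderiv(a):
    return pnorm([i * c for i, c in enumerate(a)][1:] or [Fraction(0)])

def pdivmod(a, b):
    r, b = list(a), pnorm(b)
    q = [Fraction(0)] * max(1, len(r) - len(b) + 1)
    for k in range(len(r) - len(b), -1, -1):
        c = r[k + len(b) - 1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            r[k + i] -= c * bc
    return pnorm(q), pnorm(r)

def pgcd(a, b):
    a, b = pnorm(a), pnorm(b)
    while b != [0]:
        a, b = b, pdivmod(a, b)[1]
    return [c / a[-1] for c in a]          # monic

def solve(M, y):                           # Gauss-Jordan over the rationals
    n = len(M)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv], y[col], y[piv] = M[piv], M[col], y[piv], y[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * z for x, z in zip(M[r], M[col])]
                y[r] -= f * y[col]
    return [y[i] / M[i][i] for i in range(n)]

def horowitz(P, Q):
    D = pgcd(Q, pderiv(Q))
    E = pdivmod(Q, D)[0]
    S = pdivmod(pmul(E, pderiv(D)), D)[0]  # D divides E*D', so this is exact
    m, n = len(D) - 1, len(E) - 1          # unknown coefficients of A resp. B
    cols = []
    for i in range(m):                     # column of basis monomial x^i in A
        e = [Fraction(0)] * i + [Fraction(1)]
        cols.append(padd(pmul(pderiv(e), E), [-c for c in pmul(e, S)]))
    for j in range(n):                     # column of basis monomial x^j in B
        e = [Fraction(0)] * j + [Fraction(1)]
        cols.append(pmul(e, D))
    N = m + n                              # = deg Q equations
    M = [[cols[c][r] if r < len(cols[c]) else Fraction(0) for c in range(N)]
         for r in range(N)]
    y = [Fraction(P[r]) if r < len(P) else Fraction(0) for r in range(N)]
    sol = solve(M, y)
    return sol[:m], D, sol[m:], E          # integral(P/Q) = A/D + integral(B/E)

# integral(1/(x^2-1)^2) = (-x/2)/(x^2-1) + integral((-1/2)/(x^2-1))
A, D, B, E = horowitz([Fraction(1)], [Fraction(c) for c in (1, 0, -2, 0, 1)])
print(A, B)
```

The one linear solve of size deg Q is exactly the "large linear system" discussed later in this thread, as opposed to Hermite's stepwise reduction.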
"Richard B. Kreckel" a écrit :
> Hi Bernard,
>
> On Fri, 11 Aug 2000, Parisse Bernard wrote:
>
>> I have now improved the package of my polynomial and rational function utilities on top of GiNaC (0.6.4) a little. It now provides gcd normalization, factor (square-free and rational roots), partfrac, and integrate (assuming factor works on the denominator). For example, integrate '1/(x^4-1)^5' returns
>>
>> -1155/8192*log(1+x)-1155/4096*atan(x)+1155/8192*log(-1+x)+1/2048*(-893*x-1375*x^9+1755*x^5+385*x^13)*(-1+x^4)^(-4)
>> # Time for (integrate): 0.04
> Thanks for your contribution. We are quite interested in this. Just a remark: do you know about Victor Shoup's library NTL, which does highly efficient factorization of univariate polynomials? It was recently relicensed under the GPL, so it is possible to merge code with GiNaC. It is available from <http://www.shoup.net/ntl/>.
>
> Your integrate is impressive, but it depends on factor() working on the denominator. This is IMHO not very satisfactory until factor is guaranteed to always work efficiently. (BTW: over what field? I hope no algebraic extensions are needed?)
Of course, and I'm currently working on this. I did not know that NTL was under the GPL now; I will have a look. But it might be better to rewrite the remaining part of the code on my side, since I do not want to depend on a huge library.
> Are you familiar with Horowitz's algorithm for integrating rational functions [1]? AFAICT it reduces the whole integration to linear algebra. This looks much more attractive to me, since all the facilities needed should already be present in GiNaC.
Yes, but I prefer Hermite's method; I believe it "factorizes", in some sense, the linear algebra operations of Horowitz's method.
On Sat, 19 Aug 2000, Parisse Bernard wrote: [...]
> Yes, but I prefer Hermite's method; I believe it "factorizes", in some sense, the linear algebra operations of Horowitz's method.
Sorry, I do not understand that statement. Could you elaborate a bit, please?

Regards
-richy.
"Richard B. Kreckel" a écrit :
> On Sat, 19 Aug 2000, Parisse Bernard wrote: [...]
>> Yes, but I prefer Hermite's method; I believe it "factorizes", in some sense, the linear algebra operations of Horowitz's method.
> Sorry, I do not understand that statement. Could you elaborate a bit, please?
I mean that instead of solving one large linear system, you take k small steps (where k is the exponent of the denominator), each step using the *same* Bézout identity to reduce the exponent of the denominator by one. One big advantage is that you do not need to set up the linear system at all. I don't know whether one method is better than the other; I would bet they have the same complexity (the constants might differ depending on the implementation...)
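The Hermite reduction step described above can be sketched concretely: for squarefree D, extended Euclid gives the Bézout identity s·D + t·D' = 1 once, and each step then rewrites integral(A/D^k) = −(A·t)/((k−1)·D^(k−1)) + integral((A·s + (A·t)'/(k−1)) / D^(k−1)), lowering the exponent of the denominator by one. This is an illustrative Python sketch with the editor's own names; polynomials are lists of Fractions, lowest degree first:

```python
# Sketch of one Hermite reduction step, reusing the same Bezout identity
# s*D + t*D' = 1 at every step (here computed once per call for simplicity).
from fractions import Fraction

def pnorm(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def padd(a, b):
    n = max(len(a), len(b))
    return pnorm([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                  for i in range(n)])

def pmul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return pnorm(r)

def pderiv(a):
    return pnorm([i * c for i, c in enumerate(a)][1:] or [Fraction(0)])

def pdivmod(a, b):
    r, b = list(a), pnorm(b)
    q = [Fraction(0)] * max(1, len(r) - len(b) + 1)
    for k in range(len(r) - len(b), -1, -1):
        c = r[k + len(b) - 1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            r[k + i] -= c * bc
    return pnorm(q), pnorm(r)

def bezout(a, b):
    """s, t with s*a + t*b = 1, for coprime a and b (extended Euclid)."""
    r0, r1 = pnorm(a), pnorm(b)
    s0, s1 = [Fraction(1)], [Fraction(0)]
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while r1 != [0]:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, padd(s0, [-c for c in pmul(q, s1)])
        t0, t1 = t1, padd(t0, [-c for c in pmul(q, t1)])
    lc = r0[-1]                            # r0 is a nonzero constant: scale to 1
    return [c / lc for c in s0], [c / lc for c in t0]

def hermite_step(A, D, k):
    """integral(A/D^k) -> (rational-part numerator, new integrand numerator),
    both over D^(k-1); D squarefree, k >= 2."""
    s, t = bezout(D, pderiv(D))            # s*D + t*D' = 1
    At = pmul(A, t)
    rational_num = [-c / (k - 1) for c in At]
    new_num = padd(pmul(A, s), [c / (k - 1) for c in pderiv(At)])
    return rational_num, new_num

# integral(1/(x^2-1)^2): one step gives (-x/2)/(x^2-1) plus integral((-1/2)/(x^2-1))
D = [Fraction(-1), Fraction(0), Fraction(1)]       # x^2 - 1 (squarefree)
rat, rest = hermite_step([Fraction(1)], D, 2)
print(rat, rest)
```

Note that on this example the result agrees term by term with what the one-shot Horowitz system yields; the difference is purely in how the coefficients are computed.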
> Thanks for your contribution. We are quite interested in this. Just a remark: do you know about Victor Shoup's library NTL, which does highly efficient factorization of univariate polynomials? It was recently relicensed under the GPL, so it is possible to merge code with GiNaC. It is available from <http://www.shoup.net/ntl/>.
OK, I have had a look now, and there are major differences between NTL and the polynomial library I'm working on:

1/ NTL handles *univariate* polynomials, not multivariate polynomials.
2/ NTL's polynomial representation is *dense*, not sparse.

Therefore I don't think that NTL would be well suited for, e.g., Groebner basis calculations. On the other hand, it will be more efficient for univariate factorization than any code I could write with multivariate polynomials, but that should only be by a constant factor (for generic input; since I do not plan to add heuristics to distinct-degree factorization + Cantor-Zassenhaus right now, some inputs might be much faster with NTL). Moreover, considering that the compiled NTL library is 2.6M, I prefer to finish my own factorization code: one of my objectives is to be able to run a free CAS on one of these Linux handheld PCs one day...

Anyway, thanks for the reference. It's always nice to have code and test inputs, especially since MuPAD's factorization code is not always well commented, is not free, and they do not publish their tests as far as I know.

Bernard
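The square-free stage that precedes distinct-degree / Cantor-Zassenhaus splitting can be sketched with Yun's algorithm over Q. Again this is only an illustrative Python sketch with the editor's own names, not the giac code; polynomials are lists of Fractions, lowest degree first:

```python
# Sketch of Yun's square-free factorization: f = prod p_i^i with the p_i
# squarefree and pairwise coprime, obtained from gcds of f with f'.
from fractions import Fraction

def pnorm(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def psub(a, b):
    n = max(len(a), len(b))
    return pnorm([(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
                  for i in range(n)])

def pderiv(a):
    return pnorm([i * c for i, c in enumerate(a)][1:] or [Fraction(0)])

def pdivmod(a, b):
    r, b = list(a), pnorm(b)
    q = [Fraction(0)] * max(1, len(r) - len(b) + 1)
    for k in range(len(r) - len(b), -1, -1):
        c = r[k + len(b) - 1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            r[k + i] -= c * bc
    return pnorm(q), pnorm(r)

def pgcd(a, b):
    a, b = pnorm(a), pnorm(b)
    while b != [0]:
        a, b = b, pdivmod(a, b)[1]
    return [c / a[-1] for c in a]          # monic

def squarefree(f):
    """Yun's algorithm: returns [(p_i, i)] with monic(f) = prod p_i^i."""
    lc = Fraction(f[-1])
    f = [Fraction(c) / lc for c in f]
    fp = pderiv(f)
    g = pgcd(f, fp)
    c = pdivmod(f, g)[0]
    d = psub(pdivmod(fp, g)[0], pderiv(c))
    out, i = [], 1
    while len(c) > 1:
        p = pgcd(c, d)
        if len(p) > 1:                     # nontrivial square-free factor
            out.append((p, i))
        c = pdivmod(c, p)[0]
        d = psub(pdivmod(d, p)[0], pderiv(c))
        i += 1
    return out

# (x+1) * (x-1)^2 = x^3 - x^2 - x + 1
print(squarefree([1, -1, -1, 1]))
```

The output lists each square-free factor with its multiplicity; the harder irreducible splitting then only has to deal with the square-free pieces.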
Hi,

Parisse Bernard wrote:
> [...] Moreover, considering that the compiled NTL library is 2.6M, I prefer to finish my own factorization code: one of my objectives is to be able to run a free CAS on one of these Linux handheld PCs one day...
I am a bit curious as to how you plan to use GiNaC as a CAS for PDAs. GiNaC is meant to be used on a workstation with a C++ compiler, something I'm sure you will have a hard time getting to run on a PDA, given the memory limits. That would also require a text editor, etc. PDAs are notoriously hard to use for programming purposes, given that they have a small keyboard (if they have one at all) and a small screen. A completely different experience from working on a workstation, I can assure you.

In fact, to make it useful on a PDA, the most important thing is actually a good user interface. And you will have a hard time designing a user interface that differs from the calculator metaphor. A calculator is not well suited for more elaborate calculations, where you actually have to write a little program to solve the problem.

I have a little experience trying this. I ported yacas (a GPL'ed CAS) to the Psion series of organisers. But it is too clumsy to use right now (a console on a PDA ;-) ). The dream is of course to be able to do some math while away from your computer (perhaps sitting outside in a park in the sun somewhere ;-) ). The experience disappointed me a bit. You need a good user interface first.

Ayal
> I am a bit curious as to how you plan to use GiNaC as a CAS for PDAs. GiNaC is meant to be used on a workstation with a C++ compiler, something I'm sure you will have a hard time getting to run on a PDA, given the memory limits. That would also require a text editor, etc. PDAs are notoriously hard to use for programming purposes, given that they have a small keyboard (if they have one at all) and a small screen. A completely different experience from working on a workstation, I can assure you.
>
> In fact, to make it useful on a PDA, the most important thing is actually a good user interface. And you will have a hard time designing a user interface that differs from the calculator metaphor. A calculator is not well suited for more elaborate calculations, where you actually have to write a little program to solve the problem.
I disagree with this. I'm the main programmer of the CAS of the HP49G and HP40G calculators, and I believe that a calculator is well suited for little programs; it's not much different from writing a small program in a CAS like Maple or MuPAD. The main problem with graphing calculators (especially the HP4xG) is processor speed (about 1000 times slower than my laptop), not the programming environment (e.g. you have a debugger, you can set breakpoints and execute your program step by step, and view the current stack). If we want to replace proprietary CAS with GPL'ed CAS one day, we must start by providing a good solution for the educational market and for non-mathematician users.
> I have a little experience trying this. I ported yacas (a GPL'ed CAS) to the Psion series of organisers. But it is too clumsy to use right now (a console on a PDA ;-) ). The dream is of course to be able to do some math while away from your computer (perhaps sitting outside in a park in the sun somewhere ;-) ). The experience disappointed me a bit. You need a good user interface first.
Well, if we provide a good library, I'm sure we will find hardware vendors ready to develop interfaces for the educational market. We don't need the interface first; we need to meet the specifications of the education market.
On Sat, 19 Aug 2000, Parisse Bernard <parisse@mozart.ujf-grenoble.fr> wrote:
> On Sat, 19 Aug 2000, Ayal Pinkus <apinkus@xs4all.nl> wrote:
>> I am a bit curious as to how you plan to use GiNaC as a CAS for PDAs. GiNaC is meant to be used on a workstation with a C++ compiler, something I'm sure you will have a hard time getting to run on a PDA, given the memory limits. That would also require a text editor, etc. PDAs are notoriously hard to use for programming purposes, given that they have a small keyboard (if they have one at all) and a small screen. A completely different experience from working on a workstation, I can assure you.
>>
>> In fact, to make it useful on a PDA, the most important thing is actually a good user interface. And you will have a hard time designing a user interface that differs from the calculator metaphor. A calculator is not well suited for more elaborate calculations, where you actually have to write a little program to solve the problem.
> I disagree with this. I'm the main programmer of the CAS of the HP49G and HP40G calculators, and I believe that a calculator is well suited for little programs; it's not much different from writing a small program in a CAS like Maple or MuPAD. The main problem with graphing calculators (especially the HP4xG) is processor speed (about 1000 times slower than my laptop), not the programming environment (e.g. you have a debugger, you can set breakpoints and execute your program step by step, and view the current stack). If we want to replace proprietary CAS with GPL'ed CAS one day, we must start by providing a good solution for the educational market and for non-mathematician users.
Hmm, just don't know whether I'm kicking a dead horse here. It's an old thread. But really, I've been hacking on CLN/GiNaC support for ARM lately, and they seem to fit well onto quite old ARM computers. Many PDAs run on such machines. So the portability is there, and the machines are getting faster than what we see here:

timing commutative expansion and substitution................ passed
    size:    25     50     100     200
    time/s:  1.57   6.5    27.08   113.19
timing Laurent series expansion of Gamma function............ passed
    order:   10     15     20      25
    time/s:  4.49   25.33  113.37  431.3
timing determinant of univariate symbolic Vandermonde matrices passed
    dim:     4x4    6x6    8x8     10x10
    time/s:  0.13   1.26   13.63   125.07
timing determinant of polyvariate symbolic Toeplitz matrices. passed
    dim:     5x5    6x6    7x7     8x8
    time/s:  1.56   7.46   33.21   134.36
timing Lewis-Wester test A (divide factorials)............... passed    1.64s
timing Lewis-Wester test B (sum of rational numbers)......... passed    0.28s
timing Lewis-Wester test C (gcd of big integers)............. passed    1.56s
timing Lewis-Wester test D (normalized sum of rational fcns). passed   30.24s
timing Lewis-Wester test E (normalized sum of rational fcns). passed   26.44s
timing Lewis-Wester test F (gcd of 2-var polys).............. passed    3.75s
timing Lewis-Wester test G (gcd of 3-var polys).............. passed   81.24s
timing Lewis-Wester test H (det of 80x80 Hilbert)............ passed  291.67s
timing Lewis-Wester test I (invert rank 40 Hilbert).......... passed  104.21s
timing Lewis-Wester test J (check rank 40 Hilbert)........... passed   68.17s
timing Lewis-Wester test K (invert rank 70 Hilbert).......... passed  603.7s
timing Lewis-Wester test L (check rank 70 Hilbert)........... passed  369.04s
timing Lewis-Wester test M1 (26x26 sparse, det).............. passed   21.8s
timing Lewis-Wester test M2 (101x101 sparse, det)............ disabled
timing Lewis-Wester test N (poly at rational fcns)........... disabled
timing Lewis-Wester test O1 (three 15x15 dets)............... disabled
timing Lewis-Wester test P (det of sparse rank 101).......... passed   63.5s
timing Lewis-Wester test P' (det of less sparse rank 101).... passed  198.2s
timing Lewis-Wester test Q (charpoly(P))..................... disabled
timing Lewis-Wester test Q' (charpoly(P')).................... disabled
timing computation of an antipode in Yukawa theory........... passed 6261.38s

Isn't it lovely? Almost 100 times slower than our workstations, but still useful.

Regards
-richy.
> Hmm, just don't know whether I'm kicking a dead horse here. It's an old thread. But really, I've been hacking on CLN/GiNaC support for ARM lately, and they seem to fit well onto quite old ARM computers. Many PDAs run on such machines. So the portability is there, and the machines are getting faster than what we see here:
> [timings snipped]
From my own testing with giac/xcas on skiffcluster5 (a Compaq cluster), it seems that you can expect about the speed of a 60 MHz Pentium (well, I hope the speed of skiffcluster5 is about the same as that of an iPAQ). The problem is more size (display size and flash ROM size) than speed. Some examples (compiled without optimization flags):
factor          PIII-500   skiffcluster5   HP49
x^10-1          too fast   0.1s            17.0s
x^100-1         0.12s      1.82s           not tested
x^202+x^101+1   4.07s      52.07s
participants (4)
- apinkus
- Parisse Bernard
- Richard B. Kreckel
- Richard B. Kreckel