Hi,

On Sun, 24 Mar 2002, Richard Fateman wrote:
For two large input polynomials this is clearly bogus, because it does not allow for early cancellations. It was done this way because the complexity of sorting is asymptotically ideal (n*log(n) comparisons). This seems to have been a short-sighted decision.
The sorting cost is most likely going to be dominated by the coefficient operations anyway, even if it is n^2. Note that Maple doesn't bother to sort at all. I think that if you charged Maple for sorting its results the way the others do, Maple would look far worse.
Sure Maple bothers to sort! The order may not be apparent, but they do sort their expressions. Wasn't that the consensus when the discussion last came up on sci.math.symbolic? By the way: this observation of yours that Maple gets slower on the 2nd trial, isn't this exactly the problem Carl DeVore was describing on sci.math.symbolic recently? It seems to get even slower after more trials. Is this a new "feature", or do older releases of Maple also suffer from it? [...]
There are not too many ways of multiplying sparse polynomials without using u*v coefficient multiplies and about u*v - size(answer) adds.
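The cost accounting above can be sketched in a few lines (a minimal illustration, not any system's actual code; polynomials are dicts mapping exponent to coefficient):

```python
# Term-by-term sparse multiplication: every pair of input terms costs one
# coefficient multiply (u*v in total); an add is only needed when two
# products land on the same exponent, which happens roughly
# u*v - size(answer) times.

def sparse_mul(p, q):
    result = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = e1 + e2
            if e in result:
                result[e] += c1 * c2
                if result[e] == 0:
                    del result[e]   # early cancellation: drop zero terms
            else:
                result[e] = c1 * c2
    return result

# (x + 1) * (x - 1) = x^2 - 1: the two x terms cancel early
print(sparse_mul({1: 1, 0: 1}, {1: 1, 0: -1}))   # {2: 1, 0: -1}
```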
I was referring more to implementation details than to fundamental algorithmic things. Hence the gazillions. :-)
For dense polynomials over a finite field, an FFT can be used. My guess is that an FFT approach would not be faster on such a small ( :) ) problem as this one, but a careful implementation might win. Since the bignum coefficients of the inputs and the outputs would need to be built up by Chinese Remaindering, the FFT would probably have to be done quite a few times, once for each modulus.
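The Chinese-Remainder step can be illustrated with a toy sketch. To keep it short, the number-theoretic FFTs are replaced here by naive modular products (each `mul_mod` call stands in for one FFT pass); the primes are simply the three largest below 2^31, and all function names are made up for illustration:

```python
# Multiply modulo several word-sized primes, then rebuild the bignum
# coefficients of the true product by the Chinese Remainder Theorem.
from math import prod

PRIMES = [2147483647, 2147483629, 2147483587]  # largest primes below 2^31

def mul_mod(a, b, p):
    """Schoolbook product of coefficient lists a, b modulo p.
    (In a real implementation this would be one number-theoretic FFT.)"""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def crt(residues, primes):
    """Recover one coefficient from its images modulo each prime."""
    M = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        Mp = M // p
        x = (x + r * Mp * pow(Mp, -1, p)) % M   # pow(., -1, p): mod inverse
    return x  # unique in [0, M); negative coefficients need an extra step

def mul_crt(a, b):
    images = [mul_mod(a, b, p) for p in PRIMES]
    return [crt(col, PRIMES) for col in zip(*images)]

# (10^9 x + 1)^2: the coefficients 2*10^9 and 10^18 overflow any one prime
print(mul_crt([1, 10**9], [1, 10**9]))   # [1, 2000000000, 10**18]
```

Three ~31-bit primes give a combined modulus near 10^28, enough here; real code would pick as many primes as the output coefficients require. (`pow(x, -1, p)` needs Python 3.8+.)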
This is exactly what I was thinking when Dan Bernstein recommended the FFT-based method for univariate polynomials when I last met him. I don't know if anything came out of his Zmult project, though... [...]
Geez, and we thought polynomial expansion was trivial...
Incidentally, you should be able to do w^2 much faster than w*w.
Sure in the sense that (w*w).expand() is much faster than (w.expand()*w.expand()).expand(). Oh, and w*w equals pow(w,2) trivially due to evaluation.
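The saving in a square comes from symmetry: of the u^2 term pairs, the cross products occur in equal pairs, so only about u*(u+1)/2 coefficient multiplies are needed if each pair is computed once and doubled. A sketch, again with exponent-to-coefficient dicts (not any system's actual code):

```python
# Squaring a sparse polynomial with ~n*(n+1)/2 coefficient multiplies
# instead of n^2: diagonal terms once, off-diagonal pairs once and doubled.

def sparse_square(p):
    terms = list(p.items())
    result = {}
    for i, (e1, c1) in enumerate(terms):
        # diagonal term a_i^2 * x^(2*e1)
        e = 2 * e1
        result[e] = result.get(e, 0) + c1 * c1
        # each cross pair a_i*a_j appears twice in the square: double it
        for e2, c2 in terms[i + 1:]:
            e = e1 + e2
            result[e] = result.get(e, 0) + 2 * c1 * c2
    return {e: c for e, c in result.items() if c != 0}

# (x + 1)^2 = x^2 + 2x + 1
print(sparse_square({1: 1, 0: 1}))   # {2: 1, 1: 2, 0: 1}
```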
PS: Richard: 1234567890 < (2^31)-1, on page 1, third paragraph.
Thanks. I changed it. That number is large enough so that in my Lisp it is not a single-word "fixnum" .. some bits are used for type tags. I don't know anything about GiNaC internals, but such design decisions involving a unified treatment of arbitrary size integers can affect time, programmer convenience, data size etc. in a very complicated way.
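The tagged-word layout being described can be sketched like so (a toy 32-bit scheme for illustration, not any particular Lisp's or CLN's actual layout):

```python
# One machine word either holds a small integer directly ("fixnum", lowest
# bit set as a type tag) or would be a pointer to a heap bignum (lowest bit
# clear).  Spending one bit on the tag leaves a 31-bit signed payload.

WORD_BITS = 32
FIXNUM_MAX = 2 ** (WORD_BITS - 2) - 1        # 2^30 - 1 = 1073741823

def fits_fixnum(n):
    return -FIXNUM_MAX - 1 <= n <= FIXNUM_MAX

def tag_fixnum(n):
    """Encode: value shifted left one bit, tag bit set, two's complement."""
    assert fits_fixnum(n)
    return ((n << 1) | 1) & (2 ** WORD_BITS - 1)

def untag_fixnum(w):
    """Decode: shift the tag bit away, re-apply the sign."""
    assert w & 1, "not an immediate fixnum"
    w >>= 1
    return w - 2 ** (WORD_BITS - 1) if w >= 2 ** (WORD_BITS - 2) else w

# 1234567890 < 2^31 - 1, yet it exceeds FIXNUM_MAX in this layout and
# would already need a heap bignum -- the effect described above.
```

With one bit lost to the tag, the immediate range shrinks to 31 signed bits, which is exactly why a number below 2^31 - 1 can still fail to be a single-word fixnum.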
Very true. CLN does exactly this: it distinguishes between fast fixnums and slow bignums in a transparent way using type tags. Must be because another Lisp-fan wrote it... :-) Regards -richy. -- Richard B. Kreckel <Richard.Kreckel@Uni-Mainz.DE> <http://wwwthep.physik.uni-mainz.de/~kreckel/>