ginsh:
> evalf((-1)^(1/3));
0.5+0.86602540378443864673*I

While this answer is technically correct ((exp(I*pi/3))^3 = -1), it's not what I was expecting, and it was the cause of some headache today. I was trying to write a little routine like RootOf to solve a cubic polynomial, and was getting three complex answers. (At least one of them must be real if you start with a real polynomial.)

How should I go about enforcing that (-1)^(1/3) = -1?

I'm interested in polynomials over the reals, so the complex solutions don't interest me. This behaviour of evalf is preventing me from extracting the real solution. Here's my little function if anyone is interested:

    // Find the roots of a polynomial.
    // The polynomial can be at most 4th order.
    lst RootOf(const ex& poly, const ex& x)
    {
        ex p = poly;
        ex D, Q, R, S, T;
        p = p.expand();  // expand() returns the result; it must be assigned
        ex a0 = p.coeff(x,0);
        ex a1 = p.coeff(x,1);
        ex a2 = p.coeff(x,2);
        ex a3 = p.coeff(x,3);
        ex a4 = p.coeff(x,4);
        switch (p.degree(x)) {
        case 0:
            return lst();
        case 1:
            return lst(-a0/a1);
        case 2:
            return lst((-a1 + sqrt(a1*a1 - 4*a2*a0))/(2*a2),
                       (-a1 - sqrt(a1*a1 - 4*a2*a0))/(2*a2));
        case 3:
            a0 /= a3; a1 /= a3; a2 /= a3;
            // These formulas come from:
            // http://mathworld.wolfram.com/CubicEquation.html
            Q = (3*a1 - a2*a2)/9;
            R = (9*a2*a1 - 27*a0 - 2*a2*a2*a2)/54;
            D = pow(Q,3) + pow(R,2);
            S = pow(R + sqrt(D), numeric(1,3));
            T = pow(R - sqrt(D), numeric(1,3));
            return lst(-numeric(1,3)*a2 + S + T,
                       -numeric(1,3)*a2 - numeric(1,2)*(S+T) + numeric(1,2)*I*sqrt(ex(3))*(S-T),
                       -numeric(1,3)*a2 - numeric(1,2)*(S+T) - numeric(1,2)*I*sqrt(ex(3))*(S-T));
        //case 4:
        //  http://mathworld.wolfram.com/QuarticEquation.html
        //  It seems this is defined ambiguously. Eq. 30 wants "a real root",
        //  but there may be as many as 3! Can there be 12 roots?!
        default:
            throw(std::domain_error("RootOf: cannot take the root of a polynomial with degree > 4"));
        }
        return lst();
    }

Cheers,
-- Bob

Bob McElrath (rsmcelrath@students.wisc.edu)
Univ. of Wisconsin at Madison, Department of Physics
Bob McElrath wrote:
ginsh: > evalf((-1)^(1/3)); 0.5+0.86602540378443864673*I
While this answer is technically correct: (exp(I*pi/3))^3 = -1
It's not what I was expecting, and the cause of some headache today. I was trying to write a little routine like RootOf to solve a cubic polynomial, and was getting three complex answers. (at least one of them must be real if you start with a real polynomial)
How should I go about enforcing that (-1)^(1/3) = -1?
I think it would be unreasonable to require that (-1)^(1/3) = -1, since the other solutions are as good as -1. Thus I believe GiNaC does the right thing by leaving the symbolic expression (-1)^(1/3) untouched. On the other hand, a little experiment with ginsh shows that
1^(1/2); 1
so that here GiNaC does choose one root. I believe it does so on the grounds that even roots of non-negative real numbers are usually defined that way. However, still with ginsh, we also get
(-1)^(1/2); I
so it seems this tradition is pushed a little too far. I think these issues should be clarified with the GiNaC team. Perhaps you are criticizing the behavior of evalf() when applied to (-1)^(1/3), and I agree with you. Since odd roots of real numbers always have a real solution, why not define evalf() so that it does "the Right Thing"? The point is perhaps: how much would it cost? GiNaC people? Please forgive me if I failed to understand your point.
I'm interested in polynomials over the reals, so the complex solutions don't interest me. This behaviour of evalf is preventing me from extracting the real solution. Here's my little function if anyone is interested:
[...]
If that is your problem, I believe the piece of code below will solve it. Beware: it is extracted from a library we are writing, so it has received very little testing. I hope it helps.

All the best

Roberto

-- 
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagnara@cs.unipr.it

    /* Definitions for the algebraic equation solver.
       Copyright (C) 2002 Roberto Bagnara <bagnara@cs.unipr.it>

       This file is part of the Parma University's Recurrence Relation
       Solver (PURRS).

       The PURRS is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 2 of the License, or (at
       your option) any later version.

       The PURRS is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with this program; if not, write to the Free Software
       Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307,
       USA.

       For the most up-to-date information see the PURRS site:
       http://www.cs.unipr.it/purrs/ . */

    #include <ginac/ginac.h>
    #include <cln/complex.h>

    typedef GiNaC::ex GExpr;
    typedef GiNaC::symbol GSymbol;
    typedef GiNaC::lst GList;
    typedef GiNaC::numeric GNumber;

    using namespace GiNaC;

    static GExpr
    cubic_root(const GExpr& e) {
      static GExpr one_third = GExpr(1)/3;
      return pow(e, one_third);
    }

    /*!
      Solve the equation \f$x^3 + a_1 x^2 + a_2 x + a_3 = 0\f$ and return
      <CODE>true</CODE> if and only if all the solutions are real.
      \f$x_1\f$ is guaranteed to be a real solution.
    */
    bool
    solve_equation_3(const GNumber& a1, const GNumber& a2, const GNumber& a3,
                     GExpr& x1, GExpr& x2, GExpr& x3) {
      GExpr Q = (3*a2 - pow(a1, 2)) / 9;
      GExpr R = (9*a1*a2 - 27*a3 - 2*pow(a1, 3)) / 54;
      GExpr d = pow(Q, 3) + pow(R, 2);
      GExpr a1_div_3 = a1/3;
      if (d < 0) {
        GExpr sqrt_minus_Q = sqrt(-Q);
        // theta = acos(R/sqrt(-Q^3)); note the division by sqrt(-Q),
        // not multiplication
        GExpr theta = acos(-R/(Q*sqrt_minus_Q));
        x1 = -a1_div_3 + 2*sqrt_minus_Q*cos(theta/3);
        x2 = -a1_div_3 + 2*sqrt_minus_Q*cos((theta+2*Pi)/3);
        x3 = -a1_div_3 + 2*sqrt_minus_Q*cos((theta+4*Pi)/3);
      }
      else {
        GExpr sqrt_d = sqrt(d);
        GExpr S = cubic_root(R + sqrt_d);
        GExpr T = cubic_root(R - sqrt_d);
        GExpr S_plus_T = S + T;
        GExpr t1 = -S_plus_T/2 - a1_div_3;
        GExpr t2 = (S - T)*I*sqrt(GExpr(3))/2;
        x1 = S_plus_T - a1_div_3;
        x2 = t1 + t2;
        x3 = t1 - t2;
      }
      // The roots are all real if and only if d <= 0.
      return d <= 0;
    }
Hello there, On Fri, 18 Jan 2002, Roberto Bagnara wrote:
Bob McElrath wrote:
ginsh: > evalf((-1)^(1/3)); 0.5+0.86602540378443864673*I
While this answer is technically correct: (exp(I*pi/3))^3 = -1
It's not what I was expecting, and the cause of some headache today. I was trying to write a little routine like RootOf to solve a cubic polynomial, and was getting three complex answers. (at least one of them must be real if you start with a real polynomial)
How should I go about enforcing that (-1)^(1/3) = -1?
I think it would be unreasonable to require that (-1)^(1/3) = -1, since the other solutions are as good as -1. Thus I believe GiNaC does the right thing by leaving the symbolic expression (-1)^(1/3) untouched. On the other hand, a little experiment with ginsh shows that
1^(1/2); 1
so that here GiNaC does choose one root. I believe it does so on the grounds that even roots of non-negative real numbers are usually defined that way. However, still with ginsh, we also get
(-1)^(1/2); I
so it seems this tradition is pushed a little too far. I think these issues should be clarified with the GiNaC team.
Perhaps you are criticizing the behavior of evalf() when applied to (-1)^(1/3), and I agree with you. Since odd roots of real numbers always have a real solution, why not define evalf() so that it does "the Right Thing"? The point is perhaps: how much would it cost? GiNaC people?
There are obviously a lot of choices that can be made in this field, and they were the cause of much agony. The question is, what should the guidelines for these choices be? Instead of choosing `visual simplicity', which could be interpreted as a vote for (-1)^(1/3) => -1, we chose to accept `branch cut consistency', which is more rigorously definable. The roots (even or odd) of negative numbers are always the first ones counting counter-clockwise from the positive real axis.

Why is this? There are several guidelines. The usual convention is to define the n'th root of x == r*exp(I*p), r and p being real, like so:

    x^(1/n) = exp(log(x)/n)
            == exp((log(r) + I*p)/n)
            == exp(log(r)/n) * exp(I*p/n)

The definition of the principal branch of the log function is such that the branch cut runs along the negative real axis, continuous with quadrant II, i.e. log(-1) == +I*Pi. Since the exp function does not feature a branch cut, this defines the branch cut of all the roots as well.

This definition is arbitrary from a mathematical point of view. However, for practical purposes we need to fix the branch cuts somehow, and we did not come up with our own definitions. For instance, the Common Lisp standard mandates them. I can warmly recommend Guy L. Steele's book "Common Lisp, the Language, 2nd Edition", which we should think of as the de-facto standard for Lisp. Section 12.5 specifies these things and is an excellent treatment. Just search Google for it; there are numerous copies of it on the web. (But skip the parts dealing with signed zeros, because we don't have those.)

Why, one could ask next, should we bother about Lisp? Because they did dare to specify these things and people have been following their recommendation. I checked a lot of critical cases and found that both MapleV and Mathematica follow the same convention with respect to branch cuts.
Also, we studied the ISO C++ standard, and while this standard unfortunately leaves some room for interpretation, there is hope in sight: the revised C standard (C99) specifies complex numbers, and the choice of all branch cuts is exactly identical to the one in Common Lisp. (One just has to read carefully and find the amusing section 7.3.3.2 saying "implementations shall map a cut so the function is continuous as the cut is approached coming around the finite endpoint of the cut in a counter clockwise direction.")

The above consistency between Lisp, C, Maple and Mathematica (and GiNaC) also holds true for all inverse hyperbolic and inverse trigonometric functions like tanh, sinh, atanh, etc. Unfortunately, other computer algebra systems (notably MuPAD and Reduce) chose their own, mutually incompatible, definitions for branch cuts.

If somebody finds a discrepancy, then we should fix it -- I am not guaranteeing that we got every case correct (but we've tried quite hard). I am only saying that there are some external constraints, and thinking that the choice is entirely ours would be foolish.

Well, I hope this explains your observation with regards to evalf(). Oh, and (-1)^(1/2) gets eval'ed while (-1)^(1/3) does not, because the first one has an exact solution while the latter does not. That is all behind it.

Hope this helps
   -richy.

-- 
Richard B. Kreckel
<Richard.Kreckel@Uni-Mainz.DE>
<http://wwwthep.physik.uni-mainz.de/~kreckel/>
Roberto Bagnara [bagnara@cs.unipr.it] wrote:
Bob McElrath wrote:
How should I go about enforcing that (-1)^(1/3) = -1?
I think it would be unreasonable to require that (-1)^(1/3) = -1, since the other solutions are as good as -1. [...] Perhaps you are criticizing the behavior of evalf() when applied to (-1)^(1/3) and I agree with you. Since odd roots of real numbers always have a real solution, why not defining evalf() so that it does "the Right Thing"?
No, their choice is a reasonable one, but it produces wrong answers for this problem.
I'm interested in polynomials over the reals, so the complex solutions don't interest me. This behaviour of evalf is preventing me from extracting the real solution. Here's my little function if anyone is interested: [...] If that is your problem, I believe the piece of code below will solve it. Beware: it is extracted from a library we are writing so that it only received very little testing. I hope it helps.
Thank you for your code; it is useful, but I want to solve a more general problem. I want to find the roots of a cubic equation _algebraically_. That is, I can't check whether the discriminant is positive or negative, because there may be undetermined constants involved. Then, given the _algebraic_ solution(s) to the cubic equation, I should be able to plug them into the original polynomial and get zero. This behavior of evalf is giving me cube roots which are all complex and none of which are actually roots.

Richard B. Kreckel [kreckel@zino.physik.uni-mainz.de] wrote:
Hello there, [...] There are obviously a lot of choices that can be made in this field and they were the cause of much agony. The question is, what should be the guidelines for these choices? Instead of choosing `visual simplicity', which could be interpreted as a vote for (-1)^(1/3) => -1, we chose to accept `branch cut consistency', which is more rigorously definable.
I am not questioning the choice of taking the roots of unity as you have. I was surprised, but I checked and Maple does the same thing. Rather, I just want to find the roots of a cubic equation, and have those roots be correct. ;) Examining the way Maple does it, it always seems to return something where the argument of the cube root is entirely positive. However, when I ask it for the solution with undetermined coefficients:
> myroots := solve(x^3+a2*x^2+a1*x-a0, x);

    myroots :=
      1/6*%1^(1/3) - 6*%2 - 1/3*a2,
      -1/12*%1^(1/3) + 3*%2 - 1/3*a2 + 1/2*I*3^(1/2)*(1/6*%1^(1/3) + 6*%2),
      -1/12*%1^(1/3) + 3*%2 - 1/3*a2 - 1/2*I*3^(1/2)*(1/6*%1^(1/3) + 6*%2)

    %1 := 36*a1*a2 + 108*a0 - 8*a2^3
          + 12*(12*a1^3 - 3*a1^2*a2^2 + 54*a1*a2*a0 + 81*a0^2 - 12*a0*a2^3)^(1/2)

    %2 := (1/3*a1 - 1/9*a2^2) / %1^(1/3)

Looking at %1 there, it is the argument to the cube root. If it is negative, that should demonstrate the roots-of-unity problem. The following choice is sufficient:

> discr := 36*a1*a2+108*a0-8*a2^3+12*sqrt(12*a1^3-3*a1^2*a2^2+54*a1*a2*a0+81*a0^2-12*a0*a2^3);

    discr := 36*a1*a2 + 108*a0 - 8*a2^3
             + 12*(12*a1^3 - 3*a1^2*a2^2 + 54*a1*a2*a0 + 81*a0^2 - 12*a0*a2^3)^(1/2)

> evalf(subs(a2=11, a1=5, a0=7, discr));

    -7912. + 3691.243693 I
However, if we look at the root itself:
> evalf(subs({a2=11, a1=5, a0=7}, myroots[1]));

    .590815406
Therefore, Maple is not applying the branch-cut definition of roots of unity to the roots of this polynomial. I wish to do a similar thing. I think this should be possible. Any suggestions on how to go about doing it? I was thinking along the lines of Maple's assume() functionality. Supposing assume() were introduced to ginac, what assumptions would you place on the subexpressions in the cube root to make the answers come out correctly? Could the power class be modified to include information about which root of unity is the appropriate one? How does that affect nonrational exponents?

Cheers,
-- Bob

Bob McElrath (rsmcelrath@students.wisc.edu)
Univ. of Wisconsin at Madison, Department of Physics
Why don't you define a simple function like:

    numeric qrt(numeric n) {
        if (n >= 0)
            return pow(n, numeric(1)/numeric(3));
        else
            return -pow(-n, numeric(1)/numeric(3));
    }

and use that in your particular problem.

Cheers, H.P.
Hans Peter Würmli [wurmli@freesurf.ch] wrote:
Why don't you define a simple function like:
numeric qrt(numeric n){ if (n>=0) return pow(n,numeric(1)/numeric(3)); else return -pow(-n,numeric(1)/numeric(3)); }
and use that in your particular problem.
Because it's the symbolic expression I'm interested in, and I can't test if n >= 0. I suppose I could make a:

    class nthroot : public function {
    public:
        nthroot(numeric n, ex arg);
        // ...
    };

and override the evalf function for it (I'm not sure how to do this). But I was hoping this could be incorporated into pow(), since the problem crops up for any pow() with a fractional exponent.

Cheers,
-- Bob

Bob McElrath (rsmcelrath@students.wisc.edu)
Univ. of Wisconsin at Madison, Department of Physics
Dear Bob,

I have written a

    lst RootOf(const lst& l)

program for degree 3 polynomials, returning a list of the three zeros plus the lead coefficient, where the input list contains the polynomial and the indeterminate. In the sample output of the main program you will see the effect of precision with Digits=yyy: "zero" is represented by some long digit sequence times E-xxx, which you felt some time ago was a bug of GiNaC, but isn't.

I don't know how one can force GiNaC to simplify expressions. Say, I used the program to find symbolically the three roots x1, x2, x3 and reassembled the polynomial by expanding leadcoefficient*(x-x1)*(x-x2)*(x-x3). To take an example:

    p(x) = 1 + x + x^3

Reassembling p from the roots and expanding produces:

    p(x) = 1/2+x+x^3+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93)

which is correct, if one simplifies 1/2+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93) to 1.

Best regards

Hans Peter

    //
    // Roots of third degree polynomial
    // (from Kochendörfer, "Einführung in die Algebra")
    //
    #include <iostream>
    #include <ginac/ginac.h>

    using namespace std;
    using namespace GiNaC;

    //
    // lst contains the polynomial and the indeterminate
    //
    lst RootOf(const lst& l)
    {
        //
        // some checks - throwing an error would be appropriate
        //
        if (l.nops() != 2)
            return lst();
        ex poly = l.op(0);
        //
        // Cint cannot resolve the next line
        //
        if (!is_a<symbol>(l.op(1)))
            return lst();
        //
        // my Cint complains if I use the name "x" again, therefore "z"
        //
        ex z = l.op(1);
        if (poly.degree(z) != 3)
            return lst();
        ex a0 = poly.coeff(z,0), a1 = poly.coeff(z,1),
           a2 = poly.coeff(z,2), a3 = poly.coeff(z,3);
        symbol t("t");
        //
        // normalisation to get the form x^3+a1*x+a0
        //
        ex save = a2/(a3*3), lead = a3;
        ex pol = (poly/a3).subs(z == t - a2/(a3*3)).expand();
        //
        // polynomial is now p(t)=t^3+p*t+q
        //
        ex q = pol.coeff(t,0), p = pol.coeff(t,1);
        //
        // rho^3=1, crho^3=1, the two non-trivial roots of x^3-1
        //
        ex rho  = (-1 + pow(3, numeric(1,2))*I)/2,
           crho = (-1 - pow(3, numeric(1,2))*I)/2;
        if (p.is_zero())
            return lst((-pow(q, 1/numeric(3)) - save).expand(),
                       (-rho*pow(q, 1/numeric(3)) - save).expand(),
                       (-crho*pow(q, 1/numeric(3)) - save).expand(),
                       lead);
        //
        // the discriminant
        //
        ex D = -numeric(4)*pow(p,3) - numeric(27)*pow(q,2);
        // Lagrange resolvents
        ex lrho = pow(-numeric(27,2)*q + numeric(3,2)*pow(-3*D, numeric(1,2)),
                      1/numeric(3));
        ex lcrho = -3*p/lrho;
        ex th = numeric(1,3);
        return lst(th*(lrho + lcrho) - save,
                   th*(crho*lrho + rho*lcrho) - save,
                   th*(rho*lrho + crho*lcrho) - save,
                   lead);
    }

    //
    // Some examples
    //
    symbol y("y"), x("x"), a0("a0"), a1("a1"), a2("a2"), a3("a3");
    ex poly  = 1 + x + pow(x,3);
    ex poly1 = a0 + a1*x + a2*pow(x,2) + a3*pow(x,3);
    ex poly2 = 27 + pow(x,3);
    lst l(poly,x), l2(poly2,x), l1(poly1,x);

    int main()
    {
        cout << RootOf(l) << "\n\n\n";
        cout << RootOf(l1) << "\n\n\n";
        cout << RootOf(l2) << "\n\n\n";
        ex test = RootOf(l2).op(3)*(x - RootOf(l2).op(0))
                           *(x - RootOf(l2).op(1))*(x - RootOf(l2).op(2));
        cout << l2.op(0) << " = " << test.expand() << "\n\n\n";
        ex test1 = RootOf(l).op(3)*(x - RootOf(l).op(0))
                            *(x - RootOf(l).op(1))*(x - RootOf(l).op(2));
        cout << l.op(0) << " = " << test1.expand() << "\n\n\n";
        Digits = 50;
        cout << evalf(poly.subs(x == RootOf(l).op(0))) << "\n\n\n";
        cout << evalf(poly.subs(x == RootOf(l).op(1))) << "\n\n\n";
        cout << evalf(poly.subs(x == RootOf(l).op(2))) << "\n\n\n";
        cout << evalf(poly2.subs(x == RootOf(l2).op(0))) << "\n\n\n";
        cout << evalf(poly2.subs(x == RootOf(l2).op(1))) << "\n\n\n";
        cout << evalf(poly2.subs(x == RootOf(l2).op(2))) << "\n\n\n";
        cout << "This list should be empty = "
             << RootOf(lst(pow(x,3), x+y)) << "\n\n\n";
        return 0;
    }
Hi, On Sun, 20 Jan 2002, Hans Peter Würmli wrote:
I have written a
lst RootOf(const lst & l) returning a list of the three zeros plus the lead coefficient
program for degree 3 polynomials, where the input list contains the polynomial and the indeterminate. In the sample output of the main program you will see the effect of precision with Digits=yyy that "zero" is represented by some long digit sequence times E-xxx that you felt some time ago was a bug of GiNaC, but isn't.
I don't know how one can force GiNaC to simplify expressions. Say, I used the program to find symbolically the three roots x1, x2, x3 and reassembled the polynomial by expanding leadcoefficient*(x-x1)*(x-x2)*(x-x3). To take an example:
p(x) = 1 + x + x^3
Reassembling p from the roots and expanding produces:
p(x) = 1/2+x+x^3+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93)
which is correct, if one simplifies 1/2+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93) to 1
This is already way cool, isn't it? But "to simplify" is not well-defined. Such computations should not be done automatically at the level of the anonymous evaluator (i.e. classwhatever::eval()). This does not mean that it doesn't fit into the library, per se. In Maple, you may have noticed that `simplify' gets the job done, but what is really being called is the function `radsimp'. We simply haven't looked at this yet because we don't need it in our computations, so I see little hope that Cebix or I am going to do this. If somebody else wants to, patches that implement such a method/function would be welcome.

The general cases can, AFAIK, not be handled algorithmically. However, if you are only interested in square roots and the degree of nesting of radicals is not larger than two, there is a wealth of literature. The book by James Davenport, Yvon Siret and Evelyne Tournier, "Systems and Algorithms for Algebraic Computation", talks about this, IIRC (don't have the copy here). H. Cohen's "A Course in Computational Algebraic Number Theory" is another must. Two classic papers are R. Zippel, "Simplification of expressions involving radicals", J. Symb. Comput. 1:189-210, 1985, and J. Hopcroft, A. Borodin, R. Fagin and M. Tompa, "Decreasing the nesting depth of expressions involving square roots", J. Symb. Comput. 1:169-188, 1985.

There might also be some problems of representation involved, because not all solutions of polynomials are expressible as radicals. Just consider the classic x^5-x+1==0. One could still symbolically represent the roots inside a new class called `root' and even do interesting things with them, like root(x^5-x+1)^5 => root(x^5-x+1)-1. But this is now getting quite far...

Regards
   -richy.

-- 
Richard B. Kreckel
<Richard.Kreckel@Uni-Mainz.DE>
<http://wwwthep.physik.uni-mainz.de/~kreckel/>
On Sun, 20 Jan 2002, I wrote:
On Sun, 20 Jan 2002, Hans Peter Würmli wrote: [...]
which is correct, if one simplifies 1/2+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93) to 1
This is already way cool, isn't it? But "to simplify" is not well-defined. Such computations should not be done automatically at the level of the anonymous evaluator (i.e. classwhatever::eval()).
[...] Oops, I just realized that this particular case above is even more trivial and what I wrote doesn't apply to it yet. Simply calling normal() will get the job done, since there is only one sqrt() in there. Regards -richy. -- Richard B. Kreckel <Richard.Kreckel@Uni-Mainz.DE> <http://wwwthep.physik.uni-mainz.de/~kreckel/>
On Sun, 20 Jan 2002 15:38:14 +0100 (CET) "Richard B. Kreckel" <kreckel@thep.physik.uni-mainz.de> wrote: To: ginac-list@thep.physik.uni-mainz.de Subject: Re: Roots of unity
[...]
which is correct, if one simplifies 1/2+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93) to 1
This is already way cool, isn't it? But "to simplify" is not well-defined. Such computations should not be done automatically at the level of the anonymous evaluator (i.e. classwhatever::eval()).
[...]
Oops, I just realized that this particular case above is even more trivial and what I wrote doesn't apply to it yet. Simply calling normal() will get the job done, since there is only one sqrt() in there.
I had tried with normal(), but now I realise that it makes a difference whether I use sqrt(2) or pow(2, numeric(1,2)). If I take the same expression, but with the numeric symbol replaced by, say, x, the same simplification does not take place.

I of course fully agree with you that "simplification" is not well-defined terminology. On the other hand, you could still try to represent, say, 1/x^(1/2) as x^(1/2)/x, i.e. having all radicals of symbols in the numerator. Provided I didn't make a mistake, the above "simplification" can be accomplished generally. Take N(x^(1/n)) and D(x^(1/n)) two polynomials, regarding x^(1/n) as a symbol (where n is the lcm of the denominators of all exponents). If I can accomplish a full partial fraction representation of N/D, then I can use

    (x^(1/n) - a) * (x^((n-1)/n)/a^(n-1) + x^((n-2)/n)/a^(n-2) + ... + 1)
        = x/a^(n-1) - a

If I look at the algorithms for symbolic integration, I have at least the hope that one could do a lot without having to find all roots of the denominator.

By the way: thank you for the references. If I have time (which is doubtful) I will consult them. I tried the little program for the roots of a third order polynomial rather to get used to GiNaC and to learn how one would, if one wanted, extend GiNaC to have algebraic structures and operations on them.

Best regards

Hans Peter
Richard B. Kreckel [kreckel@zino.physik.uni-mainz.de] wrote:
Hi,
On Sun, 20 Jan 2002, Hans Peter Würmli wrote:
leadcoefficient*(x-x1)*(x-x2)*(x-x3). To take an example:
p(x) = 1 + x + x^3
Reassembling p from the roots and expanding produces:
p(x) = 1/2+x+x^3+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93)
which is correct, if one simplifies 1/2+(-27/2+3/2*sqrt(93))^(-1)-1/18*sqrt(93) to 1
This is already way cool, isn't it? But "to simplify" is not well-defined. Such computations should not be done automatically at the level of the anonymous evaluator (i.e. classwhatever::eval()).
This does not mean that it doesn't fit into the library, per se. In Maple, you may have noticed that `simplify' gets the job done, but what really is being called is the function `radsimp'. We simply haven't looked at this yet because we don't need it in our computations so I see little hope that Cebix or me is going to do this. If somebody else wants to, patches that implement such a method/function would be welcome.
What I see as needed in order to write some simplification routines are methods for the ex class like there are in the numeric class: is_real, is_integer, operator<, etc. And an assume() functionality to create symbols that are reals, integers, etc., so that someone who wants to write some kind of simplification routine can do so.

I was surprised to find that an expression involving entirely numerics cannot be compared with relational operators (because, in general, an ex cannot be compared with relational operators). That is:

    > is(-1/2-sqrt(2/9)<0);
    0

This is something that should be straightforwardly determinable, if you had an is_real and operator< for the ex class. Then if the above expression appears as a subexpression in something larger, you can make simplifications appropriate for reals. The same kind of comparisons could be done for expressions involving symbols, similar to Maple's assume() functionality:

    symbol x("x");
    assume(x, real);   // prototype: void assume(ex& x, info_flags fl = -1);
    assume(x > 0);
    if (-x^2-x-10 < 0) {
        cout << "polynomial is negative" << endl;
    }

I agree with you in general on not having automatic simplifications done for eval(). But something like the above would help people write custom simplifications. I personally would like a library of fine-grained "simplification" routines to rearrange expressions according to fine-grained rules (i.e. put radicals in the numerator, combine logs, etc.).

The other thing that is needed is for normal() to be able to apply enough transformations to tell that your simplified expression is the same as the original. Already I have run into expressions that, after some transformations, cannot be compared with the original expression. As you mention, this may not be possible. But Maple seems to do it somehow (I often use if(simplify(EX-newEX)=0)... as a check -- it puts things in some kind of canonical form).

Cheers,
-- Bob

Bob McElrath (rsmcelrath@students.wisc.edu)
Univ. of Wisconsin at Madison, Department of Physics
On Sun, 20 Jan 2002, Bob McElrath wrote: [...]
What I see as needed in order to write some simplification routines are methods for the ex class like there are in the numeric class: is_real, is_integer, operator<, etc... And an assume() functionality to create symbols that are reals, integers, etc.
The reason this has not been done is that we have absolutely no idea how to make it consistent.
So that someone who wants to write some kind of simplification routine can do so. I was surprised to find that an expression involving entirely numeric's cannot be compared with relational operators (because, in general, an ex cannot be compared with relational operators). That is: > is(-1/2-sqrt(2/9)<0); 0 This is something that should be straightforwardly determinable, if you had an is_real and operator< for the ex class.
The sign of -1/2-sqrt(2/9) should be straightforwardly determinable. But what keeps you then from trying to determine the sign of nested roots? I believe that I read somewhere that algorithms which determine the sign of a general expression with n roots take time exponential in n, although sums seem to be easier. This is still an active field of research. Regards -richy. -- Richard B. Kreckel <Richard.Kreckel@Uni-Mainz.DE> <http://wwwthep.physik.uni-mainz.de/~kreckel/>
Richard B. Kreckel [kreckel@zino.physik.uni-mainz.de] wrote:
On Sun, 20 Jan 2002, Bob McElrath wrote: [...]
What I see as needed in order to write some simplification routines are methods for the ex class like there are in the numeric class: is_real, is_integer, operator<, etc... And an assume() functionality to create symbols that are reals, integers, etc.
The reason this has not been done is that we have absolutely no idea how to make it consistent.
Really? Maybe I haven't thought about it as much as you... But it seems you'd need methods (is_real, is_negative, etc.) for every function (sqrt, sum, sin, pow, ...), which could then query their arguments in the appropriate manner to determine the final result.
The sign of -1/2-sqrt(2/9) should be straightforwardly determinable. But what keeps you then from trying to determine the sign of nested roots? I believe that I read somewhere that algorithms which determine the sign of a general expression with n roots take time exponential in n, although sums seem to be easier. This is still an active field of research.
The above naive algorithm looks O(n) in the number of functions to me. Maybe I'm missing something... By nested root you mean something like sqrt(a+sqrt(b)), right?

Cheers,
-- Bob

Bob McElrath (rsmcelrath@students.wisc.edu)
Univ. of Wisconsin at Madison, Department of Physics
Hi, On Sun, 20 Jan 2002, Bob McElrath wrote:
[...]
What I see as needed in order to write some simplification routines are methods for the ex class like there are in the numeric class: is_real, is_integer, operator<, etc... And an assume() functionality to create symbols that are reals, integers, etc.
The reason this has not been done is that we have absolutely no idea how to make it consistent.
Really? Maybe I haven't thought about it as much as you...
But it seems you'd need methods: is_real is_negative etc... for every function (sqrt, sum, sin, pow, ...), which could then query their arguments in the appropriate manner to determine the final result.
The trouble starts already with trivial expressions: If a>0 and b>0, then (a+b)>0. But (a-b) would have to return false, as would (b-a). This makes it already difficult for sums. Ternary logic would help a bit here. :-) [...]
By nested root you mean something like: sqrt(a+sqrt(b)) right?
Yes. Regards -richy. -- Richard Kreckel <Richard.Kreckel@Uni-Mainz.DE> <http://wwwthep.physik.uni-mainz.de/~kreckel/>
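The ternary logic alluded to above could be modelled like this. An illustrative sketch only, not GiNaC code; Sign3 and the helper names are made up:

```cpp
// Three-valued sign: provably positive, provably negative, or unknown.
enum class Sign3 { Positive, Negative, Unknown };

// Sign of a sum x+y, given the (possibly unknown) signs of x and y.
Sign3 sign_of_sum(Sign3 x, Sign3 y) {
    if (x == y && x != Sign3::Unknown) return x; // same known sign survives
    return Sign3::Unknown;                       // mixed or unknown: give up
}

// Sign of a negation -x.
Sign3 sign_of_neg(Sign3 x) {
    switch (x) {
    case Sign3::Positive: return Sign3::Negative;
    case Sign3::Negative: return Sign3::Positive;
    default:              return Sign3::Unknown;
    }
}
```

With a>0 and b>0, sign_of_sum gives Positive for a+b, while a-b = a+(-b) combines Positive with Negative and honestly yields Unknown, rather than forcing a misleading "false" for both (a-b) and (b-a).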
Richard B. Kreckel [kreckel@zino.physik.uni-mainz.de] wrote:
The trouble starts already with trivial expressions: If a>0 and b>0, then (a+b)>0. But (a-b) would have to return false, as would (b-a). This makes it already difficult for sums. Ternary logic would help a bit, here. :-)
You wouldn't need it to be accurate 100% of the time. Like some other functions in the library, is_negative should return true if "it can be determined that...", and false should mean either false or that it cannot be determined. In using this for simplifications and rearrangements, it only means that a particular simplification wouldn't be applied. I wonder if "cheating" and using evalf would get you into trouble? i.e. evalf(a-b) < 0. Cheers, -- Bob
On Sun, 20 Jan 2002, Bob McElrath wrote:
You wouldn't need it to be accurate 100% of the time. Like some other functions in the library, is_negative should be true if "it can be determined that..." and false should mean either false or it cannot be determined.
In using this for simplifications and rearrangements, it only means that a particular simplification wouldn't be applied.
Right. The logic "boolean true is a guarantee, boolean false may mean unable to deduce" is already implicitly anchored in a lot of places, since that seems to be the only sensible way to tackle such things. As you noticed, the infrastructure for this is already partly in place. What you would need to do is attach some assumptions to symbols and then make that information propagate through arbitrary expressions. Right now all symbols are assumed to stand for any complex number. Knowing that symbol `a' is positive would help other kinds of computation; for instance, series(log(a*x),x==0,2); need not be inert any more. If I am not mistaken, log_evalf() wouldn't even have to be touched. Care to suggest a catalog of useful properties, along with an analysis of how orthogonal they are, and provide an implementation?
I wonder if "cheating" and using evalf would get you into trouble? i.e. evalf(a-b) < 0
No, this is not a programmatic way to analyze such things. Even if implemented properly it will bring you into rounding hell. Regards -richy.
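The rounding trap is easy to hit even in plain double precision. The exact quantity below is identically zero, yet a fixed-precision sign test reports a nonzero residue (illustrative plain C++, not GiNaC's evalf):

```cpp
#include <cmath>

// The exact value of sqrt(2)^2 - 2 is zero, but a fixed-precision
// evaluation leaves a nonzero residue, so a naive "evalf(a-b) < 0"
// style test gives a definite (and wrong) verdict on its sign.
double residue() {
    double s = std::sqrt(2.0);   // correctly rounded, but not exact
    return s * s - 2.0;          // exact answer: 0; computed: ~4.4e-16
}
```

Raising the working precision only pushes the problem around: for any fixed number of digits there are exact zeros that evaluate to tiny nonzero residues, which is presumably the "rounding hell" meant here.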
Richard B. Kreckel [kreckel@zino.physik.uni-mainz.de] wrote:
Right. The logic "boolean true is a guarantee, boolean false may mean unable to deduce" is already implicitly anchored in a lot of places since that seems to be the only sensible way to tackle such things.
As you noticed, the infrastructure for this is already partly in place. What you would need to do is attach some assumptions to symbols and then try to make such information propagate through arbitrary expressions. Right now they are all assumed to stand for any complex number.
Knowing that symbol `a' is positive would help other kinds of computation; for instance, series(log(a*x),x==0,2); need not be inert any more. If I am not mistaken, log_evalf() wouldn't even have to be touched.
Care to suggest a catalog of useful properties, along with an analysis of how orthogonal they are, and provide an implementation?
I'll think about it. This is going to have to become a "spare-time" project. :( I had hoped to use GiNaC for some of my physics analysis, but for the project I'm working on now it's just too much work. (Maple is easier.) Maybe for the next one. How is xloops coming? ;)

The elements of class info_flags in flags.h would be a good starting point (the ones answered by class numeric), though "nonpositive" seems to be missing. Tests involving relational operators could subtract the lhs and rhs and use one of {positive, negative, ...}. Zero will be very hard to test for unless .normal() becomes much more powerful.

Maple has a number of other properties:

  RealRange(a,b)
  imaginary
  irrational
  composite (non-prime integer)

and a lot of assumptions for matrices and functionals. Maple also lists references in the "assume" help page: ;)

"An Algebra of Properties" by Trudy Weibel and Gaston Gonnet, Proceedings of the ISSAC-91 conference, Bonn, July 1991, pp. 352-359.

"An assume facility for CAS with a sample implementation for Maple" by Trudy Weibel and Gaston Gonnet, conference presentation at DISCO '92, Bath, England, April 14, 1992.

"Simplification and the Assume Facility" by Robert Corless and Michael Monagan, Maple Technical Newsletter, Volume 1, Number 1, Birkhäuser, 1994.
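A property catalog along these lines could start as a bitmask in which implication is modelled by bit subsets, roughly in the spirit of info_flags in flags.h. The names and encoding below are made up for illustration, not GiNaC's actual flags:

```cpp
// Toy property lattice: a property's bit pattern includes the bits of
// every property it implies, so implication becomes a subset test.
enum Property : unsigned {
    prop_real        = 1u << 0,
    prop_nonnegative = (1u << 1) | prop_real,        // nonnegative => real
    prop_positive    = (1u << 2) | prop_nonnegative, // positive => nonnegative
    prop_integer     = (1u << 3) | prop_real,        // integer => real
};

// Does the property set `have` guarantee every property in `want`?
inline bool guarantees(unsigned have, unsigned want) {
    return (have & want) == want;
}
```

Orthogonal properties (say, integer-ness versus sign) simply occupy disjoint bits, while dependent ones share them; the missing "nonpositive" would slot in the same way as nonnegative.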
I wonder if "cheating" and using evalf would get you into trouble? i.e. evalf(a-b) < 0
No, this is not a programmatic way to analyze such things. Even if implemented properly it will bring you into rounding hell.
Yes, true. But we have an arbitrary-precision arithmetic engine handy... If you could algorithmically determine how many digits would be required to accurately represent each subexpression... It still seems like cheating. I'm sure there are many subtleties I don't see right now. Cheers, -- Bob
Bob,

I was a bit too fast before. This function is better:

    numeric qrt(numeric x)
    {
        if (x.is_real())
            return x.csgn()*pow(abs(x), 1/numeric(3));
        else
            return pow(x, 1/numeric(3));
    }

Regards, H.P.

On Fri, 18 Jan 2002 10:25:37 -0600 Bob McElrath <mcelrath@draal.physics.wisc.edu> wrote:
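For comparison, the same real-branch idea can be written in plain C++ without GiNaC: std::cbrt already returns the real cube root for negative reals, so it plays the role of csgn(x)*|x|^(1/3). This is a sketch of the idea, not H.P.'s actual numeric-based function:

```cpp
#include <cmath>
#include <complex>

// Real cube root for real inputs, principal branch otherwise --
// a plain-C++ analogue of the qrt() above.
std::complex<double> qrt(const std::complex<double>& x) {
    if (x.imag() == 0.0)
        return std::cbrt(x.real());  // real branch: csgn(x)*|x|^(1/3)
    return std::pow(x, 1.0 / 3.0);   // principal complex branch
}
```

With this convention qrt applied to -8 yields the real root -2 rather than the principal value, which is exactly what the RootOf routine needs in order to pick out the real root of a real cubic.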
participants (4)
- Bob McElrath
- Hans Peter Würmli
- Richard B. Kreckel
- Roberto Bagnara