Hi Vitaly,

On 18.01.2018 15:10, Vitaly Magerya wrote:
Hi, folks. I've run into a number of problems with matrix solution routines in GiNaC, and I'd like to sort them out one by one.
The basic overview of the problems is this:
1) matrix::gauss_elimination checks for zero entries via a simple .is_zero() call without calling .normal() first, so it misses zeros that are not in normal form. For example, it will not detect that this expression:
1/(x - 1) + 1/(x + 1) - 2*x/(x*x - 1)
... is zero. This leads to "division by zero" errors.
Test case:
#include <ginac/ginac.h>

using namespace GiNaC;

int main() {
    symbol x("x");
    matrix mx {
        {1, 0, 0},
        {0, 1/(x+1) - 1/(x-1) + 2/(x*x-1), 1},
        {0, 1, 2},
    };
    matrix zero(mx.rows(), 1);
    matrix tmp(mx.cols(), 1);
    for (unsigned i = 0; i < tmp.nops(); i++) {
        tmp.let_op(i) = symbol();
    }
    mx.solve(tmp, zero, solve_algo::gauss);
    return 0;
}
Notice that the comment of matrix::gauss_elimination says that "the algorithm is ok for matrices with numeric coefficients but quite unsuited for symbolic matrices", and there are similar comments about Gauss elimination inside flags.h; Bareiss is the favored algorithm there. But your point seems to be that this comment is incorrect in the first place, since for a certain level of sparseness Gauss would perform better. See below...

I also notice another inconsistency: matrix::fraction_free_elimination is prepared to cope with non-normal zeros, but the other algorithms are not. Maybe that should be harmonized (by making the other algorithms normalize their elements first).
2) matrix::solve makes the same error, so if gauss_elimination returns a matrix with non-normal zeros, it too may fail with "division by zero".
Test case:
#include <ginac/ginac.h>

using namespace GiNaC;

int main() {
    symbol x("x");
    matrix mx {
        {1, 0, 0},
        {0, 1/(x+1) - 1/(x-1) + 2/(x*x-1), 1},
        {0, 0, 0},
    };
    matrix zero(mx.rows(), 1);
    matrix tmp(mx.cols(), 1);
    for (unsigned i = 0; i < tmp.nops(); i++) {
        tmp.let_op(i) = symbol();
    }
    mx.solve(tmp, zero, solve_algo::gauss);
    return 0;
}
Same comment as above.
3) The default echelon-form algorithm, Bareiss elimination (matrix::fraction_free_elimination), is unbearably slow for sparse matrices and should never be used with them. This is because of the explosive growth of common factors during its evaluation.
For an illustration, using matrix::fraction_free_elimination on a matrix like this:
{{a11, a12, a13, a14},
 {  0, a22, a23, a24},
 {  0,   0, a33, a34},
 {  0,   0,   0, a44}}
... will give you a result like this:
{{a11,     a12,         a13,             a14},
 {  0, a11*a22,     a11*a23,         a11*a24},
 {  0,       0, a11*a22*a33,     a11*a22*a34},
 {  0,       0,           0, a11*a22*a33*a44}}
Now imagine a similar 100x100 matrix with slightly more complex diagonal elements.
Sure, but doing Gauss elimination is gambling with luck: for dense non-numeric matrices, Gauss is much worse! Do you have an idea how to improve the heuristics?
Now, there are multiple ways of dealing with these problems. The practical workaround for the moment is to:
a) use solve_algo::gauss for all sparse-ish matrices
b) normal()ize the whole matrix before passing it into solve()
There are places where this workaround is not available. One such place is matrix::inverse(): it doesn't provide an 'algo' argument and always defaults to solve_algo::automatic (which means fraction_free_elimination). So for a start I propose to add that argument (see the attached patch). The one reservation I have with this patch is that other functions (solve() and determinant()) have their own enumerations for the 'algo' argument, while inverse() would be reusing the one from solve(). Not the cleanest thing in the world from an API-usage point of view. What do you think of it?
No objection to re-using solve_algo.

All my best,
  -richy.