Hi, folks. I've run into a number of problems with matrix solution
routines in GiNaC, and I'd like to sort them out one by one.
The basic overview of the problems is this:
1) matrix::gauss_elimination checks for zero entries via a
simple .is_zero() call without calling .normal() first, so it
misses zeros that are not in normal form. For example, it will
not detect that this expression:
1/(x - 1) + 1/(x + 1) - 2*x/(x*x - 1)
... is zero. This leads to "division by zero" errors.
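The mismatch is easy to see on the expression itself, without going
through the solver at all (a minimal check, separate from the test
case below):

#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x");
    ex e = 1/(x - 1) + 1/(x + 1) - 2*x/(x*x - 1);
    // is_zero() looks at the expression as-is and misses the cancellation
    std::cout << e.is_zero() << std::endl;           // prints 0
    // after normal() the expression collapses to 0
    std::cout << e.normal().is_zero() << std::endl;  // prints 1
    return 0;
}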
Test case:
#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x");
    matrix mx {
        {1, 0, 0},
        {0, 1/(x+1) - 1/(x-1) + 2/(x*x-1), 1},  // this entry is zero, but not in normal form
        {0, 1, 2},
    };
    matrix zero(mx.rows(), 1);
    matrix tmp(mx.cols(), 1);
    for (unsigned i = 0; i < tmp.nops(); i++) {
        tmp.let_op(i) = symbol();
    }
    mx.solve(tmp, zero, solve_algo::gauss);  // fails with "division by zero"
    return 0;
}
2) matrix::solve makes the same mistake, so if gauss_elimination
returns a matrix with zeros that are not in normal form, it too
may fail with "division by zero".
Test case:
#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x");
    matrix mx {
        {1, 0, 0},
        {0, 1/(x+1) - 1/(x-1) + 2/(x*x-1), 1},  // zero, but not in normal form
        {0, 0, 0},
    };
    matrix zero(mx.rows(), 1);
    matrix tmp(mx.cols(), 1);
    for (unsigned i = 0; i < tmp.nops(); i++) {
        tmp.let_op(i) = symbol();
    }
    mx.solve(tmp, zero, solve_algo::gauss);  // fails with "division by zero"
    return 0;
}
3) The default echelon form algorithm, Bareiss elimination
(matrix::fraction_free_elimination), is unbearably slow for sparse
matrices and should never be used with them. This is because of
explosive common-factor growth during the elimination.
For an illustration, using matrix::fraction_free_elimination
on a matrix like this:
{{a11,a12,a13,a14},
{ 0,a22,a23,a24},
{ 0, 0,a33,a34},
{ 0, 0, 0,a44}}
... will give you a result like this:
{{a11, a12, a13, a14},
{ 0,a11*a22, a11*a23, a11*a24},
{ 0, 0,a11*a22*a33, a11*a22*a34},
{ 0, 0, 0,a11*a22*a33*a44}}
Now imagine a similar 100x100 matrix with slightly more complex
diagonal elements.
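If you want to reproduce the blow-up without typing such a matrix by
hand, something along these lines works (a sketch only; the size, the
symbol names and the "+ 1" on the diagonal, which stands in for the
"slightly more complex diagonal elements", are just illustrative):

#include <ginac/ginac.h>
#include <sstream>
using namespace GiNaC;

int main() {
    const unsigned n = 10;  // increase until the difference becomes painful
    matrix mx(n, n);        // sparse: upper triangular, zeros below the diagonal
    for (unsigned i = 0; i < n; i++) {
        for (unsigned j = i; j < n; j++) {
            std::ostringstream name;
            name << "a" << i << "_" << j;
            ex entry = symbol(name.str());
            if (i == j)
                entry = entry + 1;  // "slightly more complex" diagonal element
            mx(i, j) = entry;
        }
    }
    matrix rhs(n, 1);   // all zeros; we only care how long the elimination takes
    matrix vars(n, 1);
    for (unsigned i = 0; i < n; i++)
        vars(i, 0) = symbol();
    mx.solve(vars, rhs, solve_algo::bareiss);  // fraction-free elimination: slow
    mx.solve(vars, rhs, solve_algo::gauss);    // ordinary Gauss elimination: quick
    return 0;
}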
Now, there are multiple ways of dealing with these problems. The
practical workaround for the moment is to:
a) use solve_algo::gauss for all sparse-ish matrices
b) normal()ize the whole matrix before passing it into solve()
(see the sketch below)
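By (b) I mean entry-wise normalization; applied to the second test
case above it would look like this (a sketch):

#include <ginac/ginac.h>
using namespace GiNaC;

int main() {
    symbol x("x");
    matrix mx {
        {1, 0, 0},
        {0, 1/(x+1) - 1/(x-1) + 2/(x*x-1), 1},
        {0, 0, 0},
    };
    // workaround (b): bring every entry to normal form first
    for (unsigned i = 0; i < mx.nops(); i++)
        mx.let_op(i) = mx.op(i).normal();
    matrix zero(mx.rows(), 1);
    matrix tmp(mx.cols(), 1);
    for (unsigned i = 0; i < tmp.nops(); i++)
        tmp.let_op(i) = symbol();
    // workaround (a): ask for Gauss elimination explicitly
    mx.solve(tmp, zero, solve_algo::gauss);
    return 0;
}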
There are places where this workaround is not available. One such
place is matrix::inverse(): it doesn't provide an 'algo' argument
and always defaults to solve_algo::automatic (which means
fraction_free_elimination). So as a start I propose to add that
argument (see the attached patch). The reservation I have with this
patch is that other functions (solve() and determinant()) have
their own enumerations for the 'algo' argument, while inverse()
will be reusing the one from solve(). Not the cleanest thing in
the world from an API usage point of view. What do you think of it?
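P.S. Just to make the intent concrete: choosing the algorithm for
inverse() is essentially choosing it for the underlying solve()
step, which one can already spell out by hand. A rough sketch
(inverse_via_solve is a hypothetical helper of mine, not anything
in GiNaC, and the attached patch is the actual proposal):

#include <ginac/ginac.h>
using namespace GiNaC;

// Invert via solve() so that the elimination algorithm can be chosen.
matrix inverse_via_solve(const matrix & m, unsigned algo)
{
    const unsigned n = m.rows();
    matrix vars(n, n);
    for (unsigned r = 0; r < n; r++)
        for (unsigned c = 0; c < n; c++)
            vars(r, c) = symbol();
    matrix id = ex_to<matrix>(unit_matrix(n));
    return m.solve(vars, id, algo);
}

int main()
{
    symbol x("x");
    matrix mx {
        {x, 1},
        {1, x},
    };
    matrix inv = inverse_via_solve(mx, solve_algo::gauss);
    return 0;
}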