On 01/25/2018 11:10 PM, Richard B. Kreckel wrote:
Notice that the comment of matrix::gauss_elimination says that "the algorithm is ok for matrices with numeric coefficients but quite unsuited for symbolic matrices", and there are similar comments about Gauss elimination inside flags.h. Bareiss is the favored algorithm for symbolic matrices.
But your point seems to be that this comment is incorrect in the first place, since for a certain level of sparseness, Gauss would perform better. See below...
Right, I don't trust that comment; it runs contrary to my experience.
I also notice another inconsistency: matrix::fraction_free_elimination is prepared to cope with non-normal zeros, while the other algorithms are not. Maybe that should be harmonized (by making the other algorithms normalize their elements first).
This is probably because gauss_elimination was not widely used for symbolic matrices, and thus did not receive enough hardening.
Sure, but doing Gauss elimination is gambling with luck: for dense non-numeric matrices, Gauss is much worse!
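For reference, the trade-off comes from the one-step fraction-free (Bareiss) scheme, which is what matrix::fraction_free_elimination implements as far as I understand. Here is a minimal stand-alone sketch of that scheme on plain integers (illustration only, not GiNaC code, and with pivoting omitted):

```cpp
#include <vector>

// One-step Bareiss (fraction-free) elimination on an integer matrix.
// Every division below is exact, so intermediate entries stay integral;
// after elimination, a[n-1][n-1] equals the determinant.
long long bareiss_det(std::vector<std::vector<long long>> a)
{
    const int n = (int)a.size();
    long long prev = 1;  // pivot of the previous step (formally a[-1][-1] := 1)
    for (int k = 0; k < n - 1; ++k) {
        // (pivot search omitted for brevity; assumes a[k][k] != 0)
        for (int i = k + 1; i < n; ++i)
            for (int j = k + 1; j < n; ++j)
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) / prev;
        prev = a[k][k];
    }
    return a[n - 1][n - 1];
}
```

The division by prev is exact at every step, which is why Bareiss avoids the fraction blow-up that plain Gauss suffers on dense symbolic entries; on the other hand, the cross-multiplication update fills in zero entries aggressively, which is why Gauss can win on sufficiently sparse matrices.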
Do you have an idea how to improve the heuristics?
Yeah, my proposal is here: https://www.ginac.de/pipermail/ginac-list/2018-January/002174.html

BTW, it was my impression that Gauss elimination is not *that* bad for dense matrices. I'm attaching a test case which constructs a dense matrix with some random-ish polynomials and solves it via both Gauss and Bareiss elimination methods. Here are some typical results:

matrix size   gauss time   bareiss time   gauss/bareiss
2x2           0.00416815   0.00361022     1.15454
3x3           0.0195643    0.0141638      1.38129
4x4           0.0896266    0.0666258      1.34522
5x5           0.205633     0.185446       1.10885
6x6           0.730309     0.979617       0.745505
7x7           1.29834      3.14298        0.413093
8x8           2.46307      9.70648        0.253755
9x9           4.4289       (segfaults)
10x10         8.39434      (segfaults)
...

It's actually worse than I thought it was. I think I need to reconsider my proposal above; it seems that we need to use Gauss elimination for *all* the matrices.
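For readers who don't have the attachment: a comparison along these lines can be set up through GiNaC's public matrix::solve interface, which takes one of the solve_algo flags from flags.h. The following is a rough, untested sketch only; the Vandermonde-style test matrix and the time_solve helper are my own choices here, not the contents of the actual attached test case:

```cpp
#include <ginac/ginac.h>
#include <chrono>
#include <cstdio>
using namespace GiNaC;

// Time one call to matrix::solve with the given elimination algorithm.
static double time_solve(const matrix& A, const matrix& vars,
                         const matrix& b, unsigned algo)
{
    auto t0 = std::chrono::steady_clock::now();
    matrix sol = A.solve(vars, b, algo);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const unsigned n = 4;
    symbol x("x");
    matrix A(n, n), b(n, 1), vars(n, 1);
    for (unsigned i = 0; i < n; ++i) {
        vars(i, 0) = symbol();            // anonymous unknowns
        b(i, 0) = pow(x, i) + 1;
        for (unsigned j = 0; j < n; ++j)
            A(i, j) = pow(x + i + 1, j);  // Vandermonde-like, never singular
    }
    std::printf("gauss:   %g s\n", time_solve(A, vars, b, solve_algo::gauss));
    std::printf("bareiss: %g s\n", time_solve(A, vars, b, solve_algo::bareiss));
}
```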