Hi Bernard, I just put this on the devel-list; maybe Christian or Alex (or someone else) have comments on the topic:
On Fri, 27 Oct 2000, Parisse Bernard wrote: [ ...talking about converting the expairseq-representation of add and mul to a sparse tensor representation by pulling out all of the numbers... ] I don't think there are efficiency problems there; maybe we lose a small constant factor, but that's not where time is critical. What do you think of the idea of adding member functions to provide read-only access to .rest and .coeff? That would keep my code more readable and keep member protection.
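To make the proposal concrete, here is a toy sketch of what such read-only inspectors could look like. This is not GiNaC's actual expairseq; the class layout and accessor names are assumptions chosen purely for illustration:

    // Toy model for illustration only -- NOT GiNaC's expairseq.  It merely
    // shows the shape of the proposed read-only interface to the
    // (rest, coeff) pairs and to the overall coefficient.
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>
    #include <ginac/ginac.h>

    struct pair_model {
        GiNaC::ex rest;   // symbolic part of one term, e.g. x
        GiNaC::ex coeff;  // its numeric coefficient, e.g. 2
    };

    class pairseq_model {
    public:
        pairseq_model(std::vector<pair_model> s, GiNaC::ex oc)
          : seq(std::move(s)), overall_coeff(std::move(oc)) {}

        // read-only inspectors as proposed; the members themselves stay private
        const GiNaC::ex & rest(std::size_t i) const  { return seq[i].rest; }
        const GiNaC::ex & coeff(std::size_t i) const { return seq[i].coeff; }
        const GiNaC::ex & get_overall_coeff() const  { return overall_coeff; }

    private:
        std::vector<pair_model> seq;   // the (rest, coeff) pairs
        GiNaC::ex overall_coeff;       // 0 for an add-like object, 1 for mul-like
    };

    int main()
    {
        GiNaC::symbol x("x"), y("y");
        // models 2*x + 3*y + 5 as {(x,2), (y,3)} with overall coefficient 5
        pairseq_model s({{x, 2}, {y, 3}}, 5);
        std::cout << s.rest(0) << " " << s.coeff(0) << " "
                  << s.get_overall_coeff() << std::endl;   // x 2 5
        return 0;
    }

The point is only the shape of the interface: callers can read each pair's rest and coeff and the overall coefficient, while the data members themselves remain protected.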
On Fri, 27 Oct 2000, Richard Kreckel wrote: Grudgingly so. We might wish to change the internal representation of add and mul, although this doesn't seem very likely at the moment. The .op() access method is the preferred one. I think that it can be done more elegantly than what I did (and also more correctly).
On Sat, 28 Oct 2000, Parisse Bernard wrote: I'm afraid that using e.g. is_exactly_of_type(obj.op(obj.nops()-1),numeric) instead of obj.overall_coeff would be just as broken if you changed the internal representation of add. I don't think the internal representation can be fully hidden when converting one representation into another. That's not a big issue as long as you keep all the dependent code well isolated and documented. And I believe that the most important thing for the future is ease of code maintenance, where readability is a key feature.
On Sat, 28 Oct 2000, Richard Kreckel wrote: Not really. We want to have it this way: the last one (obj.nops()-1, that is) is *supposed* to be the numeric overall coefficient, if it is different from the default coefficient (0 for add, 1 for mul). It is just as in Maple, except that we don't have a head part, so the counting is different. But the last operand returned is always the numeric part. That's a kind of contract, and some other pieces of code really rely on it. I guess I should add a check for this to the exams...
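A small sketch to make that contract concrete, using only the public interface (the is_exactly_a<numeric> spelling is assumed here; use whatever type check your GiNaC version provides):

    // For an add like 2*x + 3*y + 5 the last operand returned by op() is
    // expected to be the numeric overall coefficient; for x + y, where the
    // coefficient equals the default 0, no numeric operand appears at all.
    #include <iostream>
    #include <ginac/ginac.h>
    using namespace GiNaC;

    int main()
    {
        symbol x("x"), y("y");

        ex e1 = 2*x + 3*y + 5;
        ex last = e1.op(e1.nops()-1);
        std::cout << last << " "                               // expected: 5
                  << is_exactly_a<numeric>(last) << std::endl; // expected: 1

        ex e2 = x + y;                        // coefficient is the default 0 ...
        std::cout << e2.nops() << std::endl;  // ... so only 2 ops, both symbolic
        return 0;
    }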
On Sun, 29 Oct 2000, Parisse Bernard wrote: OK, but I'm not a Maple expert. In fact, I want to use C++ because there are some aspects of CAS languages that I do not like at all (and of course I don't want to use proprietary software anymore), like those you mention in your article (e.g. using lists of lists to represent complex data structures). As I said before, I find it more natural to learn the internal structure of symbolic objects than to learn conventions that are not intrinsic. At first it seems a better idea to have this sort of convention and allow internal representation changes, but I don't believe you can keep that working. Algorithms, at least efficient ones (and that's always what you need in a CAS), depend too much on the data structure. I've read a lot of recommendations about good programming in Stroustrup's book that always have exceptions when you deal with maths.
This would still leave us with the question of how to access the .coeff parts of expairseq::seq and expairseq::overall_coeff from outside. Is the foo.op(foo.nops()-1) approach something one should live with, or should we accept that others expect the representation to stay as it is and add inspector methods? I would really appreciate some input on this. [ ...talking about the merits of add and mul's representation... ]
I did not know about such a representation before reading the GiNaC documentation and source. I have experience with the HP calculator system, which represents symbolic objects as programs to be evaluated using a stack (for example '1+x' as SYMBOL 1 identifier_X + ;). This kind of representation is fairly easy to evaluate but more difficult to split into subtrees, because embedded algebraics were not allowed. And it accepts + as a binary operator only, which makes it rather difficult to code some operations, like a+d+b+c -> a+b+c+d. Moreover, there are a lot of operations allowed inside symbolics (/, INV, SQRT, SQ, ...), so that early simplifications like 1/INV(x) -> x are much harder to code than with your representation. Hence I adopted your representation, because it solves some of these issues. I don't think you will find the need to change the internal representation of these fundamental objects. The main problem I see currently is that you cannot bind objects to a symbol (I would like to do that, and I would like to be able to make assumptions on symbols as well). And I have to get a better understanding of your coding of operations like SIN, ... There is another problem I see with using CLN to compute e.g. transcendental functions when special values are found, e.g. ATAN(1), because symbolic values belong to GiNaC, not to CLN (or maybe I've missed something).
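For comparison, a minimal sketch of how the n-ary representation behaves in practice; note that the printed ordering is GiNaC's internal canonical order and need not be alphabetical:

    // add holds all terms in one flat sequence, so a+d+b+c and a+b+c+d become
    // the same canonical object, whereas a binary '+' would give two
    // different trees.
    #include <iostream>
    #include <ginac/ginac.h>
    using namespace GiNaC;

    int main()
    {
        symbol a("a"), b("b"), c("c"), d("d");
        ex e1 = a + d + b + c;
        ex e2 = a + b + c + d;
        std::cout << e1 << std::endl;               // one flat sum of four terms
        std::cout << e1.is_equal(e2) << std::endl;  // 1: same canonical form
        std::cout << e1.nops() << std::endl;        // 4, not a nested tree
        return 0;
    }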
Ah, yes. We cannot really do much here. All that happens automatically is a simple evaluation, if one thinks the resulting expression is "simpler". (In the case of atan(1), not doing so is clearly a bug and I'll fix it right now.) Following common jargon (e.g. R. J. Fateman in his article "Symbolic Mathematics System Evaluators" in Michael Wester's "Computer Algebra Systems: A Practical Guide") we can call this automatic process the "evaluator". We thought a useful convention would be that only fast transformations are done automatically: on container objects consisting of n things, this includes everything that goes like n*log(n) in complexity. Of course CLN does not have anything like Pi/4, because Pi doesn't fit into any of the algebraic domains there. So GiNaC simply tries to trap those arguments and give you a closed result in terms of well-known constants, including non-nested surds. Regards -richy. -- Richard Kreckel <Richard.Kreckel@Uni-Mainz.DE> <http://wwwthep.physik.uni-mainz.de/~kreckel/>
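A quick sketch of the evaluator trapping such a special argument, assuming the atan(1) fix mentioned above is in place (exact output may differ between versions):

    // The closed form comes back automatically, without any explicit
    // simplification call.
    #include <iostream>
    #include <ginac/ginac.h>
    using namespace GiNaC;

    int main()
    {
        ex e = atan(ex(1));
        std::cout << e << std::endl;  // expected: 1/4*Pi, exact, not a float
        return 0;
    }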