Dear Alejandro,

On Thu, 27 Jul 2006, Alejandro Limache wrote:
> The cases of inner products of physical quantities that I can recall always involve indices of the same dimension, but maybe in some general optimization or minimization problems one might have different dimensions.
It can quite easily get more hairy than that. In the Standard Model before symmetry breaking, the left-handed quark field has four indices that all live in different spaces. In dimensional regularization there can be indices that live in the same space, yet have a different dimension. The latter case is the main reason that we really do care about the dimension.
> 1) to get some simplifications so as to have fewer terms to compute numerically

You can always use an algebraic substitution:

    .subs(indexed(T, idx(wild(0), 3), idx(wild(1), 3))
          * indexed(D, idx(wild(0), 3), idx(wild(1), 3)) == P,
          subs_options::subs_algebraic)
> 2) to expand tensor expressions (combinations of sums and products) in indicial form (as is done, I think, in some libraries that use expression templates, like Blitz++) in order to reduce the loss of performance of C++ with respect to Fortran or C code.

Do you mean writing out the contractions, as in A.i*B.i -> A.1*B.1 + ... + A.3*B.3? That can be done with the function ex expand_dummy_sum(const ex & e, bool subs_idx).
> 3) to do some analytical differentiation of terms
>
> I think these types of things could make a code more general, powerful and efficient (faster).
Do you mean differentiation with respect to indexed objects? Yes, that certainly is very useful. It is not present in GiNaC at the moment. It is not so difficult, though, to write your own function to do this: just write a function that uses a type switch over the expression tree. Below is some code that I once wrote for this purpose. It is probably more complicated than what you would need.

Best wishes,
Chris

ex int_deriv(const ex& x, const ex& y)
{
    if (is_a<indexed>(x) && is_a<indexed>(y) && x.nops() == y.nops()) {
        // Differentiate the base object; every index slot then contributes
        // a metric tensor (for varidx pairs) or a delta tensor.
        ex result = int_deriv(x.op(0), y.op(0));
        if (result == 0)
            return 0;
        for (size_t i = 1; i < x.nops(); ++i) {
            if (is_a<varidx>(x.op(i)) && is_a<varidx>(y.op(i))) {
                ex yother = ex_to<varidx>(y.op(i)).toggle_variance();
                result *= lorentz_g(x.op(i), yother);
            } else
                result *= delta_tensor(x.op(i), y.op(i));
        }
        return result;
    } else if (is_a<function>(x) && x.nops() == 1) {
        // Derivative of a one-argument function; y may carry a replacement,
        // either as an indexed object or as a lst {replacement, derivative}.
        ex deriv, repl;
        if (is_a<indexed>(y)) {
            deriv = y.op(0);
            repl = y.op(1);
        } else if (is_a<lst>(y) && y.nops() == 2) {
            deriv = y.op(1);
            repl = y.op(0);
        } else {
            deriv = y;
            repl = 1;
        }
        if (is_a<function>(deriv)) {
            unsigned ser = ex_to<function>(deriv).get_serial();
            if (ser != ex_to<function>(x).get_serial())
                return 0;
            // nullify is a helper of my own, not part of GiNaC.
            return x.subs(x == repl) * nullify(x.op(0) == deriv.op(0));
        }
    } else if (is_a<mul>(x) || is_a<ncmul>(x) || is_a<lst>(x)
               || x.match(inp(wild(0), wild(1)))) {
        // Product rule; inp is my own inner-product function, also not GiNaC.
        ex result = 0;
        for (size_t i = 0; i < x.nops(); ++i) {
            ex factor = int_deriv(x.op(i), y);
            if (factor != 0)
                result += x.subs(x.op(i) == factor).expand();
        }
        return result;
    } else if (is_a<add>(x)) {
        // Linearity; report progress on very large sums.
        exvector result;
        for (size_t i = 0; i < x.nops(); ++i) {
            if (!(i % 1000) && x.nops() > 1000)
                cerr << i << "/" << x.nops() << endl;
            result.push_back(int_deriv(x.op(i), y));
        }
        return add(result);
    }
    return 0;
}