Please pull from git://ffmssmsc.jinr.ru:443/varg/ginac.git master
Dear Jens,

Could you please pull from

  git://ffmssmsc.jinr.ru:443/varg/ginac.git master

The following changes since commit 4ee761760b3db8649b8b616256cd7466fe2cd033:

Jens Vollinga (1):
      Fixed various bugs in multivariate factorization.

are available:

Alexei Sheplyakov (21):
      expairseq::match(): remove the code which works around basic::match bug.
      match() (find()): use exmap (exset) to store matched subexpressions.
      G_numeric: put convergence/acceleration transformations into helper functions.
      G_numeric: use cl_N and int to manipulate numbers (instead of ex).
      [nitpick] don't use int instead of std::size_t.
      [nitpick] inifcns_nstdsums: don't use int instead of std::size_t.
      [nitpick] power::expand_add(): don't use int instead of std::size_t.
      [BUGFIX] parser.hpp: fix include guard so the header is actually usable.
      parser: map input strings onto arbitrary expressions (not only symbols).
      parser: allow read/write access to symbol table and strictness.
      parser: change order of the constructor optional arguments.
      ginac.h: include parser.hpp
      [bugfix]: parser::parse_unary_expr() parses '-a-b' correctly now.
      [bugfix] parser::parse_literal_expr(): don't forget to consume the token...
      parser: add necessary checks to operator() to stop accepting nonsense.
      Parser can parse (some) floating point numbers now.
      Use the new parser in the ex(const string&, lst&) ctor.
      Document the new parser, provide an example.
      check: time_parser.cpp: don't run the same benchmark twice.
      Wipe out the old (bison/flex generated) parser.
      Implemented modular GCD algorithm for univariate polynomials.

 check/Makefile.am | 19 ++-
 check/error_report.hpp | 17 ++
 check/exam_cra.cpp | 120 ++++++++++
 check/exam_mod_gcd.cpp | 85 +++++++
 check/match_bug.cpp | 66 +++++
 check/parser_bugs.cpp | 94 ++++++++
 check/time_parser.cpp | 40 +---
 doc/examples/Makefile.am | 2 +-
 doc/examples/derivative.cpp | 30 +++
 doc/examples/ginac-examples.texi | 7 +
 doc/tutorial/ginac.texi | 102 ++++++--
 ginac/Makefile.am | 25 ++-
 ginac/basic.cpp | 23 +-
 ginac/basic.h | 2 +-
 ginac/ex.cpp | 20 +--
 ginac/ex.h | 8 +-
 ginac/expairseq.cpp | 20 +--
 ginac/expairseq.h | 2 +-
 ginac/ginac.h | 6 +
 ginac/indexed.cpp | 4 +-
 ginac/inifcns_nstdsums.cpp | 488 ++++++++++++++++++++-----------------
 ginac/input_lexer.h | 69 ------
 ginac/input_lexer.ll | 211 ----------------
 ginac/input_parser.yy | 201 ----------------
 ginac/matrix.cpp | 2 +-
 ginac/mul.cpp | 22 +-
 ginac/ncmul.cpp | 8 +-
 ginac/parser/lexer.cpp | 12 +-
 ginac/parser/parse_context.cpp | 21 +-
 ginac/parser/parse_context.hpp | 9 +-
 ginac/parser/parser.cpp | 45 +++--
 ginac/parser/parser.hpp | 16 +-
 ginac/parser/parser_compat.cpp | 49 ++++
 ginac/polynomial/cra_garner.cpp | 88 +++++++
 ginac/polynomial/cra_garner.hpp | 12 +
 ginac/polynomial/debug.hpp | 29 +++
 ginac/polynomial/gcd_euclid.tcc | 45 ++++
 ginac/polynomial/mod_gcd.cpp | 165 +++++++++++++
 ginac/polynomial/mod_gcd.hpp | 11 +
 ginac/polynomial/normalize.tcc | 93 +++++++
 ginac/polynomial/remainder.tcc | 116 +++++++++
 ginac/polynomial/ring_traits.hpp | 32 +++
 ginac/polynomial/upoly.hpp | 129 ++++++++++
 ginac/polynomial/upoly_io.cpp | 53 ++++
 ginac/polynomial/upoly_io.hpp | 12 +
 ginac/power.cpp | 34 ++-
 ginac/structure.h | 2 +-
 ginac/wildcard.cpp | 2 +-
 ginac/wildcard.h | 2 +-
 ginsh/ginsh_parser.yy | 18 +-
 tools/Makefile.am | 2 +-
 51 files changed, 1792 insertions(+), 898 deletions(-)
 create mode 100644 check/error_report.hpp
 create mode 100644 check/exam_cra.cpp
 create mode 100644 check/exam_mod_gcd.cpp
 create mode 100644 check/match_bug.cpp
 create mode 100644 check/parser_bugs.cpp
 create mode 100644 doc/examples/derivative.cpp
 delete mode 100644 ginac/input_lexer.h
 delete mode 100644 ginac/input_lexer.ll
 delete mode 100644 ginac/input_parser.yy
 create mode 100644 ginac/parser/parser_compat.cpp
 create mode 100644 ginac/polynomial/cra_garner.cpp
 create mode 100644 ginac/polynomial/cra_garner.hpp
 create mode 100644 ginac/polynomial/debug.hpp
 create mode 100644 ginac/polynomial/gcd_euclid.tcc
 create mode 100644 ginac/polynomial/mod_gcd.cpp
 create mode 100644 ginac/polynomial/mod_gcd.hpp
 create mode 100644 ginac/polynomial/normalize.tcc
 create mode 100644 ginac/polynomial/remainder.tcc
 create mode 100644 ginac/polynomial/ring_traits.hpp
 create mode 100644 ginac/polynomial/upoly.hpp
 create mode 100644 ginac/polynomial/upoly_io.cpp
 create mode 100644 ginac/polynomial/upoly_io.hpp

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
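For reference, a minimal sketch of the reworked string parsing in use. It relies only on the ex(const string&, lst&) constructor mentioned in the series above; the symbols and the input expression are made up for illustration.

#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
{
    symbol x("x"), y("y");
    // The string is handed to the new hand-written parser;
    // "x" and "y" in the input are bound to the symbols above.
    ex e("x^2+2*x*y+y^2", lst(x, y));
    std::cout << e.subs(y == 1) << std::endl;  // prints x^2+2*x+1
    return 0;
}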
Dear Alexei,

what are your future plans for the gcd code in these new patches? At the moment the copyright/GPL headings are missing. You are using the hpp extension now for some header files. Why? And why are there files with a tcc extension?

Also, obviously some code functionality is duplicated between polynomial/* and factor.cpp. It is no problem at the moment since factor.cpp is still quite far from being finished. But in order to merge the duplicated code at some point in time, I'd just like to know what your plans with the polynomial/* code are.

Regards,
Jens

Alexei Sheplyakov schrieb:
Could you please pull from
git://ffmssmsc.jinr.ru:443/varg/ginac.git master
The following changes since commit 4ee761760b3db8649b8b616256cd7466fe2cd033: Jens Vollinga (1): Fixed various bugs in multivariate factorization.
Hi! On Mon, Sep 22, 2008 at 10:34:18AM +0200, Jens Vollinga wrote:
what are your future plans for the gcd code in these new patches?
The plan is to replace the PRS algorithm with something reasonable, i.e. with the extended Zassenhaus algorithm for multivariate polynomials (I'm working on it now) and a modular gcd algorithm for univariate ones. Also I'd like to implement a more efficient representation of polynomials and rational functions.
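To sketch the idea behind the modular approach (this is not the code from the patch; it is a generic illustration in plain long arithmetic, so the moduli and their product must stay well below the size of a long): compute gcd images modulo several word-sized primes and recombine the integer coefficients with the Chinese remainder theorem, which is what cra_garner in the series is for. The recombination step alone looks roughly like this:

#include <iostream>
#include <vector>

// Modular inverse of a modulo m (gcd(a, m) = 1), via extended Euclid.
static long inverse_mod(long a, long m)
{
    long t = 0, newt = 1, r = m, newr = a % m;
    while (newr != 0) {
        long q = r / newr;
        long tmp = t - q*newt; t = newt; newt = tmp;
        tmp = r - q*newr; r = newr; newr = tmp;
    }
    return (t % m + m) % m;
}

// Garner's algorithm: given residues x[i] modulo pairwise coprime m[i],
// return the unique value v with 0 <= v < m[0]*...*m[n-1], v = x[i] (mod m[i]).
static long chinese_remainder(const std::vector<long>& x, const std::vector<long>& m)
{
    std::vector<long> mixed(x.size());  // mixed-radix digits
    for (std::size_t i = 0; i < x.size(); ++i) {
        long v = x[i] % m[i];
        for (std::size_t j = 0; j < i; ++j)
            v = ((v - mixed[j]) % m[i] + m[i]) * inverse_mod(m[j] % m[i], m[i]) % m[i];
        mixed[i] = v;
    }
    long result = 0, prod = 1;
    for (std::size_t i = 0; i < x.size(); ++i) {
        result += mixed[i] * prod;
        prod *= m[i];
    }
    return result;
}

int main()
{
    std::vector<long> residues, moduli;
    residues.push_back(2); moduli.push_back(5);  // v = 2 (mod 5)
    residues.push_back(3); moduli.push_back(7);  // v = 3 (mod 7)
    std::cout << chinese_remainder(residues, moduli) << std::endl;  // prints 17
    return 0;
}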
At the moment the copyright/GPL headings are missing.
Do we really have to put legalistic boilerplate into *every* file?
You are using the hpp extension now for some header files. Why?
AFAIK this is the standard naming convention for C++ headers.
And why are there files with a tcc extension?
These files contain 'simple' functions implemented as templates. Whenever I need some particular function I #include the corresponding .tcc file. The 'tcc' suffix prevents automake from compiling those files on their own.
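A toy illustration of the pattern (file names and contents are made up): the declarations sit in an ordinary header, the template bodies sit in a .tcc file that the header #includes at the end, and since the .tcc is never listed as a source file it is never compiled as a translation unit of its own.

// --- toy_gcd.hpp (illustrative) ---
#ifndef TOY_GCD_HPP
#define TOY_GCD_HPP

template<typename T> T gcd_euclid(T a, T b);

#include "toy_gcd.tcc"  // pull in the template definitions

#endif // TOY_GCD_HPP

// --- toy_gcd.tcc (illustrative) ---
template<typename T> T gcd_euclid(T a, T b)
{
    while (b != T(0)) {
        T r = a % b;
        a = b;
        b = r;
    }
    return a;
}

A user just includes toy_gcd.hpp and instantiates gcd_euclid<int>, gcd_euclid<cl_I>, etc. as needed.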
Also, obviously some code functionality is duplicated between polynomial/* and factor.cpp.
\begin{nitpick} Some code in factor.cpp duplicates CLN functionality (it does not duplicate CLN's efficiency, though), so I don't think "duplication" is a problem. \end{nitpick}
It is no problem at the moment since factor.cpp is still quite far from being finished.
Also factor.cpp is quite far from being optimal. In particular, univariate polynomials and operations on them are implemented in a very inefficient way.

struct UniPoly {
    cl_modint_ring R;
    list<Term> terms;  // highest exponent first

Why list?

1. It wastes 75% of memory for nothing good, because each node of a list needs to store pointers to the previous and next nodes.
2. It spoils data locality.
3. Access to terms is O(n).

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
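For contrast, here is a dense, vector-based representation in the spirit of the new polynomial/upoly.hpp code. This is only a toy sketch with plain long residues (the real code uses CLN types), but it shows what the vector layout buys: contiguous coefficients, O(1) access by exponent, and no per-term pointer overhead.

#include <iostream>
#include <vector>

// Sketch only: a univariate polynomial over Z/p as a dense coefficient
// vector, a[i] = coefficient of x^i.  p must be small enough that p*p
// fits into a long.
typedef std::vector<long> toy_umodpoly;

static long inv_mod(long a, long p)
{
    long t = 0, newt = 1, r = p, newr = a % p;
    while (newr != 0) {
        long q = r / newr;
        long tmp = t - q*newt; t = newt; newt = tmp;
        tmp = r - q*newr; r = newr; newr = tmp;
    }
    return (t % p + p) % p;
}

static void trim(toy_umodpoly& a)
{
    while (!a.empty() && a.back() == 0)
        a.pop_back();
}

// a := a mod b in (Z/p)[x]; b must be non-zero (non-empty).
static void rem_mod(toy_umodpoly& a, const toy_umodpoly& b, long p)
{
    const long lb_inv = inv_mod(b.back(), p);
    while (a.size() >= b.size()) {
        const long q = (a.back() * lb_inv) % p;
        const std::size_t shift = a.size() - b.size();
        for (std::size_t i = 0; i < b.size(); ++i)
            a[i + shift] = ((a[i + shift] - q*b[i]) % p + p) % p;
        trim(a);  // leading coefficient is now 0 mod p, so degree drops
    }
}

// Monic gcd in (Z/p)[x].  If the result is constant for a prime p that
// does not divide the leading coefficients, the (primitive parts of the)
// original integer polynomials are already coprime -- the case that
// matters most for GiNaC's gcd.
toy_umodpoly gcd_mod_p(toy_umodpoly a, toy_umodpoly b, long p)
{
    trim(a); trim(b);
    while (!b.empty()) {
        rem_mod(a, b, p);
        a.swap(b);
    }
    if (!a.empty()) {
        const long c = inv_mod(a.back(), p);
        for (std::size_t i = 0; i < a.size(); ++i)
            a[i] = (a[i] * c) % p;
    }
    return a;
}

int main()
{
    const long p = 7;
    // a = x^2 - 1, b = x^2 + 2x + 1 over Z/7; gcd is x + 1.
    toy_umodpoly a, b;
    a.push_back(p - 1); a.push_back(0); a.push_back(1);
    b.push_back(1);     b.push_back(2); b.push_back(1);
    toy_umodpoly g = gcd_mod_p(a, b, p);
    for (std::size_t i = 0; i < g.size(); ++i)
        std::cout << g[i] << (i + 1 < g.size() ? " " : "\n");  // prints "1 1"
    return 0;
}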
Hi, Alexei Sheplyakov schrieb:
Also I'd like to implement a more efficient representation of polynomials and rational functions.
please go ahead if you have a good idea! I couldn't come up with one yet, because one usually ends up duplicating a lot of complicated code.
At the moment the copyright/GPL headings are missing.
Do we really have to put legalistic boilerplate into *every* file?
Don't know. Ask the lawyers. Maybe a short reference to some text file containing the licences etc. could do the trick.
You are using the hpp extension now for some header files. Why?
AFAIK this is the standard naming convention for C++ headers.
It is not the standard naming convention for GiNaC. It is just creating a mess.
And why are there files with a tcc extension?
These files contain 'simple' functions implemented as templates. Whenever I need some particular function I #include the corresponding .tcc file. The 'tcc' suffix prevents automake from compiling those files on their own.
So, it is like a header?!?
Also factor.cpp is quite far from being optimal. In particular, univariate polynomials and operations on them are implemented in a very inefficient way.
I know, thanks, see the email exchange with Richy. UniPoly is going to be removed and replaced by cln stuff. Again, UniPoly is there because I didn't understand the necessary algorithms and their demands when I started to program the code. I wanted to be flexible, not efficient. Something like umodpoly will do the job. That was the reason why I asked about the future plans of the gcd code: I would then directly use umodpoly and would have to add the additional functionality I need to upoly.cpp. But if this code is still going to change heavily, then I will wait and make a copy-implementation in factor.cpp for the time being. I usually have only a few hours per week to work on factor.cpp, so the progress will be as slow as it was in the past and I can wait for the polynomial/* code to mature.

Regards,
Jens
Hello, On Mon, Sep 22, 2008 at 05:12:34PM +0200, Jens Vollinga wrote:
You are using the hpp extension now for some header files. Why?
AFAIK this is the standard naming convention for C++ headers.
It is not the standard naming convention for GiNaC. It is just creating a mess.
I feel a bit silly -- I don't quite like arguing about names and such. Also I'm not particularly bound to any naming scheme, so let's discuss something more interesting and useful instead.
Also factor.cpp is quite far from being optimal. In particular, univariate polynomials and operations on them are implemented in a very inefficient way.
I know, thanks, see the email exchange with Richy.
I'm aware of that. My point is a bit different: I was explaining why I didn't re-use the code from factor.cpp.
Something like umodpoly will do the job. That was the reason why I asked about the future plans of the gcd code: I would then directly use umodpoly and would have to add the additional functionality I need to upoly.cpp.
I was going to convert factor.cpp myself, but I didn't do that because
- it might conflict with your changes, so this needs to be coordinated,
- if my patches were rejected, that would be a waste of time.
But if this code is still going to change heavily,
I don't think upoly (and associated functions) will change. Anyway, I'll update all call sites in case of any incompatible change(s).

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
The plan is to replace the PRS algorithm with something reasonable, i.e. with the extended Zassenhaus algorithm for multivariate polynomials (I'm working on it now) and a modular gcd algorithm for univariate ones.
Also I'd like to implement a more efficient representation of polynomials and rational functions.
Did you consider the option of writing GiNaC::ex to giac::gen converters and using the giac factorization and gcd code instead? I guess it would save you a lot of time and headaches (I have spent an extensive amount of time on these functions, I know what I'm speaking of) without much loss of performance, since the initial and final conversions do not take much time with respect to these algorithms. Moreover, GiNaC would have access to more advanced calculus functions like integration, limits, etc.
Hello, On Mon, Sep 22, 2008 at 07:34:22PM +0200, Bernard Parisse wrote:
Did you consider the option of writing GiNaC::ex to giac::gen converters and using the giac factorization and gcd code instead?
I've tried to re-use the gcd and factorization code from giac. But sanitizing that code was very boring, and I gave up. I guess I'll re-use (some) ideas instead.
I guess it would save you a lot of time and headaches
At the expense of other headaches (such as messy error handling), unfortunately.
(I have spent an extensive amount of time on these functions, I know what I'm speaking of)
I'm aware of that.
without much loss of performance since the initial and final conversions do not take much time with respect to these algorithms.
That's not quite true. First of all, the conversion (at least) doubles the memory footprint. Secondly, GMP is quite slow when operating on small (i.e. native) integers (because it always allocates them on the heap).

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
Alexei Sheplyakov wrote:
Hello,
On Mon, Sep 22, 2008 at 07:34:22PM +0200, Bernard Parisse wrote:
Did you consider the option of writing GiNaC::ex to giac::gen converters and using the giac factorization and gcd code instead?
I've tried to re-use the gcd and factorization code from giac. But sanitizing that code was very boring, and I gave up. I guess I'll re-use (some) ideas instead.
You never contacted me about using giac. Anyway, I don't see why you would have to "sanitize" any code from giac: when I'm using a library, like GMP or NTL or whatever, I don't look at the coding style, I look at the performance and whether I can easily call the functions. Using the giac factor and gcd code is easy (using the symbolic representation; it's a little bit more difficult if you work with polynomials).

// -*- compile-command: "g++ -g essai.cc -lgmp -lgiac" -*-
#include "giac/giac.h"

using namespace std;
using namespace giac;

int main(int ARGC, char *ARGV[]){
  signal(SIGINT,giac::ctrl_c_signal_handler);
  giac::child_id=1;
  context ct;
  gen x("x",&ct);
  gen g(pow(x,4)-1);
  gen gf=factor(g,false,&ct);
  gen gg=gcd(g,pow(x,4)+2*pow(x,2)+1);
  cerr << "Factorization:" << gf << " GCD:" << gg << endl;
}

Building a converter from ex to gen should not be hard (perhaps 1 or 2 weeks of work, I might even consider doing it myself to be able to use some GiNaC functions someday).
I guess it would save you a lot of time and headaches
At the expense of other headaches (such as messy error handling), unfortunately.
What's wrong with giac error handling? Anyway, if you believe you will have fewer headaches getting good performance for gcd and factorization (like the Lewis gcd benchmarks), I will certainly not lose time trying to convince you otherwise; I already lost too much time trying to convince the Sage developers. I just find it stupid that people prefer to redevelop something already working and C++-usable.
(I have spent an extensive amount of time on these functions, I know what I'm speaking of)
I'm aware of that.
without much loss of performance since the initial and final conversions do not take much time with respect to these algorithms.
That's not quite true. First of all, the conversion (at least) doubles the memory footprint. Secondly, GMP is quite slow when operating on small (i.e. native) integers (because it always allocates them on the heap).
If you had a look at giac, then you have perhaps observed that giac::gen uses hardware integers for small integers (_INT_); GMP is used for ints larger than 231 (_ZINT). Moreover most modular computations are done with int, not gen nor GMP.
Hello, On Mon, Sep 22, 2008 at 10:34:00PM +0200, Bernard Parisse wrote:
when I'm using a library, like GMP or NTL or whatever, I don't look at the coding style,
If the library does everything I need and is well maintained I don't care about code ugliness either (but I haven't seen any well maintained ugly code yet). But there are no such libraries in the real world. Typically some necessary functions/features are missing and nobody (except myself) is going to add them. The same applies to fixing bugs. That's why I avoid ugly code.
What's wrong with giac error handling?
giac always throws std::runtime_error. How do I distinguish what exactly was the reason for the error?
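To illustrate the point: with even a tiny exception hierarchy the caller can react by type instead of parsing what(). The types below are made up for this sketch (they are not giac's; giac throws plain std::runtime_error).

#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical exception types, for the sake of the example only.
struct gcd_failed : public std::runtime_error {
    explicit gcd_failed(const std::string& msg) : std::runtime_error(msg) {}
};
struct interrupted : public std::runtime_error {
    explicit interrupted(const std::string& msg) : std::runtime_error(msg) {}
};

// Stand-in for a library routine that can fail in a recoverable way.
static int toy_gcd(int a, int b)
{
    if (a == 0 && b == 0)
        throw gcd_failed("gcd(0, 0) requested");
    while (b != 0) {
        int r = a % b;
        a = b;
        b = r;
    }
    return a < 0 ? -a : a;
}

int main()
{
    try {
        std::cout << toy_gcd(0, 0) << std::endl;
    } catch (const gcd_failed&) {
        // Recoverable for this caller: fall back and keep going.
        std::cout << "falling back to gcd = 1" << std::endl;
    } catch (const interrupted&) {
        throw;  // genuinely fatal, let it propagate
    }
    return 0;
}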
I just find it stupid that people prefer to redevelop something already working and C++-usable.
\begin{sarcasm} Why did you write giac then? \end{sarcasm}
If you had a look at giac, then you have perhaps observed that giac::gen uses hardware integers for small integers (_INT_)
giac::gen wastes at least 16 bytes to store that _INT_:

class gen {
public:
  short int subtype;  // 2 bytes
  short int type;     // yet another 2 bytes
  int * ref_count;    // yet another 8 bytes to store the pointer, and extra 4 to store the value
  union {             // atomic types
    int val;              // immediate int (type _INT_)
    double _DOUBLE_val;   // immediate float (type _DOUBLE_)
    mpz_t * _ZINTptr;     // long int (type _ZINT)
    // So the union is aligned at 8 bytes, but only 1 of them is used

[skipped the rest because it's way too ugly]
GMP is used for ints larger than 231 (_ZINT).
Wonderful! So giac::gen uses 16 bytes to store an int8_t, and 24 bytes to store an int32_t. I think this is absolutely unacceptable.
Moreover most modular computations are done with int, not gen nor GMP.
However, public functions take gen as input.

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
If the library does everything I need and is well maintained I don't care
about code ugliness either (but I haven't seen any well maintained ugly code yet). But there are no such libraries in the real world. Typically some necessary functions/features are missing and nobody (except myself) is going to add them. The same applies to fixing bugs. That's why I avoid ugly code.
Why do you believe I would not fix a bug, add a function or whatever is meaningful? The right question you should ask yourself is how much time you will spend doing it better/faster. Is that time really worth it?
What's wrong with giac error handling?
giac always throws std::runtime_error. How do I distinguish what exactly was the reason for the error?
The reason is somewhat described in the what() method. Inside a call to gcd or factor with meaningful arguments, if you get a std::runtime_error, it will be a bug; that's why I don't see any reason to bother with a more complex error scheme.
I just find it stupid that people prefer to redevelop something already working and C++-usable.
\begin{sarcasm} Why did you write giac then? \end{sarcasm}
Because there was no C++ CAS library available. I mean not just a symbolic library, like GiNaC, but also one with gcd, factorization, integration, calculus, etc. The difference is the N between GiNaC and giac.
If you had a look at giac, then you have perhaps observed that giac::gen uses hardware integers for small integers (_INT_)
giac::gen wastes at least 16 bytes to store that _INT_
sizeof(gen)=16, sizeof(int)=4. But you won't suffer from it for intermediate computations, since for example modular 1-d gcd is done with an array of ints. The extra space is only required for the initial and final data, which is negligible for non-trivial gcd and factorization.
GMP is used for ints larger than 231 (_ZINT).
Wonderful! So giac::gen uses 16 bytes to store an int8_t, and 24 bytes to store an int32_t. I think this is absolutely unacceptable.
It's a typo, 231 has no meaning, it is of course 2^31 (2**31). Otherwise I could easily improve the timings you can see at http://www-fourier.ujf-grenoble.fr/~parisse/giac/benchmarks/benchmarks.html
Unfortunately it won't be that easy :-)
Moreover most modular computations are done with int, not gen nor GMP.
However, public functions take gen as input.
I don't see why you would need to call functions that do not take gen as input, since the GiNaC representation for polynomials is symbolic; but many polynomial functions are also exported, and, if not, I can just add them to a header if someone needs them. I keep thinking it could be interesting to have some complementarity between GiNaC and giac.
Hello! On Tue, Sep 23, 2008 at 06:10:10PM +0200, Bernard Parisse wrote:
Why do you believe I would not fix a bug
I've already reported (at least) 2 bugs:

1. giac::gen uses 24 bytes to store a plain integer. As a consequence a lot of memory is wasted when storing polynomials. Instead of fixing it you argue that the 6x overhead is "negligible".

2. giac throws std::runtime_error on every error condition, so I need to parse .what() in order to determine the reason for an error (and continue if it's not fatal *for my program*). You refused to fix it because you "don't see any reason to bother with a more complex error scheme".

Given such an attitude I don't expect you'll accept any patches, let alone fix these (and other) issues yourself.

Anyway, here are some more bug reports:

3. giac (version 0.8.0) fails to build from source. See the attached configuration and compilation logs (config.log.bz2 and build.log, respectively).

4. Please don't put hard links into the tarball. Not every file system supports them (even on UNIX), so unarchiving fails. Figuring out what's going on is a bit annoying.
add a function or whatever is meaningful?
The notion of `meaningful' is very subjective. For example, I think memory efficiency and proper error handling are mandatory; you have a different opinion.
The right question you should ask yourself is how much time you will spend doing it better/faster.
That question is certainly incorrect. The right questions are:

1. What is the bottleneck in my calculations?
   a) Computation of GCDs when the polynomials in question are relatively prime.
   b) Unnecessary memory allocations/deallocations.
   c) Memory overhead due to non-optimal data representation.

2. What should I do to fix that?
   a) Implement modular GCD algorithms.
   b) Wipe out GiNaC::numeric and use proper CLN types instead. Allocating 40 bytes on the heap to store a hardware integer (or floating point) number is just stupid.
   c) Need to think (and experiment) about that.

3. Should I bother to be faster than NTL, Singular, CoCoA, giac (you name it)? No. As long as GCD computation is no longer a bottleneck, that is.

4. Should I re-use some existing code? That would be nice, but every existing polynomial library happens to have at least one of the following serious drawbacks:
   a) The API assumes polynomial arithmetic to be the center of the Universe. No doubt, GCD and factorization are important, but for me they are just tools.
   b) The code trades memory efficiency for speed; for my problems it's appropriate to do it the other way around.

5. (Last, but not least) Should I fiddle with third-party software which even fails to build from source? No.
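Regarding 2b, a trivial example of what "use proper CLN types" means in practice (nothing GiNaC-specific here): cln::cl_I keeps small values immediate, i.e. without a heap allocation, and transparently switches to bignums when needed.

#include <cln/integer.h>
#include <cln/integer_io.h>
#include <iostream>

int main()
{
    // Small values are stored immediately inside the cl_I word...
    cln::cl_I a = 123456;
    cln::cl_I b = 654321;
    std::cout << cln::gcd(a, b) << std::endl;  // prints 3
    // ...and the very same type scales to bignums when needed.
    cln::cl_I big = "1000000000000000000000000000001";
    std::cout << cln::gcd(big, a) << std::endl;
    return 0;
}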
The reason is somewhat described in the what() method.
The need to parse what() is exactly what I call (to put it very mildly) "messy error handling".
Inside a call to gcd or factor with meaningful arguments, if you get a std::runtime_error, it will be a bug,
It's not fatal at all. I can just continue the calculation with non-factored polynomials (non-canonicalized rational expressions).
\begin{sarcasm} Why did you write giac then? \end{sarcasm}
Because there was no C++ CAS library available.
What was wrong with non-C++ ones?
I mean not just a symbolic library, like GiNaC, but also one with gcd, factorization, integration, calculus, etc.
You've re-written half of CLN and GiNaC from scratch (in a somewhat inefficient way) instead of adding missing functionality. Apparently that was not "stupid". \begin{sarcasm} Oh, wait. I got it. It's not stupid when *you* "redevelop something already existent and C++-usable". On the other hand, if somebody else does the same thing, it's definitely stupid! \end{sarcasm}
giac::gen wastes at least 16 bytes to store that _INT_
sizeof(gen)=16, sizeof(int)=4.
That's architecture dependent.

$ uname -m
x86_64
$ cat test.cc
struct gen {
    short int t;
    short int st;
    int* rc;
    union {
        int i;
        double d;
        void* p;
    };
};

int main()
{
    return sizeof(struct gen);
}
$ g++ test.cc
$ ./a.out
$ echo $?
24
But you won't suffer from it for intermediate computations, since for example modular 1-d gcd is done with an array of ints.
That's not quite true, since giac stores those arrays in a very inefficient way.
The extra space is only required for the initial and final data, which is negligible for non-trivial gcd and factorization.
Actually, this trivial GCD case is the main reason why I bother to rewrite the GiNaC GCD code.
It's a typo, 231 has no meaning, it is of course 2^31 (2**31).
Either way 20 bytes are wasted for nothing good at all.
Otherwise I could easily improve the timings you can see at http://www-fourier.ujf-grenoble.fr/~parisse/giac/benchmarks/benchmarks.html
First of all, those timings are not very useful. It's (relatively) easy to design a GCD algorithm which works well on certain types of inputs, but it's very difficult to make one which works reasonably on any input. Secondly, I'm convinced you can improve them (at least on x86_64) by making gen more memory efficient, i.e.

class gen {
    int type_tag;
    int refcount;
    union {
        long i_val;
        double d_val;
        // ...
        void* ptr;
    };
};

(There's still a 2x memory overhead, but that's certainly better than the current 6x one.) This can give some speedup even if your inputs are small enough: using less memory gives more chances to fit into the CPU cache(s).
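For what it's worth, a compact layout along these lines does come out at 16 bytes on a typical x86_64/LP64 box; a quick check in the same spirit as the test.cc above (this struct is just the sketch made compilable, not giac code):

#include <iostream>

struct compact_gen {
    int type_tag;   // 4 bytes
    int refcount;   // 4 bytes
    union {         // 8 bytes, 8-byte aligned
        long   i_val;
        double d_val;
        void*  ptr;
    };
};

int main()
{
    std::cout << sizeof(compact_gen) << std::endl;  // prints 16 on x86_64/LP64
    return 0;
}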
I don't see why you would need to call functions that do not take gen as input, since the GiNaC representation for polynomials is symbolic,
That's not true any more.

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
Dear Jens,

Could you please either

a) pull from

   git://ffmssmsc.jinr.ru:443/varg/ginac.git master

or

b) tell me what exactly is wrong with my patches (to make it absolutely clear, I mean: what's wrong with the actual code) so I can fix them and get them merged into the official repository.

Sorry for being so annoying,
Alexei

-- 
All science is either physics or stamp collecting.
Dear Alexei,

actually, I was just writing an email to you about that ... :-)

Alexei Sheplyakov schrieb:
a) pull from
git://ffmssmsc.jinr.ru:443/varg/ginac.git master
will do this (with rebase!) as soon as the following issue is settled.
b) tell me what exactly is wrong with my patches (to make it absolutely clear, I mean: what's wrong with the actual code) so I can fix them and get them merged into the official repository.
Alexei Sheplyakov schrieb:
On Mon, Sep 22, 2008 at 05:12:34PM +0200, Jens Vollinga wrote:
You are using the hpp extension now for some header files. Why? AFAIK this is the standard naming convention for C++ headers. It is not the standard naming convention for GiNaC. It is just creating a mess.
I feel a bit silly -- I don't quite like arguing about names and such. Also I'm not particularly bound to any naming scheme, so let's discuss something more interesting and useful instead.
I agree, it feels silly to argue about that. But still, I think we should keep the source code in a uniform look. Would it hurt you much to rename *.hpp and *.tcc into *.h?

Regards,
Jens
Hello, On Tue, Sep 30, 2008 at 09:44:39AM +0200, Jens Vollinga wrote:
will do this (with rebase!) as soon as the following issue is settled.
This time it's a real merge anyway (as opposed to a fast-forward), since you've already published a commit which is not in my *public* repository.
I agree, it feels silly to argue about that. But still, I think we should keep the source code in a uniform look. Would it hurt you much to rename *.hpp and *.tcc into *.h?
I'll stick to '*.cpp' and '*.h' naming in the future. Renaming already existing files is a bit messy, though, because I've got a number of bug fixes and improvements which touch those '*.tcc' and '*.hpp' files. Hence I propose to leave them as is (at least for now). If you think it's absolutely necessary to rename them let's do it after merging those (to be published really soon) patches. What do you think?

Best regards,
Alexei

-- 
All science is either physics or stamp collecting.
Dear Alexei, Alexei Sheplyakov schrieb:
I'll stick to '*.cpp' and '*.h' naming in the future. Renaming already existing files is a bit messy, though, because I've got a number of bug fixes and improvements which touch those '*.tcc' and '*.hpp' files. Hence I propose to leave them as is (at least for now). If you think it's absolutely necessary to rename them let's do it after merging those (to be published really soon) patches.
What do you think?
no problem to rename them later when the code has stabilized.

Regards,
Jens