View Issue Details
| Field | Value |
|---|---|
| ID | 0001167 |
| Project | OpenFOAM |
| Category | Bug |
| View Status | public |
| Date Submitted | 2014-02-19 09:07 |
| Last Update | 2015-03-08 22:38 |
| Reporter | matthias |
| Assigned To | henry |
| Priority | none |
| Severity | feature |
| Reproducibility | N/A |
| Status | resolved |
| Resolution | fixed |
| Platform | Linux |
| OS | Other |
| OS Version | (please specify) |
| Summary | 0001167: build system improvement |
Description

Hi OpenFOAM Team,

this is not a bug but a suggestion for improving the OpenFOAM build system. OpenFOAM is often used on HPC systems that already provide the additional libraries shipped with it. I would suggest adding a mechanism that lets the admin/user choose whether pre-installed or bundled libraries are used (something like the existing OPENMPI vs. SYSTEMOPENMPI choice). On HPC systems this is normally handled through a module environment. The feature would be beneficial for site-wide installations of the mpfr, gmp, CGAL and Boost libraries. At the moment I change the ARCH_PATHs by hand, so this is not an urgent issue. Maybe something for the next major release?

Best regards
Matthias
Tags: No tags attached.
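To make the suggestion concrete, below is a minimal sketch of the kind of switch being asked for, written as something that could live in "etc/prefs.sh" (which "bashrc" already sources if present). The *_ARCH_PATH names follow the pattern used by the stock config files; the switch itself, the paths and the version strings are illustrative assumptions, not existing OpenFOAM behaviour.

    # Hypothetical prefs.sh sketch: choose between system installs and the
    # bundled ThirdParty builds of Boost/CGAL/GMP/MPFR.  A module environment
    # would typically export the system locations before this file is sourced.
    foamLibSource=system    # illustrative switch: "system" or "ThirdParty"

    if [ "$foamLibSource" = system ]
    then
        export BOOST_ARCH_PATH=/usr     # e.g. distro packages or an HPC module
        export CGAL_ARCH_PATH=/usr
        export GMP_ARCH_PATH=/usr
        export MPFR_ARCH_PATH=/usr
    else
        commonDir=$WM_THIRD_PARTY_DIR/platforms/$WM_ARCH$WM_COMPILER
        export BOOST_ARCH_PATH=$commonDir/boost_1_55_0    # versions illustrative
        export CGAL_ARCH_PATH=$commonDir/CGAL-4.3
        export GMP_ARCH_PATH=$commonDir/gmp-5.1.2
        export MPFR_ARCH_PATH=$commonDir/mpfr-3.1.2
    fi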
Notes

henry (note 0003927, 2015-03-01 20:24)
Could you be a bit more specific about what you would like changed to what in the OpenFOAM shell files and environment variables?
wyldckat (note 0003931, 2015-03-01 21:02)
Just in case Matthias doesn't answer: my guess is that the request refers either to an option such as "SYSTEMMPICH" (guessing here) or simply to a generic "SYSTEMMPI" (most likely). In the case of a generic "SYSTEMMPI", the original request likely refers to something that would allow setting up the system MPI environment from "prefs.sh" or similar, using a default "MPI_ARCH_PATH". In other words, probably something very similar to how "INTELMPI" is set up in "settings.sh" (which relies on Intel's "MPI_ROOT" environment variable).
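A rough sketch of the pattern being referred to, assuming the case statement on $WM_MPLIB in "etc/config/settings.sh"; the branch name, the FOAM_MPI value and the fallback path are assumptions for illustration, not the contents of any shipped file:

    # Hypothetical generic SYSTEMMPI branch: take the MPI location from an
    # externally provided MPI_ROOT (set by a module environment or in
    # etc/prefs.sh), much as the INTELMPI entry relies on Intel's MPI_ROOT.
    case "$WM_MPLIB" in
    SYSTEMMPI)
        export FOAM_MPI=mpi-system               # illustrative name
        export MPI_ARCH_PATH=${MPI_ROOT:-/usr}   # default location if unset
        ;;
    esac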
henry (note 0003933, 2015-03-01 21:24)
I don't have a problem with the addition of this option if it is generally needed. If you, Matthias or someone else can provide a patch it would help. Incidentally, what is the advantage of MPICH over OpenMPI for OpenFOAM? I have not tried MPICH for many years.
wyldckat (note 0004002, 2015-03-07 18:47)

File Added: modified_files.tar.gz (2015-03-07 18:32)
Attached is a "tarbomb" with the following files (targeting 2.3.x):

- etc/bashrc
- etc/config/settings.csh
- etc/config/settings.sh
- etc/cshrc
- wmake/rules/General/mplibSYSTEMMPI

Note: I tested the modifications made for the ".csh" scripts with tcsh, although I probably have no more than 3 hours of total experience with csh in my whole life...

The new "SYSTEMMPI" option is introduced and it literally demands that the user sets the necessary/specific environment variables. For example, for using the system's MPICH2 on Ubuntu 14.04:

    export MPI_ROOT=/usr
    export MPI_ARCH_FLAGS="-DMPICH_SKIP_MPICXX"
    export MPI_ARCH_INC="-I/usr/include/mpich"
    export MPI_ARCH_LIBS="-L/usr/lib/x86_64-linux-gnu -lmpich"

These four have to be set either in "etc/prefs.sh" or before sourcing "bashrc". The reason why I chose to nag the user so much (check the entry for SYSTEMMPI in "settings.*" to see what I mean) is that these settings should not be taken lightly: without them being properly defined, it simply won't work. In addition, it follows the methodology used for the custom GCC when it has not been built yet. As for checking one variable at a time... I wanted to keep it consistent with the csh version, which I'm not very experienced with.

I did try to figure out whether it was possible to create an entry for SYSTEMMPICH, but MPICH2 does not provide the same facilities as Open-MPI, namely something specific like "mpicc -show:compile". I even thought of using "mpicc" and "mpic++" as the defaults for building OpenFOAM, but the MPICH2 installed in Ubuntu is a clear example of why we should not do this. For example, if we run:

    mpicc -show

we get this:

    cc -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -I/usr/include/mpich -L/usr/lib/x86_64-linux-gnu -lmpich -lopa -lmpl -lrt -lcr -lpthread

The problem? There is a "-g" hard-coded into it, i.e. it produces debugging information by default...

As for MPICH2 vs Open-MPI: as far as I can tell, it is mainly for two reasons:

1. MPICH2 implemented the MPI-3 standard first; Open-MPI is still a bit new on this topic.
2. MPICH2 is used by hardware manufacturers for specific network hardware, hence many HPC platforms use it by default.
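For readers without the tarball at hand, the "nagging" described above amounts to checking each of the four variables individually and telling the user which ones are still missing. A minimal sketch of that idea follows, using the variable names listed above; the exact wording and structure of the check in the attached "settings.sh" may differ:

    # Each variable is checked separately, so the user is told exactly which
    # ones still need to be defined in etc/prefs.sh or before sourcing bashrc.
    if [ -z "$MPI_ROOT" ]
    then
        echo "Warning: MPI_ROOT is not set for SYSTEMMPI (e.g. /usr)" 1>&2
    fi
    if [ -z "$MPI_ARCH_FLAGS" ]
    then
        echo "Warning: MPI_ARCH_FLAGS is not set (e.g. \"-DMPICH_SKIP_MPICXX\")" 1>&2
    fi
    if [ -z "$MPI_ARCH_INC" ]
    then
        echo "Warning: MPI_ARCH_INC is not set (e.g. \"-I/usr/include/mpich\")" 1>&2
    fi
    if [ -z "$MPI_ARCH_LIBS" ]
    then
        echo "Warning: MPI_ARCH_LIBS is not set (e.g. \"-L/usr/lib/x86_64-linux-gnu -lmpich\")" 1>&2
    fi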
henry (note 0004045, 2015-03-08 19:58)
I also worked out a long time ago that mpicc is simply not appropriate for building OpenFOAM, but it would be useful if there were a standard way, across MPI implementations, to find out where the libraries and includes are. OpenMPI does seem to be the best organised in this regard.
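For comparison, this is the kind of query that makes Open-MPI easier to deal with: its compiler wrapper can report the compile and link flags separately, whereas MPICH2's wrapper only offers the combined "-show" output quoted in the note above. The example outputs in the comments are illustrative and vary with the installation:

    # Open-MPI: query the include and link flags separately
    mpicc --showme:compile    # e.g. -I/usr/lib/openmpi/include -pthread
    mpicc --showme:link       # e.g. -L/usr/lib/openmpi/lib -lmpi -pthread

    # MPICH2: only the combined command line is available
    mpicc -show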
henry (note 0004057, 2015-03-08 22:38)
Thanks Bruno. Resolved by commit ae9a670c99472787f3a5446ac2b522bf3519b796.
Date Modified | Username | Field | Change |
---|---|---|---|
2014-02-19 09:07 | matthias | New Issue | |
2015-03-01 20:24 | henry | Note Added: 0003927 | |
2015-03-01 21:02 | wyldckat | Note Added: 0003931 | |
2015-03-01 21:24 | henry | Note Added: 0003933 | |
2015-03-07 18:32 | wyldckat | File Added: modified_files.tar.gz | |
2015-03-07 18:47 | wyldckat | Note Added: 0004002 | |
2015-03-08 19:58 | henry | Note Added: 0004045 | |
2015-03-08 22:38 | henry | Note Added: 0004057 | |
2015-03-08 22:38 | henry | Status | new => resolved |
2015-03-08 22:38 | henry | Resolution | open => fixed |
2015-03-08 22:38 | henry | Assigned To | => henry |
2015-03-24 00:17 | liuhuafei | Issue cloned | 0001629 |