View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0002861||OpenFOAM||Contribution||public||2018-02-28 20:51||2018-03-15 00:47|
|Platform||GNU/Linux||OS||Other||OS Version||(please specify)|
|Fixed in Version|
|Summary||0002861: Speed up second start of cases with meshToMesh-interpolation with caching|
|Description||When running large cases with meshToMesh interpolation, initializing the interpolation can take several minutes. Especially when multiple iterations are needed to set up a case, this adds up quickly. But the work to set up the interpolation is always the same, so the second start can be sped up if the result of the interpolation is saved after the first run and read back on subsequent runs.|
|Steps To Reproduce||In $FOAM_TUTORIALS/heatTransfer/chtMultiRegionSimpleFoam/heatExchanger, most of the case setup time is spent in the meshToMesh interpolation.|
|Additional Information||The pull request https://github.com/OpenFOAM/OpenFOAM-dev/pull/18 offers a solution to this: after finishing, each constructor writes the data it has calculated. It also computes a hash for the source and the target mesh and writes it to a separate file.|
If the constructor finds these files, it recomputes the hashes and compares them to the stored ones. The cached data is used only if the hashes match on all processors (this ensures the interpolation data is not reused even if the mesh has "only" been scaled). The data is then read and the data structures are initialized from it instead of being recalculated.
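The hash-validated caching scheme described above can be sketched in plain C++. This is only an illustration of the idea, not the code from the pull request: the names (meshHash, computeWeights, getWeights), the FNV-1a hash, and the binary file layout are all assumptions made for the sketch, and the cross-processor hash comparison that the real PR performs in parallel runs is omitted here.

```cpp
#include <cstdint>
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// FNV-1a hash over raw point coordinates. A scaled or moved mesh changes
// the coordinates and therefore the hash, so stale cached data is rejected.
uint64_t meshHash(const std::vector<double>& points)
{
    uint64_t h = 1469598103934665603ULL;
    const unsigned char* p =
        reinterpret_cast<const unsigned char*>(points.data());
    for (std::size_t i = 0; i < points.size()*sizeof(double); ++i)
    {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

// Stand-in for the expensive meshToMesh weight calculation (dummy work).
std::vector<double> computeWeights
(
    const std::vector<double>& src,
    const std::vector<double>& tgt
)
{
    return std::vector<double>(src.size(), 1.0/tgt.size());
}

// Return interpolation weights, reusing a cache file when its stored hash
// matches the current source and target meshes; otherwise recompute and
// rewrite the cache.
std::vector<double> getWeights
(
    const std::vector<double>& src,
    const std::vector<double>& tgt,
    const std::string& cacheFile
)
{
    const uint64_t wanted = meshHash(src) ^ meshHash(tgt);

    std::ifstream in(cacheFile, std::ios::binary);
    if (in)
    {
        uint64_t stored = 0;
        std::size_t n = 0;
        in.read(reinterpret_cast<char*>(&stored), sizeof(stored));
        in.read(reinterpret_cast<char*>(&n), sizeof(n));
        if (in && stored == wanted)
        {
            std::vector<double> w(n);
            in.read(reinterpret_cast<char*>(w.data()), n*sizeof(double));
            if (in) return w;  // cache hit: skip the expensive setup
        }
    }

    // Cache miss or stale hash: recompute and refresh the cache file.
    std::vector<double> w = computeWeights(src, tgt);
    std::ofstream out(cacheFile, std::ios::binary);
    const std::size_t n = w.size();
    out.write(reinterpret_cast<const char*>(&wanted), sizeof(wanted));
    out.write(reinterpret_cast<const char*>(&n), sizeof(n));
    out.write(reinterpret_cast<const char*>(w.data()), n*sizeof(double));
    return w;
}
```

The first call pays the full setup cost and writes the cache; a second call with unchanged meshes reads the weights back, which is the speed-up the report asks for. Changing any point coordinate invalidates the hash and forces a recompute.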
|Tags||No tags attached.|
Rather than caching the interpolation structure, it would be better to optimize the algorithm. In the distant past, all mesh data (cell volumes, face areas, etc.) was written out and read back in rather than recalculated, but this often caused problems, is limited by I/O performance (particularly in parallel), and there are already many complaints about the number of files OpenFOAM generates.
Have you tried profiling the meshToMesh interpolation code and optimizing it? I think the current implementation is far from optimal and there is plenty of scope for improvement.
||Waiting for feedback from reporter|