View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update
---|---|---|---|---|---
0002199 | OpenFOAM | Bug | public | 2016-08-18 14:40 | 2016-08-23 22:06

Field | Value
---|---
Reporter | bijan darbari
Assigned To | henry
Priority | normal
Severity | minor
Reproducibility | always
Status | closed
Resolution | fixed
Platform | GNU/Linux
OS | Ubuntu
OS Version | 14.10
Product Version | dev
Fixed in Version | dev
Summary

0002199: Floating point exception in DPMFoam when injection of parcels with a high "nParticle" starts

Description

The DPMFoam solver crashes just after injection if parcels with a high "nParticle" ("nParticle=1e4" or more) are injected. The following error appears on the crash:

```
Evolving kinematicCloud
Solving 2-D cloud kinematicCloud
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  ? in "/lib/x86_64-linux-gnu/libc.so.6"
#3  ? at ??:?
#4  ? at ??:?
#5  ? at ??:?
#6  ? at ??:?
#7  ? at ??:?
#8  ? at ??:?
#9  ? at ??:?
#10 ? at ??:?
#11 __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#12 ? at ??:?
Floating point exception (core dumped)
```

If the solver is run in Debug mode:

```
Evolving kinematicCloud
Solving 2-D cloud kinematicCloud
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  ? in "/lib/x86_64-linux-gnu/libc.so.6"
#3  ? at ~/OpenFOAM/OpenFOAM-dev/src/OpenFOAM/lnInclude/ops.H:76
#4  ? at ~/OpenFOAM/OpenFOAM-dev/src/lagrangian/intermediate/lnInclude/KinematicParcel.C:347
#5  ? at ~/OpenFOAM/OpenFOAM-dev/src/lagrangian/intermediate/lnInclude/CollidingParcel.C:90
#6  ? at ~/OpenFOAM/OpenFOAM-dev/src/lagrangian/intermediate/lnInclude/InjectionModel.C:511
#7  ? at ~/OpenFOAM/OpenFOAM-dev/src/lagrangian/intermediate/lnInclude/InjectionModelList.C:185
#8  ? at ~/OpenFOAM/OpenFOAM-dev/src/lagrangian/intermediate/lnInclude/KinematicCloud.C:112
#9  ? at ~/OpenFOAM/OpenFOAM-dev/src/OpenFOAM/lnInclude/autoPtrI.H:116
#10 ? at ~/OpenFOAM/OpenFOAM-dev/applications/solvers/lagrangian/DPMFoam/DPMFoam.C:78
#11 __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#12 ? at ??:?
Floating point exception (core dumped)
```

Steps To Reproduce

Run the attached case with DPMFoam, setting "coupled true;" and "nParticle=1e4" (or more) in kinematicCloudProperties.

Additional Information

I do not think this case has a divergence problem, as the error appears just before the carrier-phase solution starts. A similar problem exists in "reactingParcelFoam", "MPPICFoam", ...

Tags

No tags attached.
Notes

wyldckat (note 0006703, 2016-08-19 01:09):
I haven't opened the case yet, but I'll ask about a few details, just to be certain of what to expect when looking at it:

1. What is the case meant to simulate?
2. What are the diameters of the particles, if any are defined?
3. Do the particles interact with each other?
4. What are the cell sizes, at least near the injector?
5. Is the injector a single point, multiple points, or a whole patch?
6. What is the injection rate?
7. What is the initial time step?

What I suspect is happening is that the particle modelling is set up with finite-diameter particles that interact kinetically with each other. If that is the case, you could have two or more particles superimposed on each other, making the distance between their centres zero and hence causing a division by zero; a sketch of this failure mode follows below. Superimposition can occur either because the 1400 particles are all injected at the same exact instant and location, or because the initial time step of the flow simulation is so large relative to the injection rate that more than one particle is injected at the same exact location in a single step.
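To make the suspected failure mode concrete, here is a minimal standalone sketch. It is hypothetical, not the OpenFOAM collision code: the spring-like force, the stiffness `k`, and all names are invented for illustration only.

```cpp
// Hypothetical pair-collision force between two spheres: the direction
// between the centres requires dividing by the separation distance, which
// is zero when two parcels are superimposed.
#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };

Vec pairForce(const Vec& a, const Vec& b, double radius, double k)
{
    const Vec d{b.x - a.x, b.y - a.y, b.z - a.z};
    const double dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    const double overlap = 2.0*radius - dist;  // positive when spheres touch
    // dist == 0 for coincident centres: the 0/0 below raises a floating
    // point exception once FPE trapping (as in OpenFOAM's sigFpe) is enabled
    return {k*overlap*d.x/dist, k*overlap*d.y/dist, k*overlap*d.z/dist};
}

int main()
{
    const Vec p{0.0, 0.0, 0.0};
    const Vec f = pairForce(p, p, 25e-9, 1.0);  // two parcels at the same point
    std::printf("f = (%g %g %g)\n", f.x, f.y, f.z);  // NaN components
    return 0;
}
```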
bijan darbari (note 0006704, 2016-08-19 06:18):
Dear wyldckat,

1. I want to simulate nanofluid flow. The flow contains solid particles with diameters of 100 nm or less at a uniform volume fraction of about 0.01-0.05. I have to use a high nParticle because the particle diameter is very small and each particle has a very small volume.
2. The particle diameter is uniform, 50e-9 m.
3. No, because the distance between nanoparticles is very large and physically the chance of particle interaction is very low. For my case, with nParticle=1 this distance is on the order of 1000 nm; for high nParticle it is much larger than 1000 nm.
4. The mesh is 2D and uniform; all cells are (25e-6 m) x (25e-6 m).
5. I used patch injection.
6. The parcel injection rate is 5e6 parcels per second, so 50 parcels are injected at the inlet patch in each time step, as the carrier-flow time step is 1e-5 s.
7. The carrier-flow time step is a constant 1e-5 s. The Courant number of the carrier flow is about 0.25.

**For a better understanding of the problem, I have uploaded a second, simpler test case (case2(single-point).zip). In it, one parcel with "nParticle=1e6" is injected from a point at the centre of the channel. If you run this case, the solver crashes with a similar error just after the injection. Note that there is only one parcel in the domain during the solution of this second test case.**

Best Regards
tniemi (note 0006715, 2016-08-22 13:43):
I briefly checked this; the crash is caused by numerical instability combined with the use of cellValueSourceCorrection. The particles are so small that their velocity-relaxation time is very short, and they already reach the fluid velocity during the first particle time step (3e-6 s). With Euler integration, the momentum-transfer term is calculated as dUTrans = dt*(Feff.Sp()*(Ures.average() - Uc_)), where Ures.average() is the mean of the particle velocity before and after the time step. If dt is too large, this expression greatly overpredicts the momentum transfer (see the sketch after this note). When cellValueSourceCorrection is active, the continuous-phase velocity is then updated using the much too large dUTrans, which causes the crash.

There are at least a couple of things that could help:

1. Is there a need to couple the momentum transfer from the particles to the fluid? If not, just turning cellValueSourceCorrection off might work, because dUTrans is then not needed. Turning cellValueSourceCorrection off does prevent the crash during the particle iteration, but the momentum sources from the particles will still be wrong, and the simulation can crash later on.
2. If coupling is needed, there are at least two alternatives. The first is to specify a smaller maxCo, so that the particles take smaller time steps; however, very small time steps may be needed. The other is to switch to analytical integration, which does not overpredict the momentum transfer.
3. In the second test case, with nParticle=1e6 the volume fraction of the particles exceeds unity, so the momentum transfer will naturally be very large. With nParticle values of 1e4-1e5 (volume fraction 0.01-0.1) and analytical integration, the simulation works fine.
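To illustrate the overprediction, here is a minimal standalone sketch, not the OpenFOAM source: beta, the velocities, and the step size are assumed values chosen to match the scales discussed above. It integrates the relaxation equation over one particle step and compares the Euler-style dUTrans estimate with the exact momentum change, which can never exceed (u - v0).

```cpp
// Integrate dv/dt = beta*(u - v) over one particle step and compare the
// Euler-style momentum-transfer estimate with the exact momentum change.
#include <cmath>
#include <cstdio>

int main()
{
    const double beta = 1.0/5.6e-10;  // 1/relaxation time [1/s] (assumed, ~50 nm particle)
    const double u    = 1.0;          // carrier velocity seen by the particle [m/s]
    const double v0   = 0.0;          // particle velocity before the step [m/s]
    const double dt   = 3e-6;         // particle time step from the discussion [s]

    // Semi-implicit Euler update of the particle velocity:
    const double v1 = (v0 + dt*beta*u)/(1.0 + dt*beta);
    // Euler momentum transfer per unit mass, dt*beta*(average(v) - u):
    const double dUTransEuler = dt*beta*(0.5*(v0 + v1) - u);

    // Exact relaxation over the step; the true source to the fluid is minus
    // the momentum the particle actually gains:
    const double v1Exact = u + (v0 - u)*std::exp(-beta*dt);
    const double dUTransExact = -(v1Exact - v0);

    std::printf("Euler: %g, exact: %g\n", dUTransEuler, dUTransExact);
    // With beta*dt ~ 5000, Euler overshoots by a factor of ~2700, and
    // cellValueSourceCorrection feeds this oversized source straight into Uc_.
    return 0;
}
```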
tniemi (note 0006719, 2016-08-23 07:17):
I looked more closely at what the analytical integration does, and I think there is a small bug in the implementation for the case beta = 0. It only affects situations where analytical integration is used for the U-equation.

In Analytical.C, if beta = 0 the integration sets retValue = phi. However,

```
  lim beta->0  alphaBeta/beta + (phi - alphaBeta/beta)*exp(-beta*dt)
= lim beta->0  phi*exp(-beta*dt) + alphaBeta*(1 - exp(-beta*dt))/beta
= phi + alphaBeta*dt
```

(a numerical check of this limit follows below). So the else-branch should be replaced with

```
else
{
    retValue.value() = phi + alphaBeta*dt;
    retValue.average() = 0.5*(phi + retValue.value());
}
```

For the T-equation alphaBeta is computed as alpha*beta, so if beta = 0 then alphaBeta = 0 as well, and the behaviour does not change. For the U-equation, beta = 0 if Feff.Sp() = 0, but alphaBeta is not necessarily zero (e.g. when there are non-coupled forces such as gravity).
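A quick standalone check of that limit; the test values for phi, alphaBeta, and dt are arbitrary, and this is not OpenFOAM code:

```cpp
// Verify numerically that alphaBeta/beta + (phi - alphaBeta/beta)*exp(-beta*dt)
// approaches phi + alphaBeta*dt as beta -> 0.
#include <cmath>
#include <cstdio>

int main()
{
    const double phi = 2.0, alphaBeta = 3.0, dt = 0.1;  // arbitrary test values
    const double limit = phi + alphaBeta*dt;            // proposed beta = 0 branch

    const double betas[] = {1e-1, 1e-3, 1e-6};
    for (const double beta : betas)
    {
        const double ab   = alphaBeta/beta;
        const double full = ab + (phi - ab)*std::exp(-beta*dt);
        std::printf("beta = %g: full = %.10f, limit = %.10f\n", beta, full, limit);
    }
    return 0;
}
```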
bijan darbari (note 0006720, 2016-08-23 07:26):
Dear tniemi,

Thank you very much for your kind reply; your description was very helpful. I need to couple the momentum from the particles to the fluid. I also have a few small questions and would be grateful if you could answer them:

1. Is the analytical integration scheme appropriate when random forces (Brownian, turbulent dispersion, ...) act on the particles?
2. What does "cellValueSourceCorrection" actually do, and what happens to the results if it is set to off?
3. How is dt calculated? From theory, I would expect it to be equal to the particle momentum-relaxation time.

Best Regards
tniemi (note 0006722, 2016-08-23 08:48):
1. The analytical scheme is mainly meant for the temperature equation, which can typically be solved analytically (assuming, e.g., that the heat-transfer coefficient does not depend on temperature). I think it can be used for the momentum equation too, but there it is only approximately correct, same as Euler. The key difference is that with the analytical method the momentum transfer is not overpredicted when too-large time steps are taken. The Euler method could also be changed to calculate the momentum transfer as the difference between the particle momentum before and after the time step; that way there would be no overprediction. However, neither Euler nor analytical is very accurate with large dt.

2. Normally, during the particle iteration the continuous-phase properties are frozen and the momentum sources from the particles are applied to the next fluid solution. If cellValueSourceCorrection is true, the continuous-phase properties are modified already after each particle time step, to roughly account for the coupling from the particles to the fluid. This can sometimes be beneficial, but it can also cause numerical problems, as no under-relaxation is applied. E.g., for momentum transfer the continuous-phase velocity is updated as Uc_ += td.cloud().UTrans()[cellI]/massCell(cellI), where UTrans is the momentum transferred from the particles within the particle time step.

3. By default, dt is calculated only from the Courant number of the particles, i.e. it depends on the particle velocity and the mesh size. This is usually fine for larger particles. In your case the Courant-based limit is much larger than the relaxation time, so it is not so good if you want to predict the relaxation accurately; see the comparison sketched below. The Courant limit can be changed by specifying a maxCo value; I think the default is 0.3. In any case, your case is quite tricky for the default integration methods and time-step control, because they are not designed to handle such widely varying time scales. (Note also that, e.g., the turbulent time scale is not taken into account by default when determining dt; if dt is larger than the turbulent scale, turbulence is ignored.)

Edit: To clarify, except for the small issue in Analytical.C, I think there is no bug in the code. It is just that the present implementation requires that the particle dt is small enough, and at the moment that can only be specified by giving a Courant-number limit.
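For the numbers in this report, a back-of-the-envelope sketch comparing the Stokes velocity-relaxation time with the Courant-based particle time step; the particle density, carrier viscosity, and particle speed are assumptions, not values taken from the attached case:

```cpp
// Compare the Stokes velocity-relaxation time tau = rho_p*d^2/(18*mu)
// with the Courant-based particle time step dt = maxCo*cellSize/|U|.
#include <cstdio>

int main()
{
    const double d     = 50e-9;   // particle diameter [m] (from the report)
    const double cell  = 25e-6;   // cell size [m] (from the report)
    const double maxCo = 0.3;     // default particle Courant limit
    const double rhoP  = 4000.0;  // particle density [kg/m^3] (assumed)
    const double mu    = 1e-3;    // carrier viscosity [Pa s] (assumed, water-like)
    const double U     = 1.0;     // particle speed [m/s] (assumed)

    const double tau  = rhoP*d*d/(18.0*mu);  // ~5.6e-10 s
    const double dtCo = maxCo*cell/U;        // ~7.5e-6 s

    std::printf("tau = %g s, Courant dt = %g s, dt/tau = %g\n",
                tau, dtCo, dtCo/tau);
    // tau is well below a nanosecond while dt is microseconds: the Courant
    // limit cannot resolve the relaxation, which is why beta*dt >> 1 here.
    return 0;
}
```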
henry (note 0006732, 2016-08-23 22:06):
Thanks Timo, I have included your patch: commit 68f57d312655f45fcdb32cea3fdfb1df6a4c0b49 |
Date Modified | Username | Field | Change |
---|---|---|---|
2016-08-18 14:40 | bijan darbari | New Issue | |
2016-08-18 14:40 | bijan darbari | File Added: Case.zip | |
2016-08-19 01:09 | wyldckat | Note Added: 0006703 | |
2016-08-19 06:18 | bijan darbari | Note Added: 0006704 | |
2016-08-19 06:19 | bijan darbari | File Added: case2(single-point).zip | |
2016-08-19 06:23 | bijan darbari | Note Edited: 0006704 | |
2016-08-19 06:25 | bijan darbari | Note Edited: 0006704 | |
2016-08-19 06:29 | bijan darbari | Note Edited: 0006704 | |
2016-08-19 06:51 | bijan darbari | Note Edited: 0006704 | |
2016-08-19 19:46 | bijan darbari | Note Edited: 0006704 | |
2016-08-22 13:43 | tniemi | Note Added: 0006715 | |
2016-08-22 13:46 | tniemi | Note Edited: 0006715 | |
2016-08-23 07:17 | tniemi | Note Added: 0006719 | |
2016-08-23 07:26 | bijan darbari | Note Added: 0006720 | |
2016-08-23 07:31 | bijan darbari | Note Edited: 0006720 | |
2016-08-23 08:48 | tniemi | Note Added: 0006722 | |
2016-08-23 08:58 | tniemi | Note Edited: 0006722 | |
2016-08-23 22:06 | henry | Note Added: 0006732 | |
2016-08-23 22:06 | henry | Status | new => closed |
2016-08-23 22:06 | henry | Assigned To | => henry |
2016-08-23 22:06 | henry | Resolution | open => fixed |
2016-08-23 22:06 | henry | Fixed in Version | => dev |