View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0003985 | OpenFOAM | Bug | public | 2023-06-09 10:41 | 2023-06-09 17:29 |
Field | Value |
---|---|
Reporter | demichie |
Assigned To | will |
Priority | normal |
Severity | minor |
Reproducibility | have not tried |
Status | closed |
Resolution | no change required |
Platform | Unix |
OS | Ubuntu |
OS Version | 22.04 |
Product Version | dev |
Summary | 0003985: error in derivative computation |
Description | Frictional pressure prime, i.e. the derivative with respect to alpha of the frictional pressure, is computed in the following way in JohnsonJacksonFrictionalStress.C:

    return Fr_*
    (
        eta_*pow(max(alpha - alphaMinFriction, scalar(0)), eta_ - 1)
       *(alphaMax - alpha)
      + p_*pow(max(alpha - alphaMinFriction, scalar(0)), eta_)
    )/pow(max(alphaMax - alpha, alphaDeltaMin_), p_ + 1);

When (alphaMax - alpha) < alphaDeltaMin_, the denominator of the frictional pressure is constant and its derivative is null, but this is not accounted for in the expression above (see the sketch below the issue details). In addition, the sign in front of the second term (before "p_") should be negative. It should be changed in the following way (please note the "pos" factor):

    return Fr_*
    (
        eta_*pow(max(alpha - alphaMinFriction, scalar(0)), eta_ - 1)
       *max(alphaMax - alpha, alphaDeltaMin_)
      - p_*pow(max(alpha - alphaMinFriction, scalar(0)), eta_)
       *pos(alphaMax - alpha - alphaDeltaMin_)
    )/pow(max(alphaMax - alpha, alphaDeltaMin_), p_ + 1);
Additional Information | The same formulation is used in JohnsonJacksonSchaefferFrictionalStress.C |
Tags | No tags attached. | ||||
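For reference: the Description quotes only the derivative expression; the frictional pressure itself is not shown in this report. Assuming it has the form below (inferred from the quoted derivative; `alphaDeltaMin_` is written $\Delta$), the clipping described above is:

$$
p_{f}(\alpha) \;=\; Fr\,\frac{\max(\alpha - \alpha_{\min},\,0)^{\eta}}{\max(\alpha_{\max} - \alpha,\,\Delta)^{p}}
$$

For $\alpha_{\max} - \alpha < \Delta$ the denominator is the constant $\Delta^{p}$, so it contributes nothing to $\mathrm{d}p_{f}/\mathrm{d}\alpha$ in that region; this is the term the proposed `pos(alphaMax - alpha - alphaDeltaMin_)` factor is intended to switch off.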
Note 0013032 | will | 2023-06-09 13:04 |
I think you're probably right, but to change it we would need evidence that the original is indeed wrong and that your change is correct. With derivative computations like this, that usually means comparing against a finite-difference gradient constructed from two close evaluations of the value function. Have you tested it in this way? If so, how can we reproduce/verify the comparison? Also, your implementation now calls pow four times. This is expensive. Call it twice, store the results, and calculate the adjacent powers by multiplication and division.
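A scalar sketch of that reduction, applied for concreteness to the expression currently in the file, might look like the following. The function name, the scalar signature with coefficients passed as arguments, and the handling of a == 0 are assumptions for illustration, not the field-based OpenFOAM implementation.

```cpp
#include <algorithm>
#include <cmath>

// Sketch: call pow once per base, store the results, and form the adjacent
// powers by multiplication/division instead of further pow calls.
double frictionalPressurePrimeTwoPows
(
    double alpha,
    double Fr, double eta, double p,
    double alphaMinFriction, double alphaMax, double alphaDeltaMin
)
{
    const double a = std::max(alpha - alphaMinFriction, 0.0);
    const double b = std::max(alphaMax - alpha, alphaDeltaMin);

    const double aPowEta = std::pow(a, eta); // a^eta, reused below
    const double bPowP = std::pow(b, p);     // b^p, reused below

    // a^(eta - 1) == aPowEta/a (assuming eta > 1, so the term tends to 0 as
    // a -> 0) and b^(p + 1) == bPowP*b, so no further pow calls are needed.
    return
        Fr*(eta*(a > 0.0 ? aPowEta/a : 0.0)*(alphaMax - alpha) + p*aPowEta)
       /(bPowP*b);
}
```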
Note 0013033 | demichie | 2023-06-09 13:22 |
The number of "pow" is the same as before (3), not 4. What is added is a "pos", used to set a term to zero when the result of "max" is constant. The origin of the problem has nothing to do with finite difference gradient. It is the analytical derivative of the formulation for the frictional pressure implemented in the same file. Actually, there are two problems, both in the analytical expression of the derivative of the ration defining the frictional pressure. One is the sign that I changes, the other is the term that should be null, for which I added the "pos" factor. If you think it can be useful, I can attach a pdf with the expression of the frictional pressure and the expression of its derivative. |
Note 0013034 | will | 2023-06-09 15:21 |
Misread a pos for a pow, sorry. Same number of pow-s. Fine. Two would be better, but it's not a regression. I'm saying the finite difference is how to *test* it. I don't want a PDF. I want a code snippet that runs the frictionalPressure function at (alpha + 1e-7) and (alpha - 1e-7), divides the difference by 2e-7 (i.e., calculates a finite difference) and prints it. Then it runs frictionalPressurePrime and prints that, too. Do this for a range of alphas and plot a graph of the result. The lines should be the same. I've seen these sorts of functions be wrong too many times now. I don't believe them unless I have two independent implementations producing the same values.
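A minimal standalone sketch of such a check might look like the following. It is not OpenFOAM code: `frictionalPressure` is a scalar transcription of the assumed value function (not quoted in this report), `frictionalPressurePrime` is the scalar form of the existing expression quoted in the Description, and the coefficient values are placeholders, not values taken from any particular case.

```cpp
// Standalone sketch of the requested finite-difference check.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Placeholder model coefficients (assumed, for illustration only)
const double Fr = 0.05;
const double eta = 2.0;
const double p = 5.0;
const double alphaMinFriction = 0.5;
const double alphaMax = 0.62;
const double alphaDeltaMin = 0.001;

// Assumed scalar form of frictionalPressure (the value function is not
// quoted in the report; this form is inferred from the quoted derivative)
double frictionalPressure(double alpha)
{
    return
        Fr*std::pow(std::max(alpha - alphaMinFriction, 0.0), eta)
       /std::pow(std::max(alphaMax - alpha, alphaDeltaMin), p);
}

// Candidate analytical derivative: here, the existing expression quoted in
// the report. Paste any proposed alternative here to test it the same way.
double frictionalPressurePrime(double alpha)
{
    return
        Fr*
        (
            eta*std::pow(std::max(alpha - alphaMinFriction, 0.0), eta - 1.0)
           *(alphaMax - alpha)
          + p*std::pow(std::max(alpha - alphaMinFriction, 0.0), eta)
        )
       /std::pow(std::max(alphaMax - alpha, alphaDeltaMin), p + 1.0);
}

int main()
{
    const double dAlpha = 1e-7;

    std::printf("# alpha finiteDifference analytical\n");

    // Sweep alpha through the frictional range, including the region
    // where the denominator is clipped to alphaDeltaMin
    for (double alpha = alphaMinFriction; alpha <= alphaMax; alpha += 0.0005)
    {
        // Central finite difference of the value function
        const double fd =
            (
                frictionalPressure(alpha + dAlpha)
              - frictionalPressure(alpha - dAlpha)
            )/(2.0*dAlpha);

        std::printf("%.4f %.6e %.6e\n", alpha, fd, frictionalPressurePrime(alpha));
    }

    return 0;
}
```

Plotting the finiteDifference and analytical columns against alpha should give two overlapping lines wherever the analytical expression is correct; a systematic gap, particularly where alphaMax - alpha < alphaDeltaMin, would indicate a problem.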
Note 0013035 | demichie | 2023-06-09 15:43 |
I'm sorry but I don't understand the point. I will write Python or Matlab code if you need it, but I think you're missing something. The function needs to evaluate the derivative of a term of the form max(x1 - x, dx)^p (this is what appears in the denominator of the frictional pressure). This denominator is constant (the max evaluates to dx) when x > x1 - dx, so its derivative has to be zero. The way it is implemented now does not account for the constant part, and the derivative of the denominator is not 0 when it should be.
Note 0013036 | demichie | 2023-06-09 15:46 |
As regards the change in sign, the derivative of f(x)/g(x) is [f'(x)g(x) - f(x)g'(x)]/g^2(x). There is a minus sign between the two terms in the numerator. In the file there is a "+".
Note 0013037 | tniemi | 2023-06-09 15:54 |
I can also double-check this, as these models are something that we regularly use. If I recall correctly I have checked these expressions before, but that was several years ago, so it is good to double-check again.
Note 0013038 | demichie | 2023-06-09 16:02 |
Thank you for the double check! I will also double check if my suggested change is correct. |
Note 0013039 | will | 2023-06-09 16:34 |
I am not missing anything. You do not seem to understand the concept of verifying your work. We all make mistakes, so to guard against them we do things multiple ways and ensure that we get the same answer. You have not done this. You have done the derivation on a bit of paper, written some code, and assumed it is correct. It is not. You have made a mistake. See the attached graph. Your function is correct beyond the alpha limits, but is incorrect within them. This is substantially worse than the existing implementation. If you had worked to verify the result by some other means (e.g. an independent calculation of the derivative by a finite difference, as suggested) then you would have noticed this. |
Note 0013040 | will | 2023-06-09 16:44 |
The fix is trivial. You are wrong about your minus sign. The function also has a negative alpha in it, so the minus signs cancel. The other issue is that the existing implementation is incorrect in the upper limit, but in being incorrect it makes the gradient continuous. This may have been done intentionally for reasons of numerical stability. Discontinuities in functions like this can cause instability as the solver oscillates around the step change. In most cases, I would also expect the value function to have been modified so that it was consistent with the continuous gradient, but it's possible that this was not necessary in this case. I don't know. This second consideration, again, becomes an issue of evidence. It's possible that the gradient is incorrect intentionally, and that this was required by whatever simulations were used in the initial development of this model. But you may also have examples that require the corrected form. Is that true? What cases do you have where the accuracy or stability is improved by the corrected formulation?
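A short sketch of the cancellation described here, using the same assumed form of the value function as in the sketch below the issue details: with $f(\alpha) = Fr\,\max(\alpha - \alpha_{\min}, 0)^{\eta}$ and, away from the clipped region and for $\alpha > \alpha_{\min}$, $g(\alpha) = (\alpha_{\max} - \alpha)^{p}$,

$$
g'(\alpha) = -p\,(\alpha_{\max} - \alpha)^{p-1}
\quad\Longrightarrow\quad
\frac{f'g - fg'}{g^{2}}
= Fr\,\frac{\eta\,\max(\alpha - \alpha_{\min}, 0)^{\eta - 1}\,(\alpha_{\max} - \alpha)
\;+\; p\,\max(\alpha - \alpha_{\min}, 0)^{\eta}}{(\alpha_{\max} - \alpha)^{p+1}}
$$

The minus from the quotient rule and the minus from the inner derivative of $(\alpha_{\max} - \alpha)$ cancel, consistent with the "+" already in the code; in the clipped region $g$ is the constant $\Delta^{p}$, so $g' = 0$ and only the first numerator term remains, which is the role of the proposed "pos" factor.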
Note 0013041 | demichie | 2023-06-09 16:45 |
What I was trying to say is that, before checking with a finite difference whether the new formulation I proposed is correct, it is important to agree on whether the current implementation is correct. And I was trying to explain why the current implementation, to me, is wrong. To show that it is wrong I do not think there is any need for finite differences, because to me the problem is clear: there is no term in the derivative that accounts for the fact that the derivative of the denominator is zero above a threshold. Am I wrong? I am really trying to help, not to accuse anyone of doing something wrong. I really appreciate your work and I am sorry if there is any misunderstanding.
Note 0013042 | demichie | 2023-06-09 17:10 |
I have a test case that blows up, but it is obtained with a version of multiphaseEuler that I have modified in a lot of parts, so I cannot provide you with a test case to use with multiphaseEuler. But I analyzed the simulation and I have seen that the problem occurs when alpha becomes really close to alphaMax. So I checked where the source of the problem could be and I found the problem with the derivative of the frictional pressure. But if you think it is fine as it is, no problem for me. I will try the fix on my modified version of the module.
Note 0013043 | will | 2023-06-09 17:18 |
You are right that the denominator derivative is not analytically correct. My first message was "I think you're probably right". But you absolutely should have checked your work before telling us "It should be changed in the following way". I asked for evidence that your code was correct, and you did not provide that. If that happens again, the report will simply be closed. As it happens, the evidence also showed potential reasoning behind the existing incorrectness: that it makes the gradient continuous. So it becomes about a different sort of evidence, namely simulation performance, accuracy and stability. What cases do you have that benefit from this change?
Note 0013044 | will | 2023-06-09 17:28 |
(Messages out of order, sorry, didn't refresh the page.) Simulations from a "modified" multiphaseEuler are of no use to me. I need to be able to reproduce any problems that you might be having in your case, so it would need to run with OpenFOAM as released. I can't be sure that the problem is here and not somewhere else in multiphaseEuler, modified or otherwise. I think I have no choice but to assume that the gradient incorrectness was intentional, in order to keep it continuous for reasons of numerical stability, and close the report. If you can produce a case that obviously benefits from an analytically correct gradient, and which runs in OpenFOAM as released, then please feel free to open a new report in order to make that case known to us.
Date Modified | Username | Field | Change |
---|---|---|---|
2023-06-09 10:41 | demichie | New Issue | |
2023-06-09 13:04 | will | Note Added: 0013032 | |
2023-06-09 13:22 | demichie | Note Added: 0013033 | |
2023-06-09 15:21 | will | Note Added: 0013034 | |
2023-06-09 15:43 | demichie | Note Added: 0013035 | |
2023-06-09 15:46 | demichie | Note Added: 0013036 | |
2023-06-09 15:54 | tniemi | Note Added: 0013037 | |
2023-06-09 16:02 | demichie | Note Added: 0013038 | |
2023-06-09 16:34 | will | Note Added: 0013039 | |
2023-06-09 16:34 | will | File Added: graph.pdf | |
2023-06-09 16:44 | will | Note Added: 0013040 | |
2023-06-09 16:45 | demichie | Note Added: 0013041 | |
2023-06-09 17:10 | demichie | Note Added: 0013042 | |
2023-06-09 17:18 | will | Note Added: 0013043 | |
2023-06-09 17:28 | will | Note Added: 0013044 | |
2023-06-09 17:29 | will | Assigned To | => will |
2023-06-09 17:29 | will | Status | new => closed |
2023-06-09 17:29 | will | Resolution | open => no change required |