View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0003361 | OpenFOAM | Bug | public | 2019-10-07 11:12 | 2021-12-07 12:25 |

Reporter | niklas.wikstrom | Assigned To | henry | | |
---|---|---|---|---|---|
Priority | normal | Severity | major | Reproducibility | always |
Status | resolved | Resolution | fixed | | |
Platform | x86_64 | OS | Fedora | OS Version | 29 |
Product Version | dev | | | | |

Summary: 0003361: snappyHexMesh region refinement contaminates entire geometry facet

Description: A distance refinement region sometimes refines an entire triSurface facet. This happens when a refinement region entry specifies a distance that crosses at least one of the facet's edges. The problem occurs in both parallel and serial runs, with OpenFOAM versions 5 through dev (7). On a geometry that exhibits this problem it is always reproducible, but manually constructing a similar geometry to reproduce the problem is difficult.

Steps To Reproduce: Extract the attached snappyBugReportRoom.tgz, then run blockMesh and snappyHexMesh.

Additional Information: The attached image shows the overly refined facet and visualizes the distance refinement geometry object (pink), pointed at by the yellow line.

Tags: No tags attached.
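For context, the kind of specification involved is a distance-mode entry under refinementRegions in system/snappyHexMeshDict. A minimal sketch (the geometry name, distance, and level are hypothetical, not taken from the attached case):

```
castellatedMeshControls
{
    refinementRegions
    {
        refineRegion                 // hypothetical searchable geometry name
        {
            mode    distance;
            // (distance level) pairs: refine to level 3 within 0.1 m.
            // The reported bug shows up when such a distance band crosses
            // an edge of a large triSurface facet.
            levels  ((0.1 3));
        }
    }
}
```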
Forgot to add: This issue is caused by the call to surfaces.setMinLevelFields(shells); in snappyHexMesh.C. Removing the call fixes the issue, but introduces other refinement problems.
I've had this issue for years and never reported it because I wasn't aware of how much it affected others. In a custom version we maintain internally, we have an option to disable the following lines:

- Starting here: https://github.com/OpenFOAM/OpenFOAM-7/blob/master/src/mesh/snappyHexMesh/refinementSurfaces/refinementSurfaces.C#L440
- Ending here: https://github.com/OpenFOAM/OpenFOAM-7/blob/master/src/mesh/snappyHexMesh/refinementSurfaces/refinementSurfaces.C#L448

This has been a custom option for us here at our office, but I've never tested it further to see how it affects other geometries. @niklas.wikstrom: Perhaps commenting out only the lines I've pointed to, instead of commenting out the whole 'setMinLevelFields' call, reduces the occurrence of the refinement problems you've seen? Or does the same issue occur?
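Judging from the linked file, the block in question is the shell-level override near the end of refinementSurfaces::setMinLevelFields (approximate excerpt; the exact line numbers refer to the OpenFOAM-7 tree):

```cpp
// refinementSurfaces.C, setMinLevelFields: raise each facet's minLevelField
// to the level requested by any refinement shell containing its centre —
// the step that can refine an entire facet when a distance band crosses it
labelList shellLevel;
shells.findHigherLevel(ctrs, minLevelField, shellLevel);

forAll(minLevelField, i)
{
    minLevelField[i] = max(minLevelField[i], shellLevel[i]);
}
```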
Yes, it's been around, and I've mostly gotten around it through geometry refinements :-D Thanks a lot! I'll try the modification. /Niklas
It seems to work fine, although I overloaded setMinLevelFields() in a modified snappyHexMesh.C, thus keeping the src tree intact. (Attaching my local version of snappyHexMesh.C; this is for v5, but only minor changes are needed for dev.) Thank you again. /N

snappyHexMeshM.C (46,072 bytes)
[Attachment snappyHexMeshM.C: a full copy of snappyHexMesh.C (OpenFOAM 5, Copyright (C) 2011-2017 OpenFOAM Foundation, GPL v3) with the workaround built in. Only the changes relative to the stock file are reproduced below; the remainder of the attachment is the unmodified application source.]

The tool gains an `-original` command-line switch to restore the stock behaviour:

```cpp
Foam::argList::addBoolOption
(
    "original",
    "Run snappyHexMesh in original mode (with setMinLevelFields unmodified)"
);

// ... later, in main():
const bool originalMode = args.optionFound("original");
```

A local copy of setMinLevelFields, borrowed from refinementSurfaces.C, skips the shell-level override unless `original` is set:

```cpp
// Precalculate the refinement level for every element of the searchable
// surface.
// NW: I borrowed this from refinementSurfaces.C in src
void setMinLevelFields
(
    const searchableSurfaces& allGeometry_,
    const refinementSurfaces& refSurfaces,
    const shellSurfaces& shells,
    bool original = false
)
{
    const labelList& surfaces_ = refSurfaces.surfaces();

    forAll(surfaces_, surfI)
    {
        const searchableSurface& geom = allGeometry_[surfaces_[surfI]];

        // Precalculation only makes sense if there are different regions
        // (so different refinement levels possible) and there are some
        // elements. Possibly should have 'enough' elements to have fine
        // enough resolution but for now just make sure we don't catch e.g.
        // searchableBox (size=6)
        if (geom.regions().size() > 1 && geom.globalSize() > 10)
        {
            // Representative local coordinates and bounding sphere
            pointField ctrs;
            scalarField radiusSqr;
            geom.boundingSpheres(ctrs, radiusSqr);

            labelList minLevelField(ctrs.size(), -1);
            {
                // Get the element index in a roundabout way. Problem is e.g.
                // distributed surface where local indices differ from global
                // ones (needed for getRegion call)
                List<pointIndexHit> info;
                geom.findNearest(ctrs, radiusSqr, info);

                // Get per element the region
                labelList region;
                geom.getRegion(info, region);

                // From the region get the surface-wise refinement level
                forAll(minLevelField, i)
                {
                    if (info[i].hit()) // Note: should not be necessary
                    {
                        minLevelField[i] =
                            refSurfaces.minLevel(surfI, region[i]);
                    }
                }
            }

            if (original)
            {
                // Find out if triangle inside shell with higher level
                // What level does shell want to refine fc to?
                Info<< "NW: Running snappyHexMesh in original mode" << endl;

                labelList shellLevel;
                shells.findHigherLevel(ctrs, minLevelField, shellLevel);

                forAll(minLevelField, i)
                {
                    minLevelField[i] = max(minLevelField[i], shellLevel[i]);
                }
            }
            else
            {
                Info<< "NW: Running snappyHexMesh in modified mode" << endl;
            }

            // Store minLevelField on surface
            const_cast<searchableSurface&>(geom).setField(minLevelField);
        }
    }
}
```

The call site in main() then selects between the stock and modified behaviour:

```cpp
if (originalMode)
{
    surfaces.setMinLevelFields(shells);
}
else
{
    // Custom setMinLevelFields function overriding the original defined
    // in refinementSurfaces.H (src). As workaround suggested in issue 3361
    // (https://bugs.openfoam.org/view.php?id=3361)
    setMinLevelFields(allGeometry, surfaces, shells);
}
```
|
This change could be put behind an optional switch; what do you think would be an appropriate name for it? A better option would be to change the refinement approach so that it is not so sensitive to the aspect ratio of the triangles (I am assuming the issue is with long, thin triangles). Any thoughts?
|
@henry: We defined the switch as 'triSurfacesLevelsOnly' in the 'castellatedMeshControls' dict. I have the very vague idea that the problem occurs when a triangle is large/long enough to overlap multiple refinement regions, so that it is affected by the other refinement definitions. From what I can remember, there was no clear fix at the time I looked into this; only turning off that particular detection would do the trick.
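For context, a snappyHexMeshDict fragment showing where such a switch would live. Note that 'triSurfacesLevelsOnly' only exists in our custom build, so the entry name and its exact behaviour are illustrative, not stock OpenFOAM:

```
castellatedMeshControls
{
    // Custom (non-stock) switch: when true, surface facets keep only the
    // levels from their own refinementSurfaces entry and do not inherit
    // levels from overlapping refinementRegions shells.
    triSurfacesLevelsOnly   true;
}
```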
|
A switch in the dictionary sounds good. Note, however, that the facet in my example above does not have a large aspect ratio, just three or so, but the facet gets refined as soon as the distance refinement reaches across its edge(s?). The switch name 'triSurfacesLevelsOnly' might need some explanation ;-) It would be interesting to know when the feature is needed, but I guess I'll figure it out if I run without it for a while. Thanks for helping!
|
@niklas.wikstrom Do you have a better name in mind? Is this feature needed at all? Shall I simply remove it? If I add a switch, what should the default be? If it defaults to true, how will people know that they need to set it to false?
|
Very vague suggestions below, since I do not know what else this affects.

Name: shellLevelToFacets

Default: false. This will show more quickly whether it is needed or not: set the default to false and wait for bug reports...

Needed? I do not know, but I assume there was a good reason for it. Better to keep it.
|
The problem is that if it is needed and was put in for good reason, it should be on by default for backward compatibility, unless we know for sure that it is actually generally not needed. We would then need to know for what cases it is needed, so that we can inform people of the change in default behavior.
|
I realise that. It is safer. I will run without the thing and if I find some inconsistency or find out what it is all about, I will report back. /N |
|
Is there any progress or are there any recommendations on this, or should I close the report?
|
Hello! I have not seen any issues with the feature turned off, but I do not have much data. However, I think it is a good idea to add the option "shellLayerToFaces" with the default value "true" to snappyHexMesh, and then close the issue.
|
How should I document this change? Can any particular recommendations be made? When should users consider changing this switch? If we don't say anything, it will just be a hidden switch that no one knows about or knows how to use, so there wouldn't be much point in having it.
|
If I'm not mistaken, the description could be along these lines: this option allows tessellated surfaces (shells?) to inherit refinement levels from other overlapping features; if you see unwanted refinement spreading onto triangles, turn it off. As a side note, I very vaguely remember seeing this happen when larger triangles overlapped smaller features that I wanted refined, and the large triangle centres would get caught by the overlap heuristic.
|
It is not clear for what cases this refinement condition is useful. Given the difficulty in choosing a suitable name for the option, deciding how to control it, what the default should be, and when to recommend that it be switched on or off, it makes sense to remove it for now and reinstate it under an option if cases are found and reported which benefit from it.

commit a64d71929f0a51229ef6c5603af1d1ca21dad5f1
Author: Henry Weller <http://openfoam.org>
Date:   Thu Nov 14 11:51:13 2019 +0000

    snappyHexMesh::refinementSurfaces: Removed problematic shell refinement transfer to surface

        // Find out if triangle inside shell with higher level
        // What level does shell want to refine fc to?
        //
        // Note: it is not clear for what cases this additional requirement
        // is beneficial but for triangulated surfaces with triangles that
        // span refinement regions it introduces unnecessary refinement so
        // it has been removed.
        //
        // This option can be reinstated under a switch if cases are
        // provided which demonstrate the benefit.

        /*
        labelList shellLevel;
        shells.findHigherLevel(ctrs, minLevelField, shellLevel);

        forAll(minLevelField, i)
        {
            minLevelField[i] = max(minLevelField[i], shellLevel[i]);
        }
        */
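The effect of the removed block can be illustrated with a small standalone sketch. This is plain Python, not OpenFOAM code, and all names and the circular "shell" are hypothetical; the real code calls shells.findHigherLevel() on the triangle centres ("ctrs") and takes the maximum into the per-facet minLevelField. Because the shell-requested level is stored once per facet, a large triangle whose centre falls inside a refinement shell is raised to the shell's level over its entire span:

```python
# Toy 2-D illustration (hypothetical names) of the removed
# refinementSurfaces logic: per-facet refinement-level transfer.

def shell_level_at(point, centre=(0.0, 0.0), radius=1.0, level=4):
    """Level a distance-based refinement shell requests at a point:
    'level' inside the circular shell, 0 outside."""
    dx = point[0] - centre[0]
    dy = point[1] - centre[1]
    return level if (dx*dx + dy*dy)**0.5 <= radius else 0

def facet_min_level(vertices, base_level=1):
    """Per-facet minimum level: raised to the shell level found at the
    facet centre, so the WHOLE facet inherits it regardless of size."""
    cx = sum(v[0] for v in vertices) / len(vertices)
    cy = sum(v[1] for v in vertices) / len(vertices)
    return max(base_level, shell_level_at((cx, cy)))

# A facet spanning far beyond the shell still gets the shell's level
# everywhere, because its centre happens to lie inside the shell:
big_facet = [(-20.0, 0.0), (20.0, 0.0), (0.0, 1.5)]
print(facet_min_level(big_facet))   # 4: refinement spreads over the span

# A facet well away from the shell keeps its own level:
far_facet = [(10.0, 10.0), (11.0, 10.0), (10.0, 11.0)]
print(facet_min_level(far_facet))   # 1
```

This matches the symptom in the report: the stored level is a property of the whole facet, so a cell intersecting any part of the facet sees the elevated level, however far that cell is from the refinement region itself.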
|
commit 5d93da3aed0d03b38081717e29cfd506eba294a9 (HEAD -> master, origin/master, origin/HEAD)
Author: Henry Weller <http://cfd.direct>
Date:   Tue Dec 7 12:17:52 2021 +0000

    snappyHexMesh: Added castellatedMeshControls:extendedRefinementSpan option

    The code relating to extending refinement to the span of the facet/triangles
    intersected by the refinement distance, referred to in report
    https://bugs.openfoam.org/view.php?id=3361 and temporarily removed, may now
    be selected by the optional castellatedMeshControls:extendedRefinementSpan
    entry in snappyHexMeshDict.

    It is not clear whether this control is generally beneficial; very few users
    have reported a preference, and too few example cases have been provided to
    make a balanced judgement, so it has been decided to reinstate the previous
    default behaviour and default extendedRefinementSpan to true.
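For users finding this report later, the resulting control looks like the following in system/snappyHexMeshDict (entry name and default taken from the commit above; the rest of the dictionary is elided):

```
castellatedMeshControls
{
    // Optional; defaults to true, i.e. the original behaviour in which
    // refinement can extend across the whole span of a facet intersected
    // by a distance-refinement region. Set to false to suppress the
    // over-refinement described in this report.
    extendedRefinementSpan  false;
}
```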
Date Modified | Username | Field | Change |
---|---|---|---|
2019-10-07 11:12 | niklas.wikstrom | New Issue | |
2019-10-07 11:12 | niklas.wikstrom | File Added: geomFacetRef.png | |
2019-10-07 11:12 | niklas.wikstrom | File Added: snappyBugReportRoom.tgz | |
2019-10-07 11:14 | niklas.wikstrom | Note Added: 0010802 | |
2019-10-07 16:23 | wyldckat | Note Added: 0010803 | |
2019-10-08 05:18 | niklas.wikstrom | Note Added: 0010804 | |
2019-10-08 09:08 | niklas.wikstrom | File Added: snappyHexMeshM.C | |
2019-10-08 09:08 | niklas.wikstrom | Note Added: 0010805 | |
2019-10-08 14:40 | henry | Note Added: 0010806 | |
2019-10-08 15:38 | wyldckat | Note Added: 0010808 | |
2019-10-08 15:54 | niklas.wikstrom | Note Added: 0010809 | |
2019-10-08 16:06 | henry | Note Added: 0010810 | |
2019-10-08 16:57 | niklas.wikstrom | Note Added: 0010814 | |
2019-10-08 17:25 | henry | Note Added: 0010816 | |
2019-10-08 20:37 | niklas.wikstrom | Note Added: 0010818 | |
2019-11-12 16:10 | henry | Note Added: 0010888 | |
2019-11-13 14:34 | niklas.wikstrom | Note Added: 0010895 | |
2019-11-13 14:39 | henry | Note Added: 0010896 | |
2019-11-13 15:17 | wyldckat | Note Added: 0010897 | |
2019-11-14 12:03 | henry | Assigned To | => henry |
2019-11-14 12:03 | henry | Status | new => resolved |
2019-11-14 12:03 | henry | Resolution | open => fixed |
2019-11-14 12:03 | henry | Note Added: 0010901 | |
2021-12-07 12:25 | henry | Note Added: 0012291 |