abarret / multiphase-stokes
Solver for a mixture of fluids based on IBAMR
I want to be able to specify a spatially (and temporally) varying drag coefficient. This can be set via a CartGridFunction for xi in the hierarchy integrator, but this information needs to be propagated further along the line to the operators and preconditioner. We could make VCTwoPhaseStaggeredStokesOperator take a flag to specify either variable or constant drag coefficients and then have an if statement in the apply function. This is not ideal: what happens if we want to, e.g., specify variable viscosity later as well? An alternative is to have a separate operator for each type of variable coefficient.
We first need to break down the operators and preconditioner into smaller chunks that are easier to replace.
When running the target time_stepping or fluid_multigrid on more than one processor, with USE_PRECOND = TRUE, the relative residual decreases very, very slowly. The solver doesn't converge to the given rtol within the prescribed ksp_max_it. The issue may be with the grid generation for the coarser levels in the multigrid solver.
Running on multiple processors without preconditioning doesn't affect the performance.
At some point, we will need to implement a 3D solver for these equations. There are a few things that will need to be updated. The main issue is that the preconditioner probably won't scale to 3D: we need to switch away from cell-wise Vanka smoothers to larger box Vanka smoothers.
As noted in #61, we should make sure that all simulations that we use in the manuscript have matching input files in the repository.
Currently, we are employing bi-periodic boundary conditions. We should provide the ability to impose physical boundary conditions as well. We need to figure out how to set these up in the solver and smoother.
The solver works great on uniform grids. With AMR, especially deep hierarchies or levels that have oddly shaped refined regions, the solver will sometimes stagnate. This doesn't appear to be a problem with overlapping patches.
My guess is that our treatment of coarse-fine interfaces needs to be changed. This may be related to the discretization at CF interfaces being non-SPD; if so, I would expect dropping the time step size to result in better performance. Alternatively, we may be prolonging, restricting, and filling ghost cells incorrectly in the preconditioner at CF interfaces.
We currently hit a lot of compiler warnings that we should fix. @bindi-nagda, do you want to tackle this one next? Compile with the flag -Wall.
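If the build is CMake-based (an assumption; adjust to however the project is actually configured), the warnings can be enabled project-wide rather than per invocation:

```cmake
# Enable the usual warning set for every target in this project.
add_compile_options(-Wall -Wextra)
```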
The fluid solver is only first order accurate when the volume fraction is advected.
There are two problems:
I have made some progress in fixing these issues. I can obtain second order accuracy in time, but it requires solving for the velocity twice. We may need to do a multi-step scheme for the advection solver. I will try to push code for this tomorrow.
We need to be more careful about how we compute the right hand side for time stepping. Currently, the pressure and divergence terms are included when computing the RHS. We cannot use exactly the same operator to compute the RHS as we use to solve the system.
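As a sketch of why the operators differ, consider a generic Crank-Nicolson discretization of a Stokes-like system (illustrative only, not necessarily the exact scheme in the code):

```latex
% (1/\Delta t)(u^{n+1} - u^n) = \tfrac{1}{2} L (u^{n+1} + u^n) - G p^{n+1/2} + f,
% \qquad D u^{n+1} = 0.
%
% Left-hand-side operator applied to the unknowns:
%   \begin{pmatrix} I/\Delta t - L/2 & G \\ D & 0 \end{pmatrix}
%   \begin{pmatrix} u^{n+1} \\ p^{n+1/2} \end{pmatrix}
%   =
%   \begin{pmatrix} (I/\Delta t + L/2)\, u^n + f \\ 0 \end{pmatrix}
```

The RHS contains no G p term, and its divergence row is zero rather than D u^n; applying the full left-hand-side operator to old data therefore does not produce the correct RHS.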
@bindi-nagda, take a look at the branch fix_rhs_timestepping. I've been able to get first order accuracy in time. Try it out with the test problems that you have and see if you can also get first order accuracy. For some reason, I still can't get second order accuracy.
We want to port the numerical routines in the staggered Stokes operator to Fortran for improved efficiency. This is not urgent at the moment, but I wanted to have this task on our radar.
(See line 281 in commit dd02256.)
I'd like to take a stab at replacing the box relaxation smoother we currently have with a PETSc smoother. This would be beneficial for a few reasons:
a) The box relaxation solve takes the largest portion of time (~90%) during the fluid solve, and a PETSc smoother would yield significant time savings. This will also enable us to scale to 3D (Issue #39).
b) Physical boundary conditions could be baked into the PETSc matrix instead of having to use switch cases in the current box relaxation implementation to handle system size changes due to physical BCs (Issue #56).
(See multiphase-stokes/multigrid.cpp, line 285 in commit 93e25c5.)
@abarret The compilation of multigrid.cpp (fluid solver with multigrid preconditioner) is successful. The build target is called fluid_multigrid. But there is an error when I run the executable with input2d_five as the input file. The error message is shown below:
P=00000:Program abort called in file ``/udrive/student/bnagda2015/AMR/sfw/samrai/2.4.4/linux-g++-debug/include/RefineClasses.C'' at line 214
P=00000:ERROR MESSAGE:
P=00000:Bad data given to RefineClasses...
P=00000:It is not a valid operation to copy from "Source" patch data
P=00000:un_sc##context-clone_of_id=0002 to "Scratch" patch data KrylovPrecondStrategy::p_scr##context
P=00000:
Aborted (core dumped)
Could you take a look at multigrid.cpp and VCTwoFluidStaggeredStokesBoxRelaxationFACOperator.cpp? I think the error might reside within the InitializeOperatorState routine of the FACOperator.
After regridding, the interpolated velocities will in general no longer satisfy the co-incompressibility condition. While this does not appear to affect the order of accuracy, we should do something about it.
One possibility is to project the volume averaged velocity onto an incompressible field. However, it's unclear what the correct phase velocities should be in this case.
An alternative is to use divergence preserving interpolation schemes. This would minimize cost and hopefully be just as effective. See https://doi.org/10.1016/j.jcp.2022.111500
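For the projection option, a sketch of a standard projection adapted to the co-incompressibility constraint (illustrative; theta_n, theta_s are the volume fractions and u_n, u_s the phase velocities, with theta_n + theta_s = 1):

```latex
% Given interpolated velocities u_n^*, u_s^*, solve for a correction phi:
%   \nabla \cdot \nabla \phi
%     = \nabla \cdot \left( \theta_n u_n^* + \theta_s u_s^* \right),
% then correct both phases by the same gradient:
%   u_n = u_n^* - \nabla \phi, \qquad u_s = u_s^* - \nabla \phi,
% which gives \nabla \cdot ( \theta_n u_n + \theta_s u_s ) = 0 since
% \theta_n + \theta_s = 1.
```

Applying the same correction to both phases is only one choice; how the correction should actually be split between the phase velocities is exactly the open question above.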
For whatever reason, performance with AMR appears to be seriously degraded with anything other than V-cycles. At this point, we should only use V-cycles with the preconditioner.
It seems the solver works decently in #44, so in the interest of progress, we should proceed with what works. However, I don't understand why the changes that were implemented improve the solver, so it would be good to fully understand what the changes are doing. I'll summarize questions here.
The main changes are the introduction of different synchronizations inside the VCTwoFluidStaggeredStokesOperator::apply() function. There are three additional synchronizations, controlled by preprocessor flags:
1. USE_DIV synchronizes coarse-fine interfaces of the side centered quantity (ths*us + thn*un) before computing the divergence of the velocities. This seems to be necessary for solver convergence at coarse-fine interfaces.
2. USE_SYNCHED_INTERP uses a synchronized node and side interpolated version of the volume fractions to compute derivatives in the momentum equations. I don't think this is necessary, but I'm leaving it in for now.
3. POST_SYNCH performs an extra synchronization step at the end of the apply() function. Note this synchronization is not related to the coarse-fine interfaces: it synchronizes the coarse level with the fine level. This also seems to be necessary for performance on L-shaped domains.
Case 3 above is my primary source of confusion. It makes sense to me that the underlying coarse level should be synchronized with the fine level; however, from what I can tell, no other solver in IBAMR performs this synchronization. We should investigate what change this is actually performing, and whether a similar step is occurring in IBAMR.