multiphase-stokes's People

Contributors

abarret, bindi-nagda

multiphase-stokes's Issues

Variable coefficients

I want to be able to specify a spatially (and temporally) varying drag coefficient. This can be set via a CartGridFunction for xi in the hierarchy integrator, but that information needs to be propagated further down the line to the operators and preconditioner. We could make VCTwoPhaseStaggeredStokesOperator take a flag specifying either variable or constant drag coefficients and then add an if statement in the apply function. This seems less than ideal: what happens if we later want to, e.g., specify variable viscosity as well? An alternative is to have a separate operator for each type of variable coefficient.

We first need to break the operators and preconditioner down into smaller chunks that are easier to replace.
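As a concrete (purely hypothetical) sketch of a third option: the operator could store a small coefficient object whose interface is the same whether the drag is constant or variable, so apply() never needs to branch on the coefficient type. None of the names below exist in the code; this only illustrates the idea.

    #include <functional>
    #include <utility>

    // Hypothetical sketch: a callable drag coefficient, so constant and variable
    // coefficients go through the same code path in the operator's apply().
    using DragCoefficientFcn = std::function<double(const double* x, double t)>;

    struct DragCoefficient
    {
        DragCoefficientFcn fcn;

        // Constant drag: a callable that ignores position and time.
        static DragCoefficient constant(double xi)
        {
            return DragCoefficient{[xi](const double*, double) { return xi; }};
        }

        // Variable drag: wrap whatever evaluates xi(x, t), e.g. a CartGridFunction.
        static DragCoefficient variable(DragCoefficientFcn f)
        {
            return DragCoefficient{std::move(f)};
        }
    };

The operator would then evaluate fcn(x, t) at each side-centered degree of freedom; adding variable viscosity later would mean adding another coefficient object rather than another flag or another operator class.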

Performance degradation when running in parallel

When running the targets time_stepping or fluid_multigrid on more than one processor with USE_PRECOND = TRUE, the relative residual decreases very, very slowly. The solver does not converge to the given rtol within the prescribed ksp_max_it. The issue may be with the grid generation for the coarser levels of the multigrid solver.

Running on multiple processors without preconditioning doesn't affect the performance.
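To narrow down where the residual stalls, the stock PETSc monitoring options can be attached to the Krylov solve (how options are forwarded to PETSc in this project, and whether the solver uses an options prefix, is an assumption):

    -ksp_monitor_true_residual    (print the true residual norm each iteration)
    -ksp_converged_reason         (report why the KSP stopped: rtol, max_it, divergence, ...)
    -ksp_view                     (dump the KSP/PC configuration actually being used)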

3D implementation

At some point, we will need to implement a 3D solver for these equations. There are a few things that will need to be updated. The main issue is the preconditioner probably won't scale to 3D. We need to switch away from cell-wise Vanka smoothers to larger box Vanka smoothers.

  • 3D implementation of the operator
  • Construct PETSc matrices for the smoothing operators (use e.g. ILU); see the sketch after this list
  • Use a PETSc solver for the coarsest level solve.
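A minimal PETSc sketch of the smoother and coarsest-level items above (PETSc >= 3.5 API assumed; assembling the level matrices is the real work and is not shown):

    #include <petscksp.h>

    // Sketch: an ILU-preconditioned Richardson iteration as the level smoother and
    // a direct solve on the coarsest level. `level_mat` and `coarse_mat` stand in
    // for the assembled level operators. Note that PCILU is sequential; in parallel
    // it is typically used inside block Jacobi (-pc_type bjacobi -sub_pc_type ilu).
    PetscErrorCode setup_level_solvers(Mat level_mat, Mat coarse_mat, KSP* smoother, KSP* coarse_solver)
    {
        PetscErrorCode ierr;
        PC pc;

        // Smoother: a fixed, small number of preconditioned Richardson sweeps.
        ierr = KSPCreate(PETSC_COMM_WORLD, smoother); CHKERRQ(ierr);
        ierr = KSPSetOperators(*smoother, level_mat, level_mat); CHKERRQ(ierr);
        ierr = KSPSetType(*smoother, KSPRICHARDSON); CHKERRQ(ierr);
        ierr = KSPGetPC(*smoother, &pc); CHKERRQ(ierr);
        ierr = PCSetType(pc, PCILU); CHKERRQ(ierr);
        ierr = KSPSetTolerances(*smoother, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, 3); CHKERRQ(ierr);

        // Coarsest level: a single application of an LU factorization.
        ierr = KSPCreate(PETSC_COMM_WORLD, coarse_solver); CHKERRQ(ierr);
        ierr = KSPSetOperators(*coarse_solver, coarse_mat, coarse_mat); CHKERRQ(ierr);
        ierr = KSPSetType(*coarse_solver, KSPPREONLY); CHKERRQ(ierr);
        ierr = KSPGetPC(*coarse_solver, &pc); CHKERRQ(ierr);
        ierr = PCSetType(pc, PCLU); CHKERRQ(ierr);
        return 0;
    }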

Implement physical boundary conditions

Currently, we are employing bi-periodic boundary conditions. We should provide the ability to impose physical boundary conditions as well. We need to figure out how to set these up in the solver and smoother.

Solver performance significantly degraded with AMR

The solver works great on uniform grids. With AMR, especially with deep hierarchies or levels that have oddly shaped refined regions, the solver will sometimes stagnate. This does not appear to be a problem with overlapping patches.

My guess is that our treatment of coarse-fine interfaces needs to change. This may be related to the discretization at CF interfaces being non-SPD. If this is the case, I would expect dropping the time step size to result in better performance. Alternatively, we may be prolonging, restricting, and filling ghost cells incorrectly in the preconditioner at CF interfaces.
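For reference on the time step intuition (schematic only; $L$ lumps the discrete viscous, drag, and CF coupling terms, and $C$, $\theta$ the relevant constants and volume fractions): with implicit time stepping the momentum block behaves roughly like

    $A(\Delta t) \approx \dfrac{C\,\theta}{\Delta t}\, I + L.$

As $\Delta t$ shrinks, the mass term increasingly dominates whatever symmetry or definiteness defects live in $L$ at the CF interfaces, so Krylov and multigrid convergence should improve; if it does not, the CF discretization is probably not the (only) culprit.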

Fix compiler warnings

We currently hit a lot of compiler warnings that we should fix. @bindi-nagda, do you want to tackle this one next? Compile with the -Wall flag.

2nd order time stepping

The fluid solver is only first order accurate when the volume fraction is advected.

There are two problems:

  1. The advection-diffusion solver only iterates once, and we're using a forward Euler scheme, so it can only be first order accurate. We need to make sure the solver can use something like the trapezoidal rule.
  2. The fluid solver only does a single step of the trapezoidal rule with an initial approximation of the volume fraction at time $t_{n+1}$. I think this also results in first order accuracy.

I have made some progress in fixing these issues. I can obtain second order accuracy in time, but it requires solving for the velocity twice. We may need to do a multi-step scheme for the advection solver. I will try to push code for this tomorrow.
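For concreteness, here is one predictor-corrector arrangement consistent with "solving for the velocity twice" (a sketch only, not necessarily what is in the branch; it assumes the network volume fraction satisfies $\partial_t\theta_n + \nabla\cdot(u_n\theta_n) = 0$, with superscripts indexing the time level and the subscript $n$ the network phase):

  1. Predict the volume fraction with forward Euler: $\tilde{\theta}_n^{k+1} = \theta_n^k - \Delta t\,\nabla\cdot(u_n^k\,\theta_n^k)$.
  2. Solve the fluid system with the trapezoidal rule using $\tilde{\theta}_n^{k+1}$ to obtain provisional velocities $\tilde{u}^{k+1}$.
  3. Correct the volume fraction with the trapezoidal rule: $\theta_n^{k+1} = \theta_n^k - \frac{\Delta t}{2}\left[\nabla\cdot(u_n^k\,\theta_n^k) + \nabla\cdot(\tilde{u}_n^{k+1}\,\tilde{\theta}_n^{k+1})\right]$.
  4. Re-solve the fluid system with $\theta_n^{k+1}$ to obtain $u^{k+1}$ and $p^{k+1}$.

Each stage is second order consistent, which is why the extra velocity solve in step 4 seems hard to avoid without switching the advection update to a multi-step (e.g. Adams-Bashforth) scheme.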

2nd order time stepping

We need to be more careful about how we compute the right-hand side for time stepping. Currently, the pressure and divergence terms are included when computing the RHS. We cannot use strictly the same operator for computing the RHS as we do for computing the solution of the system.

@bindi-nagda, take a look at the branch fix_rhs_timestepping. I've been able to get first order accuracy in time. Try it out with the test problems that you have and see if you can also get first order accuracy. For some reason, I still can't get second order accuracy.
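Schematically (suppressing phase indices and the exact form of the viscous/drag operator $L$), the trapezoidal update we are after is

    $\left(\dfrac{C\,\theta^{k+1}}{\Delta t} + \tfrac{1}{2}L^{k+1}\right)u^{k+1} + \nabla p^{k+1} = \left(\dfrac{C\,\theta^{k}}{\Delta t} - \tfrac{1}{2}L^{k}\right)u^{k} + f^{k+1/2}, \qquad \nabla\cdot\left(\theta_n^{k+1}u_n^{k+1} + \theta_s^{k+1}u_s^{k+1}\right) = 0.$

The pressure gradient and the co-incompressibility constraint appear only at the new time level, so applying the full Stokes operator (which contains both) to the old state to form the RHS sneaks $\nabla p^k$ and a divergence term into the right-hand side; the RHS operator should contain only the mass and $\tfrac{1}{2}L$ pieces.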

Fortran routines for momentum equations

We want to port the numerical routines in the staggered Stokes operator to Fortran for improved efficiency. This is not urgent at the moment, but I wanted to have this task on our radar.

Construct PETSc matrices for the smoother

I'd like to take a stab at replacing the box relaxation smoother we currently have with a PETSc smoother. This would be beneficial for a few reasons:

a) The box relaxation solve takes the largest portion of time (~90%) during the fluid solve, and a PETSc smoother would yield significant time savings. This will also enable us to scale to 3D (Issue #39).

b) Physical boundary conditions could be baked into the PETSc matrix instead of having to use switch cases in the current box relaxation implementation to handle system size changes due to physical BCs (Issue #56).
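A minimal sketch of assembling a per-box PETSc matrix that such a smoother could use (the DOF count and stencil entries are placeholders; only the PETSc calls are meant literally):

    #include <petscmat.h>

    // Sketch: assemble a sparse PETSc matrix for a patch/box system. n_dofs and
    // the stencil below are placeholders for the real velocity/pressure DOF layout.
    PetscErrorCode assemble_box_matrix(PetscInt n_dofs, Mat* A)
    {
        PetscErrorCode ierr;
        ierr = MatCreate(PETSC_COMM_SELF, A); CHKERRQ(ierr);            // one matrix per box
        ierr = MatSetSizes(*A, n_dofs, n_dofs, n_dofs, n_dofs); CHKERRQ(ierr);
        ierr = MatSetType(*A, MATSEQAIJ); CHKERRQ(ierr);
        ierr = MatSeqAIJSetPreallocation(*A, 9, NULL); CHKERRQ(ierr);   // ~9 nonzeros/row as a guess

        for (PetscInt row = 0; row < n_dofs; ++row)
        {
            // Placeholder stencil: the real entries come from the discretized
            // momentum + incompressibility equations on the box, including any
            // physical BC modifications (which is the point of item b above).
            PetscScalar diag = 1.0;
            ierr = MatSetValues(*A, 1, &row, 1, &row, &diag, INSERT_VALUES); CHKERRQ(ierr);
        }
        ierr = MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
        ierr = MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
        return 0;
    }

Once the matrix exists, BC modifications become modified rows at assembly time, and the smoother itself can be any KSP/PC combination applied box by box.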

Error in BoxRelaxationFACOperator

// Now create a preconditioner

@abarret The compilation of multigrid.cpp (fluid solver with multigrid preconditioner) is successful; the build target is called fluid_multigrid. However, there is an error when I run the executable with input2d_five as the input file. The error message is shown below:

P=00000:Program abort called in file ``/udrive/student/bnagda2015/AMR/sfw/samrai/2.4.4/linux-g++-debug/include/RefineClasses.C'' at line 214
P=00000:ERROR MESSAGE:
P=00000:Bad data given to RefineClasses...
P=00000:It is not a valid operation to copy from "Source" patch data
P=00000:un_sc##context-clone_of_id=0002 to "Scratch" patch data KrylovPrecondStrategy::p_scr##context
P=00000:
Aborted (core dumped)

Could you take a look at multigrid.cpp and VCTwoFluidStaggeredStokesBoxRelaxationFACOperator.cpp? I think the error might reside within the InitializeOperatorState routine of the FACOperator.
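That SAMRAI abort is the one you get when a refine transaction pairs a source index with a scratch index belonging to a different variable (here un_sc vs. KrylovPrecondStrategy::p_scr). A hedged sketch of what a consistent SAMRAI 2.x registration looks like (index names are placeholders; exact include paths depend on the build):

    #include <RefineAlgorithm.h>   // SAMRAI 2.x headers assumed
    #include <RefineOperator.h>
    #include <tbox/Pointer.h>

    // Sketch: destination, source, and scratch indices passed to a single
    // registerRefine() call should all refer to the same variable.
    void register_velocity_refine(SAMRAI::xfer::RefineAlgorithm<NDIM>& refine_alg,
                                  const int un_dst_idx,
                                  const int un_src_idx,
                                  const int un_scr_idx,
                                  SAMRAI::tbox::Pointer<SAMRAI::xfer::RefineOperator<NDIM> > un_refine_op)
    {
        // Consistent pairing: un's scratch space is used for un's source data.
        refine_alg.registerRefine(un_dst_idx, un_src_idx, un_scr_idx, un_refine_op);

        // The reported abort is what a mismatched pairing would produce, e.g.
        //   refine_alg.registerRefine(p_dst_idx, un_src_idx, p_scr_idx, op);
        // so the registerRefine() calls made in the FACOperator's
        // InitializeOperatorState routine are a good place to start looking.
    }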

Regarding projections

After regridding, the interpolated velocities will in general no longer satisfy the co-incompressibility condition. While this does not appear to affect the order of accuracy, we should do something about this.

One possibility is to project the volume averaged velocity onto an incompressible field. However, it's unclear what the correct phase velocities should be in this case.

An alternative is to use a divergence-preserving interpolation scheme. This would minimize cost and hopefully be just as effective. See https://doi.org/10.1016/j.jcp.2022.111500
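For the first option, the projection itself is standard (a sketch; $\varphi$ is an auxiliary cell-centered potential): given the regridded velocities, solve

    $\nabla^2\varphi = \nabla\cdot\left(\theta_n u_n + \theta_s u_s\right),$

and correct the volume-averaged velocity via $\left(\theta_n u_n + \theta_s u_s\right) \leftarrow \left(\theta_n u_n + \theta_s u_s\right) - \nabla\varphi$, which restores $\nabla\cdot(\theta_n u_n + \theta_s u_s) = 0$. The open question noted above is how to split the correction $-\nabla\varphi$ between $u_n$ and $u_s$ (equally, weighted by volume fraction, etc.).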

Do not use F-cycles

For whatever reason, performance with AMR appears to be seriously degraded with anything other than V-cycles. At this point, we should only use V-cycles with the preconditioner.

Robustness of solver

It seems the solver works decently in #44, so in the interest of progress, we should proceed with what works. However, I don't understand why the implemented changes improve the solver, so it would be good to fully understand what they are doing. I'll summarize my questions here.

The main change is the introduction of different synchronizations inside the VCTwoFluidStaggeredStokesOperator::apply() function. There are three additional synchronizations, controlled by preprocessor flags.

  1. USE_DIV synchronizes coarse-fine interfaces of the side-centered quantity (ths*us + thn*un) before computing the divergence of the velocities. This seems to be necessary for solver convergence at coarse-fine interfaces.
  2. USE_SYNCHED_INTERP uses synchronized node- and side-interpolated versions of the volume fractions to compute derivatives in the momentum equations. I don't think this is necessary, but I'm leaving it in for now.
  3. POST_SYNCH performs an extra synchronization step at the end of the apply() function. Note that this synchronization is not related to the coarse-fine interfaces; it synchronizes the coarse level with the fine level. This also seems to be necessary for performance on L-shaped domains.

Case 3 above is my primary source of confusion. It makes sense to me that the underlying coarse level should be synchronized with the fine level; however, from what I can tell, no other solver in IBAMR performs this synchronization. We should investigate what this change is actually doing, and whether a similar step occurs elsewhere in IBAMR.
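For reference when we dig into this, a schematic of where the three flags sit in apply() (the synchronize_* and compute_* calls are placeholders for the actual SAMRAI coarsen/refine schedules and residual evaluations, not real functions):

    // Placeholder declarations: stand-ins for the real synchronization and
    // residual-evaluation routines in VCTwoFluidStaggeredStokesOperator.
    void synchronize_interpolated_volume_fractions();
    void compute_momentum_residuals();
    void synchronize_volume_averaged_velocity();
    void compute_divergence_residual();
    void synchronize_coarse_with_fine();

    void apply_schematic()
    {
    #if USE_SYNCHED_INTERP
        // Use node- and side-interpolated volume fractions that have been
        // synchronized before forming derivatives in the momentum equations.
        synchronize_interpolated_volume_fractions();
    #endif
        compute_momentum_residuals();      // momentum rows for un and us

    #if USE_DIV
        // Synchronize thn*un + ths*us at coarse-fine interfaces before taking
        // its divergence (appears necessary for CF convergence).
        synchronize_volume_averaged_velocity();
    #endif
        compute_divergence_residual();     // co-incompressibility row

    #if POST_SYNCH
        // Overwrite covered coarse-level data with the restriction of the
        // fine-level result (coarse data consistency, not ghost filling).
        synchronize_coarse_with_fine();
    #endif
    }

The open question for case 3 is then whether IBAMR's other FAC-based solvers do an equivalent of synchronize_coarse_with_fine() somewhere else (e.g. in the hierarchy integrator rather than in the operator), or whether they genuinely skip it.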
