PeleC: An adaptive mesh refinement solver for compressible reacting flows

Documentation | Nightly Test Results | PeleC Citation | Pele Citation

Getting Started

To compile and run PeleC, one needs a C++ compiler that supports the C++17 standard. A hierarchical strategy for parallelism is supported, based on MPI, MPI + OpenMP, or MPI + GPU (CUDA/HIP/DPC++). The code should work with all major MPI and OpenMP implementations. PeleC should build and run with no modifications to the make system if using a Linux system with the GNU compilers, version 7 and above. CMake, although used mostly for testing, is also an option for building the code. A sketch of common parallel build options follows.
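
For example (a minimal sketch; these AMReX-style GNU make flags appear in the case GNUmakefiles, and the GPU backends require the corresponding toolchain):

    make -j USE_MPI=TRUE                  # MPI only
    make -j USE_MPI=TRUE USE_OMP=TRUE     # MPI + OpenMP
    make -j USE_MPI=TRUE USE_CUDA=TRUE    # MPI + CUDA GPU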

To build PeleC (using the default submodules for AMReX, PelePhysics, and SUNDIALS) and run a sample 3D flame problem:

git clone --recursive git@github.com:AMReX-Combustion/PeleC.git
cd PeleC/Exec/RegTests/PMF
make TPLrealclean && make realclean && make TPL && make -j
./Pele3d.xxx.yyy.ex example.inp
  1. In the exec line above, xxx.yyy is a tag identifying your compiler and various build options (e.g., Pele3d.gnu.MPI.ex for GNU compilers with MPI), and will vary across platforms. (Note that GNU compilers must be at least version 7, and the MPI implementation should support at least version 3 of the MPI standard.)
  2. The example is a 3D premixed flame, flowing vertically upward through the domain with no gravity. The lateral boundaries are periodic. A detailed hydrogen model is used. The solution is initialized with a wrinkled (perturbed) 2D steady flame solution computed using the PREMIX code. Two levels of solution-adaptive refinement are automatically triggered by the presence of the flame intermediate, HO2.
  3. In addition to informative output to the terminal, periodic plotfiles are written in the run folder. These may be viewed with AMReX's Amrvis, VisIt, or ParaView (a sketch of a plotfile's layout follows this list):
    1. In VisIt, direct the File->Open dialogue to select the file named "Header" that is inside each plotfile folder.
    2. In ParaView, navigate to the case directory and open the plotfile folder.
    3. With Amrvis, run, for example, $ amrvis3d plt00030.
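
For orientation, an AMReX plotfile is a directory rather than a single file. A minimal layout sketch (the exact contents vary with the run configuration):

    $ ls plt00030
    Header    Level_0
    # "Header" is the file VisIt opens; Level_0/ (and Level_1/, ...) hold the per-level grid data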

Dependencies

PeleC is built on the AMReX and PelePhysics libraries. PeleC also requires the SUNDIALS ODE solver library.

Development model

To add a new feature to PeleC, the procedure is:

  1. Create a branch for the new feature (locally):

    git checkout -b AmazingNewFeature
    
  2. Develop the feature, merging changes often from the development branch into your AmazingNewFeature branch:

    git commit -m "Developed AmazingNewFeature"
    git checkout development
    git pull                      # fix any identified conflicts between local and remote branches of "development"
    git checkout AmazingNewFeature
    git rebase development        # fix any identified conflicts between "development" and "AmazingNewFeature"
    
  3. Build and run

    1. Build and run the full test suite using CMake and CTest (see the Build directory for an example script). Please do not introduce warnings. PeleC is checked with clang-tidy and cppcheck in the CI. To use cppcheck and clang-tidy locally, use these CMake options (a consolidated configure sketch follows this list):

      -DPELE_ENABLE_CLANG_TIDY:BOOL=ON
      -DPELE_ENABLE_CPPCHECK:BOOL=ON
      
    2. To run clang-tidy, use an LLVM compiler and make sure clang-tidy is found during configure; make will then run clang-tidy alongside compilation. Once cppcheck has been found during configure, the make cppcheck target runs its checks on the compile_commands.json database generated by CMake. More information on these checks can be found in the CI files used for GitHub Actions in the .github/workflows directory.

    3. To easily format all source files before commit, use the following command:

      find Source Exec \( -name "*.cpp" -o -name "*.H" \) -exec clang-format -i {} +
      
  4. If you don't already have a fork of the PeleC repository, follow the GitHub instructions to create one. Then, push your feature branch to your forked PeleC repository:

    git remote add remotename git@github.com:remoteurl # add a remote pointing to the user's fork
    git push -u remotename AmazingNewFeature # Note: -u option required only for the first push of a new branch
    
  5. Submit a pull request through git@github.com:AMReX-Combustion/PeleC.git, and make sure you are requesting a merge against the development branch.

  6. Check the CI status on GitHub and make sure the tests pass for the merge request.
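
A consolidated sketch of the local CMake/CTest workflow from step 3 (the build directory layout and exact invocation are assumptions; see the example script in the Build directory for the authoritative version):

    mkdir -p Build/build && cd Build/build
    cmake -DPELE_ENABLE_CLANG_TIDY:BOOL=ON \
          -DPELE_ENABLE_CPPCHECK:BOOL=ON ..
    make -j          # with clang-tidy found at configure time, checks run during compilation
    make cppcheck    # runs cppcheck on the compile_commands.json database
    ctest            # runs the test suite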

Note

GitHub CI uses the CMake build system and CTest to test the core source files of PeleC. If you are adding source files, you will need to add them to the list of source files in the CMake directory for the tests to pass. Make sure to add them to the GNU make makefiles as well, for example as sketched below.
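
As a sketch, registering a hypothetical new file in both build systems (the file name and exact list locations are illustrative assumptions, not PeleC's actual lists):

    # GNU make: append the file to the relevant Make.package source list
    CEXE_sources += MyNewFeature.cpp

    # CMake: add the file to the corresponding source list, e.g.
    # target_sources(<target> PRIVATE Source/MyNewFeature.cpp)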

Test Status

Nightly test results for PeleC against multiple compilers and machines can be seen on its CDash page.

Documentation

The full documentation for Pele exists in the Docs directory; at present this is maintained inline using Sphinx. With Sphinx, documentation is written in reStructuredText (reST), a markup language similar to Markdown but with somewhat greater capabilities (and idiosyncrasies). There are several primers available to get started. One gotcha is that indentation matters, as the snippet below shows.
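
A minimal reST sketch (my own illustration, not taken from the docs) of that indentation sensitivity:

    .. note::

       This body must stay indented to remain inside the note directive;
       dedenting it ends the directive.

To build the HTML documentation: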

$ cd Docs && mkdir build && cd build && sphinx-build -M html ../sphinx .

Citation

To cite the PeleC software and refer to its computational performance, use the following journal articles for PeleC and the Pele software suite:

@article{PeleC_IJHPCA,
  author = {Marc T {Henry de Frahan} and Jon S Rood and Marc S Day and Hariswaran Sitaraman and Shashank Yellapantula and Bruce A Perry and Ray W Grout and Ann Almgren and Weiqun Zhang and John B Bell and Jacqueline H Chen},
  title = {{PeleC: An adaptive mesh refinement solver for compressible reacting flows}},
  journal = {The International Journal of High Performance Computing Applications},
  volume = {37},
  number = {2},
  pages = {115-131},
  year = {2022},
  doi = {10.1177/10943420221121151},
  url = {https://doi.org/10.1177/10943420221121151}
}

@article{PeleSoftware,
  author = {Marc T. {Henry de Frahan} and Lucas Esclapez and Jon Rood and Nicholas T. Wimer and Paul Mullowney and Bruce A. Perry and Landon Owen and Hariswaran Sitaraman and Shashank Yellapantula and Malik Hassanaly and Mohammad J. Rahimi and Michael J. Martin and Olga A. Doronina and Sreejith N. A. and Martin Rieth and Wenjun Ge and Ramanan Sankaran and Ann S. Almgren and Weiqun Zhang and John B. Bell and Ray Grout and Marc S. Day and Jacqueline H. Chen},
  title = {The Pele Simulation Suite for Reacting Flows at Exascale},
  journal = {Proceedings of the 2024 SIAM Conference on Parallel Processing for Scientific Computing},
  pages = {13-25},
  year = {2024},
  doi = {10.1137/1.9781611977967.2},
  url = {https://epubs.siam.org/doi/abs/10.1137/1.9781611977967.2}
}

Additionally, to cite the application of PeleC to compressible reacting flows, use the following Combustion and Flame journal article:

@article{Sitaraman2021,
  author = {Hariswaran Sitaraman and Shashank Yellapantula and Marc T. {Henry de Frahan} and Bruce Perry and Jon Rood and Ray Grout and Marc Day},
  title = {Adaptive mesh based combustion simulations of direct fuel injection effects in a supersonic cavity flame-holder},
  journal = {Combustion and Flame},
  volume = {232},
  pages = {111531},
  year = {2021},
  issn = {0010-2180},
  doi = {10.1016/j.combustflame.2021.111531},
  url = {https://www.sciencedirect.com/science/article/pii/S0010218021002741},
}

Acknowledgment

This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations -- the Office of Science and the National Nuclear Security Administration -- responsible for the planning and preparation of a capable exascale ecosystem -- including software, applications, hardware, advanced system engineering, and early testbed platforms -- to support the nation's exascale computing imperative.


Issues

Failed to build PeleC due to sundials

Hi, I checked out the latest PeleC and tried to build the PMF example. However, I encountered the following problem.

/home/kz21/Codes/PeleC/Submodules/sundials/include/sundials/sundials_types.h:53:10: fatal error: sundials/sundials_config.h: No such file or directory                
 #include <sundials/sundials_config.h>  

I wonder if sundials has to be built before any other dependent modules? FYI, an older commit 8f248555c1e7d809597049a18d024c4045c5d1c9 works for me.

Possibility of doing a U-turn case with EB

I am looking at running a case in PeleC (or maybe PeleLM) that is essentially a U-turn, as illustrated below. The sketch is a 2D slice; this shape would be extruded in the direction normal to the screen.

[sketch of the U-turn geometry]

Is this possible? I don't see any challenges in creating the geometry with EB per se, but I'm not sure if it's possible to get the blue wall to be an inlet and the red wall to be an outlet? (A backup plan would be to use the green wall as the outlet, using a couple more EB blocks, but then we need to think about whether this changes the case too much for us.)

unable to access 'https://github.com/LLNL/sundials/'

When I try make, it fails with the note: "fatal: unable to access 'https://github.com/LLNL/sundials/': SSL: certificate subject name (*.googlevideo.com) does not match target host name 'github.com'
make: *** [GNUmakefile:228: /soft/PELEC/Submodules/PelePhysics/ThirdParty/BUILD/SUNDIALS/src/include/sundials/sundials_config.in] Error 128." In both PeleC and PeleLM, I have trouble getting the sundials component to work. Should I cmake it first or just run make TPL?

Sundials include directory is not correct when compiling with gnu make

If I follow the compilation method described in the README,

git clone --recursive git@github.com:AMReX-Combustion/PeleC.git
cd PeleC/Exec/RegTests/PMF
make TPLrealclean && make realclean && make TPL && make -j

I got the following error:

../../../Submodules/AMReX/Src/Extern/SUNDIALS/AMReX_Sundials_Core.H:6:10: fatal error: sundials/sundials_context.h: No such file or directory
    6 | #include <sundials/sundials_context.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

I found that the include directory is incorrect in the compilation command:

-I/home/lainme/code/amrex/PeleC/Exec/RegTests/PMF/'/home/lainme/code/amrex/PeleC/Submodules/PelePhysics/ThirdParty'/lib/../include 

Issues compiling using HYP_TYPE = MOL

I have a test case I am running. In 2d, I was able to get it to compile and run when I used HYP_TYPE = PPM. However, when I use HYP_TYPE = MOL, it fails to compile. I tried a series of configurations with HYP_TYPE = MOL with varying results:

  1. For DIM = 2, USE_EB = FALSE, it fails to compile with the error that the amrex_ebcellflag_module can't be found from the slope_mod_2d_EB.f90 file.

  2. For DIM = 2, USE_EB = TRUE, it fails to compile with the error that in Src_2d/trace_d2.f90, the uslope, pslope, and multid_slope symbols cannot be found in the slope_module.

  3. For DIM = 3, USE_EB = FALSE, it again fails to find the amrex_ebcellflag_module.

  4. For DIM = 3, USE_EB = TRUE, it compiles.

Variable boundary conditions on embedded boundaries

Opening an issue to discuss inhomogeneous boundary conditions on the embedded boundaries, as suggested by @drummerdoc .
This might be implemented in PeleLM by @esclapez first (AMReX-Combustion/PeleLM#157) and then taken up into PeleC.

In principle there are two cases - one that is simple to implement, one that is more general:

  1. varying wall temperature, possibly Neumann on some regions.
  2. ability to set all components of the state on different regions. You could have an inlet jet on this part, varying temperature on this other part, injecting fuel over here, etc. In principle arbitrarily complicated.

For case 1 I have something that seems to be working, which I'll describe in the next comment.
For case 2 I will leave it up for discussion.

[Emmanuel/NSCBC_time_varying] issues when compiling HIT_forced

I tried to compile the HIT_forced case on the Emmanuel/NSCBC_time_varying branch (this case also exists on other branches but the problem is the same). When I run make -j4, an error appears telling me that the bl_constants_module.mod module is missing. This module is used in forcing_src_nd.F90, and the same goes for most of the other modules called in forcing_src_nd.F90 or Prob_nd.F90 (such as parallel): they are missing and not found. What should I do, where can I find these modules, and how do I create them? I really need to simulate a forced HIT case. Thanks in advance.

Compilation error - development branch

The current development branch of PeleC is giving the following compilation error.

To reproduce the error - compile the PMF case with Sundials and CVODE turned on

(make -j8 TPLrealclean; make -j8 USE_SUNDIALS_PP=TRUE USE_CVODE_PP=TRUE TPL; make -j8 realclean; make -j8 USE_SUNDIALS_PP=TRUE USE_CVODE_PP=TRUE)

../../../Source/PeleC.cpp:32:10: fatal error: reactor.H: No such file or directory
 #include "reactor.H"
          ^~~~~~~~~~~
compilation terminated.

A question about function "pc_fill_bndry_grad_stencil_quadratic()" in EB.cpp

Hey everyone,
In the pc_fill_bndry_grad_stencil_quadratic() function:

      amrex::Real y[2] = {
        b[1] + (x[0] - b[0]) * amrex::Math::abs(n[c[1]] / n[c[0]]),
        b[1] + (x[1] - b[0]) * amrex::Math::abs(n[c[1]] / n[c[0]])};

Why is this n[c[1]] / n[c[0]]? Referring to Johansen's paper, this point should be perpendicular to the EB boundary, but I think this code works it out along the EB boundary. I don't know if I have misunderstood something; I would appreciate it if you could answer.

Tutorial result "plt" files cannot be viewed in VisIt or ParaView

When I run a program from the tutorials, like EB_Channel, I cannot visualize my data.

  1. Open "header" in VisIt .When I try to draw the "Pseudocolor"
    the error message is
    "The compute engine running on localhost has exited abnormally.
    Shortly thereafter, the following occured...
    "
    when I draw the mesh as follow as figure
    image

the message in the console is
Segfault !!!
/usr/bin/addr2line: '/home/PeleC/Exec/Tutorials/EB_Channel/plt00010/visit': No such file

See Backtrace.0 file for details

Backtrace.0.txt

  1. Open "he/home/PeleC/Exec/Tutorials/EB_Channel/plt00010/Backtrace.0ader" in ParaView .Open the folder "plt00010" with AMRex/BoxLib Particles Reader
    the error message is
    image
    3.my case is EB_Channel,in fact .I run the all of the Tutorials ,everycase all have the error
    my input is:
    inputs.3d.txt
    env is
    ++++++++++++++++++++++++++++++++++++++++++++
    XDG_VTNR=1
    CPLUS_INCLUDE_PATH=/usr/local/zlib/1.2.8/include:/usr/local/zlib/1.2.8/include:
    MANPATH=/root/sfw/linux/openmpi/1.10.2/share/man:/root/sfw/linux/openmpi/1.10.2/share/man::/usr/local/texlive/2019/texmf-dist/doc/man:/root/sfw/linux/openmpi/1.10.2/share/man::/usr/local/texlive/2019/texmf-dist/doc/man:/root/sfw/linux/openmpi/1.10.2/share/man:/root/sfw/linux/openmpi/1.10.2/share/man::/usr/local/texlive/2019/texmf-dist/doc/man:/root/sfw/linux/openmpi/1.10.2/share/man:
    SSH_AGENT_PID=3417
    XDG_SESSION_ID=1
    AMREX_HOME=/home/AMReX
    DBUS_STARTER_ADDRESS=unix:abstract=/tmp/dbus-FnhZu47jgV,guid=5b8fd94f1eb29e04e53c73be5e1efb27
    HOSTNAME=localhost.localdomain
    IMSETTINGS_INTEGRATE_DESKTOP=yes
    BLD_ROOT=/home/asc/vtf/gnu-opt-mpi
    TERM=xterm-256color
    VTE_VERSION=5202
    XDG_MENU_PREFIX=gnome-
    SHELL=/bin/bash
    HISTSIZE=1000
    PETSC_ARCH=linux-opt
    LIBRARY_PATH=/usr/local/zlib/1.2.8/lib:/usr/local/zlib/1.2.8/lib:
    GNOME_TERMINAL_SCREEN=/org/gnome/Terminal/screen/e0c66b5e_41e0_44ad_a44d_0beab8902b47
    IMSETTINGS_MODULE=none
    QT_GRAPHICSSYSTEM_CHECKED=1
    USER=root
    LD_LIBRARY_PATH=/root/Chombo-3.2/lib:/usr/local/hdf5/hdf5_1.8.16/lib:/usr/local/octave/5.1.10/bin/lib:/usr/lib/python2.7/:/usr/local/zlib/1.2.8/lib:/root/Chombo-3.2/lib:/usr/local/hdf5/hdf5_1.8.16/lib:/usr/local/octave/5.1.10/bin/lib:/usr/lib/python2.7/:/usr/local/zlib/1.2.8/lib:
    GNOME_TERMINAL_SERVICE=:1.147
    SSH_AUTH_SOCK=/run/user/0/keyring/ssh
    SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/3280,unix/unix:/tmp/.ICE-unix/3280
    USERNAME=root
    GNOME_SHELL_SESSION_MODE=classic
    DESKTOP_SESSION=gnome-classic
    MAIL=/var/spool/mail/root
    PATH=/root/bin:/home/asc/vtf/gnu-opt-mpi/bin:/root/bin:/usr/local/vim/bin:/usr/local/octave/5.1.10/bin:/usr/local/texlive/2019/bin/x86_64-linux:/usr/local/visit/bin:/root/sfw/linux/openmpi/1.10.2/bin:/root/bin:/root/bin:/usr/local/vim/bin:/usr/local/octave/5.1.10/bin:/usr/local/texlive/2019/bin/x86_64-linux:/usr/local/visit/bin:/root/sfw/linux/openmpi/1.10.2/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/root/bin:/usr/local/vim/bin:/usr/local/octave/5.1.10/bin:/usr/local/texlive/2019/bin/x86_64-linux:/usr/local/visit/bin:/root/sfw/linux/openmpi/1.10.2/bin:/root/bin:/root/bin:/home/asc/vtf/gnu-opt-mpi/bin:/root/bin:/usr/local/vim/bin:/usr/local/octave/5.1.10/bin:/usr/local/texlive/2019/bin/x86_64-linux:/usr/local/visit/bin:/root/sfw/linux/openmpi/1.10.2/bin:/root/bin:/root/bin:/usr/local/vim/bin:/usr/local/octave/5.1.10/bin:/usr/local/texlive/2019/bin/x86_64-linux:/usr/local/visit/bin:/root/sfw/linux/openmpi/1.10.2/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/root/bin:/usr/local/vim/bin:/usr/local/octave/5.1.10/bin:/usr/local/texlive/2019/bin/x86_64-linux:/usr/local/visit/bin:/root/sfw/linux/openmpi/1.10.2/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/root/bin
    XIM=fcitx
    QT_IM_MODULE=fcitx
    C_INCLUDE_PATH=/usr/local/zlib/1.2.8/include:/usr/local/zlib/1.2.8/include:
    XDG_SESSION_TYPE=x11
    PWD=/home/PeleC/Exec/Tutorials/EB_Channel
    XMODIFIERS=@im=fcitx
    LANG=zh_CN.UTF-8
    GDM_LANG=zh_CN.UTF-8
    MODULEPATH=/usr/share/Modules/modulefiles:/etc/modulefiles
    LOADEDMODULES=
    GDMSESSION=gnome-classic
    PELEC_HOME=/home/PeleC
    HISTCONTROL=ignoredups
    DBUS_STARTER_BUS_TYPE=session
    SHLVL=2
    HOME=/root
    XDG_SEAT=seat0
    GNOME_DESKTOP_SESSION_ID=this-is-deprecated
    PYTHONPATH=/home/asc/vtf/gnu-opt-mpi/./amroc/modules
    XDG_SESSION_DESKTOP=gnome-classic
    LOGNAME=root
    DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-FnhZu47jgV,guid=5b8fd94f1eb29e04e53c73be5e1efb27
    XDG_DATA_DIRS=/root/.local/share/flatpak/exports/share/:/var/lib/flatpak/exports/share/:/usr/local/share/:/usr/share/
    MODULESHOME=/usr/share/Modules
    LESSOPEN=||/usr/bin/lesspipe.sh %s
    PELE_PHYSICS_HOME=/home/PelePhysics
    INFOPATH=:/usr/local/texlive/2019/texmf-dist/doc/info:/usr/local/texlive/2019/texmf-dist/doc/info
    WINDOWPATH=1
    XDG_RUNTIME_DIR=/run/user/0
    DISPLAY=:0
    GTK_IM_MODULE=fcitx
    XDG_CURRENT_DESKTOP=GNOME-Classic:GNOME
    PETSC_DIR=/root/sfw/petsc/3.10.5
    XAUTHORITY=/run/gdm/auth-for-root-655Tx5/database
    COLORTERM=truecolor
    BASH_FUNC_module()=() { eval /usr/bin/modulecmd bash $*
    }
    _=/usr/bin/env
    OLDPWD=/home/PeleC/Exec/Tutorials

Trouble restarting from plt file

Hello, I'm trying to restart from a plt file because recent chk files were damaged by system corruption. However, for this particular case I'm getting errors such as: amrex::Abort::32::Species mass fraction don't sum to 1. The sum is: 0.000000 !!! I've seen related issues before for different cases, where the sum was slightly off from 1 and I was able to correct it through the massfrac initialization tolerance setting, but in this case the sum reading zero suggests something else is the matter.

I believe this plt file is intact and definitely should have species data as well. For example, I'm able to load it in yt and view species data including calculating average species concentrations. Do you have any suggestions as to how I should go about figuring out what's wrong with this plt file or restart method? Unfortunately the case is quite large so I can't share an easily reproducible example.

Varying inlet composition with PMF

I'm running simulations using PMF to set the initial and boundary conditions. This works well, but now I'd like to vary the composition at the inlet as a function of time during the simulation.

I can see a way to implement this functionality: basically just copying the PMF stuff and modifying it to read a second file that describes inlet composition as a function of time.

Just wanted to check if this sounds like a good approach, and that I'm not reinventing the wheel etc?

If it sounds good, I'll submit a PR when I have something that works.

Not linking to all PelePhysics directories

$(PELE_PHYSICS_HOME)/Support/Fuego/Evaluation/Make.package

It seems that the files in PelePhysics/Support/Fuego/Evaluation are used for more than just the transport stuff. For example, using the Fuego Eos means it has to have access to ReactionData.H. But that directory is only referenced if the transport type is either EGLib or Simple. If we want to use Eos_dir := Fuego and Transport_dir := Constant, it can't find ReactionData.H. This would probably extend to other combinations as well. Are we required to use the Simple or EGLib transport types with the Fuego EOS?

The data extraction script in EB-C7 doesn't work

Commit: 2ee36d7 (up-to-date version)

After the simulation of EB-C7 finishes, I tried to use pp.py to extract the profile; however, the script shows the following error:

Traceback (most recent call last):
  File "/home/lainme/code/amrex/PeleC/build/Exec/RegTests/EB-C7/tests/eb-c7/pp.py", line 49, in <module>
    srt = np.argsort(ray["x"])
                     ~~~^^^^^
  File "/home/lainme/.local/lib/python3.11/site-packages/yt/data_objects/data_containers.py", line 229, in __getitem__
    f = self._determine_fields([key])[0]
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lainme/.local/lib/python3.11/site-packages/yt/data_objects/data_containers.py", line 1465, in _determine_fields
    finfo = self.ds._get_field_info(field)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lainme/.local/lib/python3.11/site-packages/yt/data_objects/static_output.py", line 1009, in _get_field_info
    raise ValueError(
ValueError: The requested field name 'x' is ambiguous and corresponds to any one of the following field types:
 ['boxlib', 'index', 'gas']
Please specify the requested field as an explicit tuple (<ftype>, <fname>).

What's the possible reason for this issue? Is it related to the version of yt?

Cvode Failed in a simple two-dimensional case

I am using PeleC to compute a detonation wave, expecting to obtain the cellular structures of a propagating detonation wave. A schematic of the computational domain is shown below: a small region of unburnt gas at high temperature and pressure is placed at the left to initialize the detonation wave, and the rest of the domain is filled with unburnt gas. For illustration, a stoichiometric hydrogen/air mixture is considered.
[schematic of the computational domain]

However, the computation fails quite rapidly with a cvode error message. I checked the output and found nothing nonphysical. I also tried changing the CFL number, the maximum refinement level, and the ode parameters, but failed to find a solution. Could you give some assistance on this issue? I have also attached the code for initialization and computation; please check DetoCell_XH2_2D.zip for more details.

It's also worth noting that sometimes I cannot use a high refinement ratio: the program fails repeatedly. But when I change to a small refinement ratio, typically 1 or 2, the program works just fine. What's the possible reason for this phenomenon? Is it related to the sub-cycling procedure, or just an inappropriate choice of refinement criteria?

Thanks in advance.

DetoCell_XH2_2D.zip

No NSCBC Treatment for Species Terms

Currently, when using the NSCBC treatment for BCs in PeleC, the characteristic treatment is used for velocities, pressure and density (for outflow BCs). Species mass fractions are not changed (looking at impose_NSCBC_3d for example), imposing a zero-gradient condition instead. This is leading to crashes when simulating flames exiting a domain. Ideally, the NSCBC treatment would be expanded to include species terms and take into account source and diffusion terms for these species, including reactions. Is this something that has been discussed previously? Currently I'm finding this is a barrier for any such simulations - I've tried the FOExtrap outflow BC as a workaround but pressure reflections in reacting simulations have killed these attempts.

Crash on free(): invalid pointer

One of my 3D PeleC runs crashed with a free(): invalid pointer after almost 4 hours on a few hundred cores. Below is the crash log that I was able to extract. Seems like this is happening in the cleanup stage after a Level 1 solve.

I don't know if this is a useful bug report, nor if it can be reproduced. But let me know if you are interested in trying to chase this one down, and I can provide more details and the case files.

AMReX commit 93fb085d28349 (Nov 1 2019 - this is the "current submodule" for PeleC)
PeleC commit 1821d36 (Feb 13 2020)

[Level 1 step 8611] Advanced 20480 cells
[Level 1 step 8612] ADVANCE with dt = 1.861440021e-08
... Computing MOL source term at t^{n} 
... Computing MOL source term at t^{n+1} 
... Computing reactions for dt = 1.861440021e-08
[Level 1 step 8612] Advanced 20480 cells
*** glibc detected *** PeleC3d.gnu.MPI.ex: free(): invalid pointer: 0x0000000003955350 ***
======= Backtrace: =========
/lib64/libc.so.6[0x398fe75e5e]
/lib64/libc.so.6[0x398fe78cad]
PeleC3d.gnu.MPI.ex[0x501660]
PeleC3d.gnu.MPI.ex[0x5015ef]
PeleC3d.gnu.MPI.ex[0x4fc32e]
PeleC3d.gnu.MPI.ex[0x4fc41e]
PeleC3d.gnu.MPI.ex[0x54bbb6]
PeleC3d.gnu.MPI.ex[0x54bc3a]
PeleC3d.gnu.MPI.ex[0x5489d3]
PeleC3d.gnu.MPI.ex[0x4fd601]
PeleC3d.gnu.MPI.ex[0x5d9644]
PeleC3d.gnu.MPI.ex[0x7a1139]
PeleC3d.gnu.MPI.ex[0x5d6ce7]
PeleC3d.gnu.MPI.ex[0x5d7b48]
PeleC3d.gnu.MPI.ex[0x5cbcbf]
PeleC3d.gnu.MPI.ex[0x41600a]
/lib64/libc.so.6(__libc_start_main+0x100)[0x398fe1ed20]
PeleC3d.gnu.MPI.ex[0x41690d]


Backtrace.139:
(parsed with parse_bt.py)

0: amrex::BLBackTrace::print_backtrace_info(_IO_FILE*) at /home/asmunde/codes/amrex/Src/Base/AMReX_BLBackTrace.cpp:167

1: amrex::BLBackTrace::handler(int) at /home/asmunde/codes/amrex/Src/Base/AMReX_BLBackTrace.cpp:71

8: std::_Rb_tree<std::pair<amrex::IntVect, amrex::IntVect>, std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray>, std::_Select1st<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> >, std::less<std::pair<amrex::IntVect, amrex::IntVect> >, std::allocator<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> > >::_M_erase(std::_Rb_tree_node<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> >*) at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:1854

9: std::_Rb_tree<std::pair<amrex::IntVect, amrex::IntVect>, std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray>, std::_Select1st<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> >, std::less<std::pair<amrex::IntVect, amrex::IntVect> >, std::allocator<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> > >::_M_erase(std::_Rb_tree_node<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> >*) at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_vector.h:434
 (inlined by) ?? at /home/asmunde/codes/amrex/Src/Base/AMReX_Vector.H:29
 (inlined by) ?? at /home/asmunde/codes/amrex/Src/Base/AMReX_FabArrayBase.H:225
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_pair.h:198
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/ext/new_allocator.h:140
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/alloc_traits.h:487
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:650
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:658
 (inlined by) std::_Rb_tree<std::pair<amrex::IntVect, amrex::IntVect>, std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray>, std::_Select1st<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> >, std::less<std::pair<amrex::IntVect, amrex::IntVect> >, std::allocator<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> > >::_M_erase(std::_Rb_tree_node<std::pair<std::pair<amrex::IntVect, amrex::IntVect> const, amrex::FabArrayBase::TileArray> >*) at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:1858

10: amrex::FabArrayBase::flushTileArray(amrex::IntVect const&, bool) const at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/ext/new_allocator.h:125
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/alloc_traits.h:462
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:592
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:659
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:2477
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_tree.h:1125
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_map.h:1032
 (inlined by) amrex::FabArrayBase::flushTileArray(amrex::IntVect const&, bool) const at /home/asmunde/codes/amrex/Src/Base/AMReX_FabArrayBase.cpp:1562

11: amrex::FabArrayBase::clearThisBD(bool) at /home/asmunde/codes/amrex/Src/Base/AMReX_FabArrayBase.cpp:1614

12: amrex::FabArray<amrex::CutFab>::clear() at /home/asmunde/codes/amrex/Src/Base/AMReX_FabArray.H:973

13: amrex::FabArray<amrex::CutFab>::~FabArray() at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_vector.h:434
 (inlined by) ?? at /home/asmunde/codes/amrex/Src/Base/AMReX_Vector.H:29
 (inlined by) amrex::FabArray<amrex::CutFab>::~FabArray() at /home/asmunde/codes/amrex/Src/Base/AMReX_FabArray.H:1129

14: amrex::EBDataCollection::~EBDataCollection() at /home/asmunde/codes/amrex/Src/EB/AMReX_EBDataCollection.cpp:72 (discriminator 1)

15: amrex::EBFArrayBoxFactory::~EBFArrayBoxFactory() at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/ext/atomicity.h:49
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/ext/atomicity.h:82
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/shared_ptr_base.h:166
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/shared_ptr_base.h:684
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/shared_ptr_base.h:1123
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/shared_ptr.h:93
 (inlined by) ?? at /home/asmunde/codes/amrex/Src/EB/AMReX_EBFabFactory.H:27
 (inlined by) amrex::EBFArrayBoxFactory::~EBFArrayBoxFactory() at /home/asmunde/codes/amrex/Src/EB/AMReX_EBFabFactory.H:27

16: amrex::AmrLevel::~AmrLevel() at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_construct.h:107
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_construct.h:137
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_construct.h:206
 (inlined by) ?? at /share/apps/modulessoftware/gcc/gcc-7.3.0/include/c++/7.3.0/bits/stl_vector.h:434
 (inlined by) ?? at /home/asmunde/codes/amrex/Src/Base/AMReX_Vector.H:29
 (inlined by) amrex::AmrLevel::~AmrLevel() at /home/asmunde/codes/amrex/Src/Amr/AMReX_AmrLevel.cpp:533

17: PeleC::~PeleC() at /home/asmunde/codes/PeleC/Source/PeleC.cpp:599

18: amrex::Amr::regrid(int, double, bool) at /home/asmunde/codes/amrex/Src/Amr/AMReX_Amr.cpp:2917

19: amrex::Amr::timeStep(int, double, int, int, double) at /home/asmunde/codes/amrex/Src/Amr/AMReX_Amr.cpp:2030

20: amrex::Amr::coarseTimeStep(double) at /home/asmunde/codes/amrex/Src/Amr/AMReX_Amr.cpp:2439

21: main at /home/asmunde/codes/PeleC/Source/main.cpp:173

23: _start at ??:?


Restarting a reacting simulation from an inert checkpoint file

If a checkpoint file has been generated by the code compiled with USE_REACT = FALSE, the code fails to restart from this checkpoint file if the code is recompiled with USE_REACT = TRUE.

The output throws this error:

restarting calculation from file: chk0000386000
Starting to call amrex_probinit ...
13 variables found in PMF file
2 data lines found in PMF file
Successfully run amrex_probinit
Successfully read inputs file ...
amrex::Error::5::operator>>(istream&,Box&): expected '(' !!!
SIGABRT

I suspect that when the code is compiled with USE_REACT = TRUE, it looks for Reactions_Type data that are missing from the checkpoint file. I tried adding state_in_checkpoint[Reactions_Type] = 0 in the routine PeleC::set_state_in_checkpoint in IO.cpp to force the reader to ignore the missing data, but it was unsuccessful.

EB Isothermal BC for a supersonic channel case (@Mach 2.0)

Hello everyone,
I am trying to simulate supersonic flow inside a channel using PeleC. To do so I have taken the EB-Bluffbody case in the Exec/RegTests folder and made the following changes to the inputs.3d file:

  1. Made changes to the domain:
geometry.prob_hi     =  15.0  6.0  1.5
amr.n_cell           =  160   64   16
  2. Modified BCs using:
pelec.lo_bc       =  "Hard"     "NoSlipWall" "Interior"
pelec.hi_bc       =  "FOExtrap" "NoSlipWall" "Interior"
  3. Made changes to the EB to cover the entire domain:
eb2.use_eb2 = 1
eb2.geom_type = box
eb2.box_lo = 0.0 0.0 0.0
eb2.box_hi = 15.0 6.0 1.5
eb2.box_has_fluid_inside = 1
ebd.boundary_grad_stencil_type = 0
  4. Changed the problem parameters:
prob.p = 1013250.0
prob.rho = 0.00116
prob.vx_in =  70000.0
prob.vy_in =  0.0
prob.Re_L = 625.0
prob.Pr = 0.7

Also, since I want the walls to be isothermal at 300 K, I left

pelec.eb_boundary_T = 300
pelec.eb_isothermal = 1

as is. My question is: is this methodology right for the problem at hand, or do I need to modify something more to run this case as intended? The results obtained from the above inputs file look like this:
[screenshot of the results]
Here are the contours of the Mach number:
[screenshot of the Mach number contours]

There is a nice shock wave and its reflections; however, the walls don't seem to maintain the 300 K isothermal boundary condition, and the temperatures instead vary along the length of the channel. Hence the question about the methodology behind the implementation. Thanks in advance.

amrex::Abort::0::Unable to open input file Turb.test2/HDR !!!

Hi, something went wrong when I tried to run Exec/Production/ChallengeProblem, with this error note:

Initializing turbInflow inj1 with file Turb.test2 (location coordinates in will be scaled by 1 and velocity out to be scaled by 100) 
amrex::Abort::0::Unable to open input file Turb.test2/HDR !!!
SIGABRT
See Backtrace.0 file for details
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 6.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

I have no idea what exactly caused it. Could you please offer me some advice?
Thanks!

Compilation error --Exec/RegTests/PMF

Setting "USE_CUDA = TRUE" currently causes the following problem:

"nvcc -ccbin=g++ -Xcompiler=' -Werror=return-type -g -O3 -pthread -std=c++14' --std=c++14 -Wno-deprecated-gpu-targets -m64 -arch=compute_60 -code=sm_60 -maxrregcount=255 --expt-relaxed-constexpr --expt-extended-lambda -Xcudafe --diag_suppress=esa_on_defaulted_function_ignored --Werror cross-execution-space-call -lineinfo --ptxas-options=-O3 --ptxas-options=-v --use_fast_math -x cu -dc -DBL_NO_FORT -DAMREX_GIT_VERSION="" -DAMREX_USE_GPU_RDC -DAMREX_USE_CUDA -DAMREX_USE_GPU -DBL_COALESCE_FABS -DBL_SPACEDIM=3 -DAMREX_SPACEDIM=3 -DBL_FORT_USE_UNDERSCORE -DAMREX_FORT_USE_UNDERSCORE -DBL_Linux -DAMREX_Linux -DNDEBUG -DAMREX_GPU_MAX_THREADS=256 -DCRSEGRNDOMP -DPELEC_USE_REACTIONS -DPELEC_EOS_FUEGO -Itmp_build_dir/s/3d.gnu.CUDA.EXE -I. -I../../../Submodules/PelePhysics/Source -I../../../Submodules/PelePhysics/Eos/Fuego -I../../../Submodules/PelePhysics/Reactions -I../../../Submodules/PelePhysics/Transport/Simple -I../../../Submodules/AMReX/Src/Base -I../../../Submodules/AMReX/Src/Amr -I../../../Submodules/AMReX/Src/Boundary -I../../../Submodules/AMReX/Src/AmrCore -I. -I../../../Submodules/PelePhysics/Source -I../../../Submodules/PelePhysics/Eos/Fuego -I../../../Submodules/PelePhysics/Reactions -I../../../Submodules/PelePhysics/Support/Fuego/Evaluation -I../../../Submodules/PelePhysics/Support/Fuego/Mechanism/Models/LiDryer -I../../../Submodules/PelePhysics/Transport/Simple -I../../../Submodules/AMReX/Src/Base -I../../../Submodules/AMReX/Src/Amr -I../../../Submodules/AMReX/Src/Boundary -I../../../Submodules/AMReX/Src/AmrCore -I../../../Source -I../../../Source/Params/param_includes -I../../../Submodules/AMReX/Tools/C_scripts -isystem /usr/local/cuda/include -c ../../../Submodules/PelePhysics/Support/Fuego/Mechanism/Models/LiDryer/mechanism.cpp -o tmp_build_dir/o/3d.gnu.CUDA.EXE/mechanism.o
../../../Submodules/PelePhysics/Support/Fuego/Mechanism/Models/LiDryer/mechanism.H(2304): warning: variable "k_r" was declared but never referenced
Segmentation fault (core dumped)
../../../Submodules/AMReX/Tools/GNUMake/Make.rules:198: recipe for target 'tmp_build_dir/o/3d.gnu.CUDA.EXE/mechanism.o' failed
make: *** [tmp_build_dir/o/3d.gnu.CUDA.EXE/mechanism.o] Error 139"

--->The current cuda version is 10.0, gcc version is 7.5.0.

--->The current GNUmakefile is set to

"# AMReX
DIM = 3
COMP = gnu
PRECISION = DOUBLE

PROFILE = FALSE
TINY_PROFILE = FALSE
COMM_PROFILE = FALSE
TRACE_PROFILE = FALSE
MEM_PROFILE = FALSE
USE_GPROF = FALSE

USE_MPI = FALSE
USE_OMP = FALSE
USE_CUDA = TRUE
USE_HIP = FALSE
USE_DPCPP = FALSE

DEBUG = FALSE
FSANITIZER = FALSE
THREAD_SANITIZER = FALSE

USE_REACT = TRUE
Reactor_dir := rk64
USE_EB = FALSE
USE_MASA = FALSE
Eos_dir := Fuego
Chemistry_Model := LiDryer
Transport_dir := Simple

Bpack := ./Make.package
Blocs := .
PELEC_HOME := ../../..
include $(PELEC_HOME)/Exec/Make.PeleC"

Compilation error with OpenMP

Hi,
When I try compiling the EB-BluffBody case with USE_OMP = TRUE, I get the following error:

/home/dash/PeleC/Source/PeleC.cpp: In lambda function:
/home/dash/PeleC/Source/PeleC.cpp:2006:24: error: ‘captured_allow_small_energy’ was not declared in this scope; did you mean ‘captured_allow_negative_energy’?
 2006 |         i, j, k, sarr, captured_allow_small_energy,
      |                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                        captured_allow_negative_energy

I am using OpenMP version 4.5

$ echo |cpp -fopenmp -dM |grep -i open
#define _OPENMP 201511

However, I am not getting such issues when I compile with USE_MPI = TRUE.

Thanks.

PMF compilation fails at link time with Intel v18.0.1 compiler on Cori

Using the Intel v18.0.1 compiler on Cori, PMF compilation fails at link time with many error messages about STL:

friesen@cori07:PMF> head build.err
ifort: remark #10397: optimization reports are generated in *.optrpt files in the output location
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: tmp_build_dir/o/3d.intel.haswell.MPI.EXE/main.o: in function `std::_Deque_base<std::string, std::allocator<std::string> >::_M_destroy_nodes(std::string**, std::string**)':
/global/homes/f/friesen/PeleC/Source/main.cpp:44: multiple definition of `main'; /opt/intel/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64_lin/for_main.o:for_main.c:(.text+0x0): first defined here
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: /opt/intel/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64_lin/for_main.o: in function `main':
for_main.c:(.text+0x2a): undefined reference to `MAIN__'
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: tmp_build_dir/o/3d.intel.haswell.MPI.EXE/LiDryer.o:(.eh_frame+0x11): undefined reference to `__gxx_personality_v0'
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: tmp_build_dir/o/3d.intel.haswell.MPI.EXE/LiDryer.o:(.eh_frame+0x286a): undefined reference to `__gxx_personality_v0'
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: tmp_build_dir/o/3d.intel.haswell.MPI.EXE/AMReX.o:(.data+0x0): undefined reference to `std::cout'
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: tmp_build_dir/o/3d.intel.haswell.MPI.EXE/AMReX.o:(.data+0x8): undefined reference to `std::cerr'
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: tmp_build_dir/o/3d.intel.haswell.MPI.EXE/AMReX.o: in function `amrex::Version()':
friesen@cori07:PMF>

The commits I used to produce this error are:

amrex: 884d3194c
PelePhysics: 2f533e5
PeleC: 9ce5f31

Unable to compile

Hi all,

I just cloned the PeleC repo using git clone --recursive, and when I try to compile the code I receive the following error:
[screenshot of the compile error]

I have been using PeleC for a while, and noticed the issue after I pulled down the latest version. I pulled the latest version so I could check out ntw/add_plot_derivs, and noticed before running git checkout that I was unable to compile. I then made a new clone and still get the issue. What is going on?

TIA

Assertion dt>0.0 failed in PMF-SRK

Hello,

I tried to run the PMF-SRK case and after about 18 timesteps I get a message saying

Assertion `dt > 0.0' failed, file "../../../Source/Timestep.H", line 141, Msg: "ERROR: dt needs to be positive." !!!
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode 6.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

I encounter the same issue when running a similar case with LuDME.

Any fix that has worked previously?

Thanks a lot

No valid riemann solver found

Hello,
I am attempting a 2D version of the EB_BackStep case in Tutorials. To do so I set DIM=2 in the GNUmakefile and created a new inputs.2d file instead of the 3d version. Running make said SUCCESS, but when I run the program with the inputs.2d file I get an error saying Aborting, no valid riemann solver. I tried adding pelec.riemann_solver=0 (following the _cpp_parameters file in Source) or pelec.Riemann=0 as found in the PMF inputs. Observing the PMF 2D inputs, I notice that MOL has been turned off for some reason. Has this got anything to do with this (i.e., the 2D vs 3D case)? I tried turning off HYP_TYPE=MOL in the makefile but that throws an error too.
Any help is much appreciated.
Thanks

How to generate a 2D PMF case?

I’d like to generate a 2D PMF case.

If I simply change the dimension in GNUmake into 2, the information in z axis is reduced.

however, the information in Z axis is the most important using the default settings.

What should I do to reduce the information in x or y?

How to access flow gradients for calculations/visualization?

Howdy,

I am trying to evaluate the strong form of the second law of thermodynamics (as in this paper) for hydrogen combustion. In order to do this, I need access to the velocity gradient tensor, as well as the temperature gradient of the flow. In order to evaluate the entropy inequality, I will also need to calculate the gradients of the mole fractions (assuming that isn't already calculated). It has come to my attention that this is nontrivial within PeleC, as the stencils used get complicated around refinement boundaries. So far, I have been trying to use new derived variables, but am not sure if that is the best way to go about this.

Thank you for your help.

The stress in embedded boundary

Hi, everyone. I find $\tau$ in the function "pc_apply_eb_boundry_visc_flux_stencil" set as follows:

      const amrex::Real tauDotN[AMREX_SPACEDIM] = {AMREX_D_DECL(
        (static_cast<amrex::Real>(4.0 / 3.0) * coeff(iv, dComp_mu) +
         coeff(iv, dComp_xi)) *
          dUtdn[0],
        coeff(iv, dComp_mu) * dUtdn[1], coeff(iv, dComp_mu) * dUtdn[2])};

I understand that this code calculates $(\tau_{nn},\tau_{nt})$ in 2D, where coeff(iv, dComp_mu) is the viscosity $\mu$, coeff(iv, dComp_xi) is the bulk viscosity $\xi$, and dUtdn[0] is the normal velocity gradient $\frac{\partial U_n }{\partial n}$.
There is something about this code I don't understand. The stress $\tau_{nn}$ can be written as $\tau_{nn} = 2\mu \frac{\partial U_n }{\partial n} + \xi\, \nabla \cdot U$. Applying Stokes' hypothesis $\xi = - \frac{2}{3}\mu$, the equation can be written as $\tau_{nn} = \frac{4}{3}\mu \frac{\partial U_n }{\partial n} + \xi \frac{\partial U_t }{\partial t}$. But the code actually calculates $\tau_{nn} = \frac{4}{3}\mu \frac{\partial U_n }{\partial n} + \xi \frac{\partial U_n}{\partial n}$.
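
For reference, the algebraic step implied in the question, written out in its own notation (my transcription; it assumes the wall-local split $\nabla \cdot U = \frac{\partial U_n}{\partial n} + \frac{\partial U_t}{\partial t}$):

    \tau_{nn} = 2\mu \frac{\partial U_n}{\partial n} + \xi \nabla \cdot U
              = (2\mu + \xi) \frac{\partial U_n}{\partial n} + \xi \frac{\partial U_t}{\partial t}
              = \frac{4}{3}\mu \frac{\partial U_n}{\partial n} + \xi \frac{\partial U_t}{\partial t},
    \qquad \text{using } \xi = -\tfrac{2}{3}\mu \text{ in the last step.}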

I can't fully understand this code; if you can help me, I would be very grateful!

Compile error in regression Test PMF

Hello,

I have been using git bisect to find a possible bug while compiling PeleC/Exec/RegTests/PMF.
Maybe I don't have the right environment:
I compile on Cori.
The modules loaded are:
[screenshot of the loaded modules]

I do make -j 32 on Haswell with the default GNUMakefile.

The first "bad" commit is:

[screenshot of the first bad commit]

Thanks,
Hugo

#52 Broke compiling with `HYP_TYPE=MOL USE_EB=TRUE`

Regression tests MMS6 and MMS7 use the following in the compile line: HYP_TYPE=MOL USE_EB=TRUE. These tests failed to compile last night. As far as I can tell, this is because #52 introduces the following in line 122 of PeleC/Exec/Make.PeleC:

Pdirs := Base EB Amr Boundary AmrCore F_Interfaces F_Interfaces/Base

If I remove F_Interfaces F_Interfaces/Base from that line, then I can compile the test cases again. I don't want to just do that because they might be there for a good reason, so I am looking for guidance on how to fix this. Tagging @marient @drummerdoc @rgrout and @jrood-nrel because you might have thoughts. Thanks!

Homogeneous label name between PeleC and PeleLM

The Compute_Sl tool from PeleLM could work on PMF from PeleC, if the temperature label name had no capital letter.

PeleLM: temp
PeleC: Temp

Maybe the label could be changed to be homogeneous with PeleLM?

EB Bug for simple cylinder and no flow

I am documenting this here in case someone wants to help out.

Issue:

A zero-flow setup in an EB cylinder leads to wonky high velocities at certain coarse-fine interfaces after 1 time step; see the image below. This should not happen.

[screenshot: spurious high velocities at coarse-fine interfaces]

Expected behavior

Velocities should stay zero in the domain. Interestingly, changing the blocking_factor to 8 instead of 4 resolves this issue and gives:

[screenshot: velocities remain zero with blocking_factor = 8]

Steps to reproduce

  1. Build the reproducer case located in this branch in this repo
  2. Run the case that creates the bug: srun -n 36 ./PeleC3d.gnu.MPI.ex inputs_ex max_step=1 amr.plot_int=1 amr.max_level=1 amr.blocking_factor=4
  3. Run the case that does not have the bug srun -n 36 ./PeleC3d.gnu.MPI.ex inputs_ex max_step=1 amr.plot_int=1 amr.max_level=1 amr.blocking_factor=8

Notes

  • changing the number of procs to 1 does not resolve the issue
  • changing the cylinder diameter to 4 also resolves the issue.
  • setting pelec.do_reflux=0 makes it go away

People who might be interested... @baperry2 @hariswaran @jrood-nrel @nataraj2

NoSlipWall in pelelm and pelec

The following two cases were calculated with PeleLM and PeleC. In the PeleLM results we can see clear boundary layers where the velocity is very small, but the PeleC result has no boundary layer and the velocity along a vertical line is uniform. Why?
[comparison figure of the two results]

Merge #20 broke many things. Revert?

@emotheau and @rgrout

Merge #20 into development seems to have broken quite a few things:

  • in the Production folder, the following cases disappeared: VIF, Jet2d, Ebdemo, VT
  • the following tests are failing: Sod (crashes with NaNs), FIAB (diffing), all the MMS (not compiling), TG 2D vortex case (diffing)

There seems to be a lot going on in the merge. Is there a point in breaking it into pieces and doing this incrementally?

Error while compiling

Using the latest version of PeleC, I tried compiling HIT and got:

In file included from ../../../Submodules/PelePhysics/Reactions/ReactorArkode.cpp:1:
../../../Submodules/PelePhysics/Reactions/ReactorArkode.H:5:10: fatal error: arkode/arkode_arkstep.h: No such file or directory
    5 | #include <arkode/arkode_arkstep.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [../../../Submodules/AMReX/Tools/GNUMake/Make.rules:199: tmp_build_dir/o/3d.gnu.EXE/ReactorArkode.o] Error 1
make: *** Waiting for unfinished jobs....

heat conduction test fails

Has anyone tested the 1D heat conduction problem? I found PeleC fails to get accurate temperature distributions.
[temperature distribution comparison figure]

PeleC submodule reference for AMReX does not work anymore

When doing a clone of PeleC with --recursive, as instructed in the PeleC README, the AMReX commit reference does not resolve correctly:

fatal: reference is not a tree: 93fb085d28349cab33892726dfa3107a85a7719e
(....)
Unable to checkout '93fb085d28349cab33892726dfa3107a85a7719e' in submodule path 'Submodules/AMReX'

I've also seen some inconsistencies in commit hashes for AMReX when trying to go back to the same commit as in a setup on an old machine - I couldn't check it out when trying to copy the hash, and going back in the log on the new machine, I could see the hashes were different for identical commits.

Perhaps AMReX has done a history rewrite or similar?

Comparing with an old repo I have, it seems these two are the old and new hashes for that AMReX commit on Nov. 1 2019 07:54:09 that PeleC wants to refer to:

old:  93fb085d28349cab33892726dfa3107a85a7719e
new:  49499fb36839461460e53278a1d299a20d726c30

Compilation issue gcc 9.3.0 CentOS 7

Hello,

I am supporting several users who report they cannot compile PeleC under CentOS 7.9.2009 on our HPC systems. I am also unable to compile the code. We have tried gcc 4.8.2, gcc 8.2, and gcc 9.3.0. At least one user reports they can compile with gcc 9.3.0 on their home machine. Under CentOS the errors depend on the version of gcc, but for gcc 9.3.0 we get:

module load gcc-9.3.0-gcc-9.2.0-f67yyal
export PELEC_HOME=${HOME}/users/laggad/PeleC
cd ${PELEC_HOME}/Exec/RegTests/PMF
make clean
make

g++ -Werror=return-type -g -O3 -std=c++11 -pthread -DBL_NO_FORT -DAMREX_GIT_VERSION="21.01-47-gc2beebd492f1" -DBL_SPACEDIM=3 -DAMREX_SPACEDIM=3 -DBL_FORT_USE_UNDERSCORE -DAMREX_FORT_USE_UNDERSCORE -DBL_Linux -DAMREX_Linux -DNDEBUG -DCRSEGRNDOMP -DPELEC_USE_REACTIONS -DPELEC_EOS_FUEGO -Itmp_build_dir/s/3d.gnu.EXE -I. -I../../../Submodules/PelePhysics/Eos/Fuego -I../../../Submodules/PelePhysics/Reactions -I../../../Submodules/PelePhysics/Transport/Simple -I../../../Submodules/AMReX/Src/Base -I../../../Submodules/AMReX/Src/Amr -I../../../Submodules/AMReX/Src/Boundary -I../../../Submodules/AMReX/Src/AmrCore -I. -I../../../Submodules/PelePhysics/Eos/Fuego -I../../../Submodules/PelePhysics/Support/Fuego/Evaluation -I../../../Submodules/PelePhysics/Support/Fuego/Mechanism/Models/LiDryer -I../../../Submodules/PelePhysics/Transport/Simple -I../../../Submodules/AMReX/Src/Base -I../../../Submodules/AMReX/Src/Amr -I../../../Submodules/AMReX/Src/Boundary -I../../../Submodules/AMReX/Src/AmrCore -I../../../Source -I../../../Source/Params/param_includes -I../../../Submodules/AMReX/Tools/C_scripts -c ../../../Source/PeleC.cpp -o tmp_build_dir/o/3d.gnu.EXE/PeleC.o
../../../Source/PeleC.cpp: In constructor ‘PeleC::PeleC(amrex::Amr&, int, const amrex::Geometry&, const amrex::BoxArray&, const amrex::DistributionMapping&, amrex::Real)’:
../../../Source/PeleC.cpp:417:32: error: ‘make_unique’ is not a member of ‘std’
old_sources[src_list[n]] = std::make_unique<amrex::MultiFab>(
^
../../../Source/PeleC.cpp:417:64: error: expected primary-expression before ‘>’ token
old_sources[src_list[n]] = std::make_unique<amrex::MultiFab>(
^
../../../Source/PeleC.cpp:419:32: error: ‘make_unique’ is not a member of ‘std’
new_sources[src_list[n]] = std::make_unique<amrex::MultiFab>(
^
../../../Source/PeleC.cpp:419:64: error: expected primary-expression before ‘>’ token
new_sources[src_list[n]] = std::make_unique<amrex::MultiFab>(
^
../../../Source/PeleC.cpp: In member function ‘const amrex::iMultiFab* PeleC::build_interior_boundary_mask(int)’:
../../../Source/PeleC.cpp:2157:21: error: ‘make_unique’ is not a member of ‘std’
ib_mask.push_back(std::make_unique<amrex::iMultiFab>(
^
../../../Source/PeleC.cpp:2157:54: error: expected primary-expression before ‘>’ token
ib_mask.push_back(std::make_unique<amrex::iMultiFab>(
^
make: *** [tmp_build_dir/o/3d.gnu.EXE/PeleC.o] Error 1

Any help would be appreciated.

All the best,

Matthew

GPU used in EB RegTests

I tried to use nvcc to compile several test examples using EB, but some fail to compile and others diverge when run. Are there any EB tests that can run successfully on the GPU? If so, please tell me which ones, and I will be very grateful for your help :)

Crashing (regression) with EB

I've been running some cases with PeleC with embedded boundaries, of an expanding channel with a flame (3D). Recently I pulled and compiled with the latest development branch of PeleC & PelePhysics & Amrex, probably had not done that since september-ish.

After this pull, all the EB cases I run (even tutorials like EB_Channel) crash after a few timesteps with error messages like shown in the quote below. If I deactivate EB (and MOL) the same cases run fine.

Is there a known regression I'm hitting? Maybe even a known fix? :)

Unfortunately I haven't been able to identify a consistent set of commits that I can return to for running my cases while this is being fixed. If anyone knows of such a set, I'd be very happy to hear about it.

[Level 1 step 2] ADVANCE with dt = 1.387528863e-08
... Computing MOL source term at t^{n}
... Computing MOL source term at t^{n+1}

Error: PeleC_util.F90::compute_temp 3 112 13
... density out of bounds -1.8103574798193642E-003

amrex::Error::205::Error:: compute_temp_nd.f90 !!!
SIGABRT

Running EB-C11 with higher levels of AMR throws error

Hello everyone,

I tried running the EB-C11 (Multispecies Sod Shock Tube) with amr.max_level = 2 after introducing tagging based on temperature as:

tagging.tempgrad = 1000
tagging.max_tempgrad_lev = 3

I get the following error:

Doing initial redistribution... 
amrex::Abort::0::Grids must be properly nested for EB !!!
SIGABRT
See Backtrace.0 file for details

Anything I am missing here?

Thanks a lot

EB RegTest shows oscillations behind the shock

Hello everyone,
I am trying to simulate a shock interacting with a ramp using PeleC, making changes to EB C-14. The difference is that I changed the angle of the ramp to 37 degrees and the inflow Mach number to Ma = 4.5. But there are oscillations behind the shock.
[contour plot showing the oscillations behind the shock]

The number of grids in the figure above is 2048 x 1536. No mesh refinement is used in the calculation. I will submit the specific settings in the attachment.
EB_RampShock.zip

I tried to use a coarser mesh to dissipate the oscillations, but the effect is not very good. I also tried the artificial viscosity coefficient, but found no difference after changing pelec.difmag.
My questions are whether my artificial viscosity factor is set correctly, what causes this non-physical oscillation and how to suppress it, and whether it is related to the EB treatment of the boundary. My sincerest thanks in advance.

EB Bug for simple cylinder

Hi, everyone. I found that the EB setting under certain conditions can cause the calculation to diverge. This error can be reproduced in EB-C13. With the settings in example-1.inp, the calculation diverges during initialization. When I refine the mesh to 80x80 in example-2.inp, the issue is fixed. When I coarsen the mesh to 36x36 in example-3.inp, the issue is also fixed. This should be a bug in EB; I think there is a data overflow during initialization.
example-inp.zip
