
amr-wind's People

Contributors

asalmgren, ashesh2512, dependabot[bot], diederikb, ewquon, gantech, gdeskos, itopcuoglu, jbbel, jrood-nrel, lawrenceccheung, marchdf, maxpkatz, mbkuhn, mchurchf, mic84, michaeljbrazell, misi9170, moprak-nrel, ndevelder, neilmatula, paulmullowney, psakievich, rybchuk, sayerhs, sbidadi9, stephan-rohr, tonyinme, weiqunzhang, yuya737


amr-wind's Issues

Regression tests not set up properly to fail with different results

Currently the regression tests use the fcompare utility merely to compare a run against a gold run and report/print differences. fcompare still returns a success code even when it finds differences, so the tests always pass as long as the run executes, even if the results differ.

Two options:

  • Introduce a check by processing the output of fcompare (see also: Run regression tests without MPI)
  • Change the testing behavior to instead use fextrema and then compare the results against gold files using fextrema_compare.py

The latter option means the gold files will just be text (whitespace-delimited tables) that can easily be stored with the repo, unlike our current situation, and it allows us to do testing in CI environments. However, it adds additional Python package dependencies: pandas and numpy.
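
For illustration, a minimal sketch of the comparison logic (the proposed fextrema_compare.py would do the equivalent with pandas/numpy; the tolerance and exit codes here are assumptions):

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read every numeric token from a whitespace-delimited extrema table,
// skipping header strings such as variable names.
static std::vector<double> read_numbers(const char* fname)
{
    std::ifstream in(fname);
    std::vector<double> vals;
    std::string tok;
    while (in >> tok) {
        char* end = nullptr;
        const double v = std::strtod(tok.c_str(), &end);
        if (end != tok.c_str() && *end == '\0') vals.push_back(v);
    }
    return vals;
}

int main(int argc, char** argv)
{
    if (argc != 3) return 2; // usage: compare <test_table> <gold_table>
    const auto test = read_numbers(argv[1]);
    const auto gold = read_numbers(argv[2]);
    if (test.size() != gold.size()) return 1;

    const double rtol = 1.0e-12; // hypothetical tolerance
    for (std::size_t i = 0; i < test.size(); ++i) {
        const double denom = std::max(std::abs(gold[i]), 1.0e-300);
        if (std::abs(test[i] - gold[i]) / denom > rtol) {
            std::cerr << "Mismatch at entry " << i << "\n";
            return 1; // non-zero exit => CI can fail the test, unlike fcompare today
        }
    }
    return 0;
}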

Inflow outflow last time-step

When running an inflow/outflow simulation, there are some checks in the code that do not allow the last time-step to be completed if the last time of the simulation is the same as the last time in the boundary data.

This error shows up at the last time-step:

terminate called after throwing an instance of 'std::runtime_error'
  what():  Assertion `(m_in_times[0] <= time) and (time < m_in_times.back())' failed, file "/home/lmartine/amr-wind/amr-wind/wind_energy/ABLBoundaryPlane.cpp", line 483

amr-wind unstable when built with intel 19

I tried building the latest amr-wind (449d2d6) with intel/19.0.5.281 (on Sandia clusters), and the code is unstable when I try to run an ABL case.

For instance, using inputabl.i, amr-wind blows up within 10-15 iterations: the CFL numbers become unreasonably large before devolving into NaNs.

There are no issues when I use intel/18.0.5.274: the ABL case is stable and runs to completion. Previous commits, such as 9c1adac, compiled and ran fine with intel 19. After talking to @michaeljbrazell, this may be related to the fieldAverages that were put in.

One possible fix suggested by Mike may be to alter compute_averages() in fieldplaneaveraging.cpp to

        amrex::ParallelFor(
            bx, ncomp,
            [=] AMREX_GPU_DEVICE(int i, int j, int k, int n) noexcept {
                const int ind = idxOp(i, j, k);
                // Replace the atomic device-side accumulation ...
//                amrex::HostDevice::Atomic::Add(
//                    &line_avg[ncomp * ind + n], fab_arr(i, j, k, n) * denom);
                // ... with a plain (non-atomic) accumulation:
                m_line_average[ncomp * ind + n] += fab_arr(i, j, k, n) * denom;
            });
    }
    // The device-to-host copy is then no longer needed:
//    lavg.copyToHost(m_line_average.data(), m_line_average.size());

but I haven't attempted it yet.

I'll report any future progress on this issue.

Lawrence

Boundary conditions

Hello.
How can I set up time-dependent boundary conditions? I want to model a jet/flow from the left boundary with V_x = const*sin(t) for 5 < y < 7. My box has xlo, ylo = 0 and xhi, yhi = 10.
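
For reference, a minimal sketch of the requested profile itself (jet_vx and A are hypothetical names; how the profile would be hooked into the solver's inflow machinery is a separate question):

#include <cmath>

// Jet on the left (xlo) boundary: V_x = A*sin(t) inside 5 < y < 7,
// zero elsewhere on the boundary.
double jet_vx(double y, double t, double A)
{
    return (y > 5.0 && y < 7.0) ? A * std::sin(t) : 0.0;
}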

Segfault after grid adaption

I am trying to run the abl_godunov_cn case but with adaption turned on, by turning off constant density and specifying incflo.gradrhoerr. It looks like the grid adaption works, but the overall process segfaults before any new iterations are completed on the adapted grid. Solver version and input are below. It is entirely possible that I don't have all of the settings in place for AMR. I'm open to any suggestions on how best to get AMR to work with ABL cases.

Solver output before segfault
Regrid mesh ... time elapsed = 0.02479205467
Grid summary:
Level 0 343 grids 8000000 cells 100 % of domain
smallest grid: 24 x 24 x 24 biggest grid: 32 x 32 x 32
Level 1 427 grids 3472384 cells 5.4256 % of domain
smallest grid: 8 x 8 x 8 biggest grid: 32 x 32 x 32

Step: 100 dt: 0.3355257033 Time: 56.5592 to 56.8947
CFL: 0.95 (conv: 0.949537 diff: 0 src: 0.0209629 )

Godunov:
System Iters Initial residual Final residual
----------------------------------------------------------------------------
Segfault

Solver Build Settings
AMR-Wind (https://github.com/exawind/amr-wind)

AMR-Wind Git SHA :: 01187b8
AMReX version :: 20.09-80-g61734d3da08b ( 20.09-80-g61734d3da08b )

Exec. date :: Fri Oct 9 19:49:40 2020
Build date :: Oct 5 2020 19:14:13
C++ compiler :: GNU 7.3.0

MPI :: ON (Num. ranks = 96)
GPU :: OFF
OpenMP :: OFF

Solver Input
time.stop_time = 200.0 # Max (simulated) time to evolve
time.max_step = -1 # Max number of time steps

time.fixed_dt = -0.5 # Use this constant dt if > 0
time.cfl = 0.95 # CFL factor

io.KE_int = 1
io.line_plot_int = 1
time.plot_interval = 100 # Steps between plot files
time.checkpoint_interval = -1000 # Steps between checkpoint files
amr.plt_tracer = 1

incflo.gravity = 0. 0. -9.81 # Gravitational force (3D)
incflo.density = 1.0 # Reference density
incflo.constant_density = 0

incflo.use_godunov = 1
#incflo.diffusion_type = 1
transport.viscosity = 1.0e-5
transport.laminar_prandtl = 0.7
transport.turbulent_prandtl = 0.3333
turbulence.model = Smagorinsky
Smagorinsky_coeffs.Cs = 0.135

incflo.physics = ABL
ICNS.source_terms = BoussinesqBuoyancy CoriolisForcing ABLForcing
BoussinesqBuoyancy.reference_temperature = 300.0
ABL.reference_temperature = 300.0
CoriolisForcing.latitude = 41.3
ABLForcing.abl_forcing_height = 90

incflo.velocity = 6.128355544951824 5.142300877492314 0.0

ABL.temperature_heights = 650.0 750.0 1000.0
ABL.temperature_values = 300.0 308.0 308.75

ABL.kappa = .41
ABL.surface_roughness_z0 = 0.15

amr.n_cell = 200 200 200 # Grid cells at coarsest AMR level
amr.max_level = 1 # Max AMR level in hierarchy
time.regrid_interval = 50
incflo.gradrhoerr = 0.0000000000003

geometry.prob_lo = 0. 0. 0. # Lo corner coordinates
geometry.prob_hi = 1000. 1000. 1000. # Hi corner coordinates
geometry.is_periodic = 1 1 0 # Periodicity x y z (0/1)

zlo.type = "wall_model"
zlo.temperature_type = "fixed_gradient"
zlo.temperature = 0.0

zhi.type = "slip_wall"
zhi.temperature_type = "fixed_gradient"
zhi.temperature = 0.003 # tracer is used to specify potential temperature gradient

incflo.verbose = 0 # incflo_level

amrex.fpe_trap_invalid = 0 # Trap NaNs

RayleighTaylor test failing on Intel build

RayleighTaylor regression test has been failing with small diffs since #173 was merged. The most likely cause is the introduction of the -fPIC flag during the build. All other changes are to install rules and shouldn't affect the compilation process itself.

Need to investigate whether 31d4c9e also fails similarly if the -fPIC flag is introduced in Intel builds, or whether there is something else that changed in #173.

Reuse MacProjector and NodalProjector instances

Issue: Currently AMR-Wind creates a new amrex::MacProjector and amrex::NodalProjector within the predictor and corrector steps during time integration. When using hypre as an external solver (or bottom solver), this prevents reuse of the hypre setup and preconditioner and is inefficient.

Desired: Store instances of amrex::MacProjector and amrex::NodalProjector across timesteps, resetting them only after a regrid.
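
A minimal sketch of the desired pattern, with a stand-in Projector type (the real amrex::MacProjector/NodalProjector constructors take the full linear-operator setup, omitted here):

#include <memory>

// Stand-in for amrex::MacProjector / amrex::NodalProjector.
struct Projector {
    void project() {}
};

class ProjectionCache
{
    std::unique_ptr<Projector> m_proj;

public:
    // Called from the regrid hook: invalidate the cached instance so the
    // next projection rebuilds it (and hypre redoes its setup only once).
    void on_regrid() { m_proj.reset(); }

    // Called every predictor/corrector step: rebuild only when needed.
    void project()
    {
        if (!m_proj) { m_proj = std::make_unique<Projector>(); }
        m_proj->project();
    }
};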

Fix ABL boundary planes inflow when using multiple levels

If more than one level is present, reading inflow boundary conditions from higher levels leads to spurious velocities at the coarse-fine interface (see image). Disabling the velocity solve does not fix the issue. Disabling the nodal projection does fix the issue. It is unclear where this issue comes from. The initial thought was this came from ghost cells not being filled correctly at the fine level. This may not be the case... Currently the code aborts if the user tries to use multiple levels for inflow.

[Screenshot (2020-07-13): spurious velocities at the coarse-fine interface]

Boundary input file feature on GPU crashing

I am trying to run the regression test case abl_bndry_input on GPUs on Eagle. One key difference is that I added two levels of refinement far from the boundary through static refinement. As soon as the code initiates a refinement, it crashes.
I am running the main branch.

 AMR-Wind version :: 319ad956
 AMR-Wind Git SHA :: 319ad956f0f85fbe2c41fbec97b69f7b63a190c9
 AMReX version    :: 21.05-20-gfb0c16e34b93

Following is the error message that I see: NetCDF complains "No group found" after running a few time steps.

Regrid mesh ... time elapsed = 0.003602950135
Grid summary:
  Level 0   8 grids  110592 cells  100 % of domain
            smallest grid: 16 x 16 x 16  biggest grid: 32 x 32 x 32
  Level 1   8 grids  64000 cells  7.233796296 % of domain
            smallest grid: 16 x 16 x 16  biggest grid: 24 x 24 x 24
  Level 2   8 grids  64000 cells  0.904224537 % of domain
            smallest grid: 16 x 16 x 16  biggest grid: 24 x 24 x 24

For godunov_type select between plm, ppm, ppm_nolim, weno_js, and weno_z: it defaults to ppm
For godunov_type select between plm, ppm, ppm_nolim, weno_js, and weno_z: it defaults to ppm
Step: 7 dt: 0.4 Time: 2.9 to 3.3
CFL: 0.768155 (conv: 0.768008 diff: 0 src: 0.0106265 )

NetCDF: No group found.

terminate called after throwing an instance of 'std::runtime_error'
  what():  Encountered NetCDF error; aborting
MPT ERROR: Rank 0(g:0) received signal SIGABRT/SIGIOT(6).
	Process ID: 8057, Host: r104u37, Program: /lustre/eaglefs/scratch/syellapa/Wind/WRF/BplaneTest/amr-wind/build/amr_wind
	MPT Version: HPE MPT 2.22  03/31/20 16:12:29

MPT: --------stack traceback-------
MPT: Attaching to program: /proc/8057/exe, process 8057
MPT: [New LWP 8102]
MPT: [New LWP 8078]
MPT: [New LWP 8077]
MPT: [Thread debugging using libthread_db enabled]

On Summit the error shows up as:

what():  GPU last error detected in file /gpfs/alpine/cfd142/scratch/syellapa/WRF/amr-wind/submods/amrex/Src/Base/AMReX_GpuLaunchFunctsG.H line 1000: misaligned address
[f12n08:40078] *** Process received signal ***

@marchdf @sayerhs @jrood-nrel : Have you seen this kind of error before? Can you help me fix this issue?

Thanks

Parallel run of AMR-Wind with OpenFAST/ALM

I'm running an OpenFAST simulation and hitting an error when I use a specific number of cores.

The case is basically the same as @tonyinme's nrel5mw example, with a different turbine model in place (see model here).

It crashes when I use two ranks, but runs fine if I use any other number of cores (1, 3, 4, ...). The error message I get is:

Step: 2 dt: 0.1 Time: 0.1 to 0.2
CFL: 0 (conv: 0 diff: 0 src: 0 )
terminate called after throwing an instance of 'std::runtime_error'
  what():  ParticleContainer::locateParticle(): invalid particle.

Interestingly, it works fine on 2 cores if I change the mesh count, or remove any of the refinement regions. Also, the nrel5mw example works fine on any number of cores.

The input file and openfast model can be downloaded here:
DebugALM.tar.gz
And this is the SHA of the AMR-Wind code I'm using (basically Tony's branch):

==============================================================================
                AMR-Wind (https://github.com/exawind/amr-wind)

  AMR-Wind version :: 08fd758-DIRTY
  AMR-Wind Git SHA :: 08fd7587d9f6eb109c88091bb9e93dde550767e1-DIRTY
  AMReX version    :: 21.04

  Exec. time       :: Thu May 27 13:37:31 2021
  Build time       :: May  3 2021 19:33:58
  C++ compiler     :: GNU 7.2.0

  MPI              :: ON    (Num. ranks = 2)
  GPU              :: OFF
  OpenMP           :: OFF

  Enabled third-party libraries: 
    NetCDF    4.7.3
    OpenFAST  

I will continue debugging the case and let you know if I find anything else going on.

Lawrence

Reynolds Stress Averaging and Actuator Lines documentation

  • Documentation is lagging for Reynolds stress averaging and actuator lines.
  • There is some complicated prefixing that happens for actuator lines, and it needs to be carefully explained. For example, see F2 overriding num_points below (F2 gets 101 points while F1 falls back to the FlatPlateLine-level default of 21):
Actuator.type = FlatPlateLine
Actuator.FlatPlateLine.num_points = 21
Actuator.FlatPlateLine.epsilon = 3.0 3.0 3.0
Actuator.FlatPlateLine.pitch = 4.0
Actuator.F1.start = 0.0 -4.0 0.0
Actuator.F1.end = 0.0 4.0 0.0
Actuator.F1.output_frequency = 10
Actuator.F2.start = 1.0 -4.0 0.0
Actuator.F2.end = 1.0 4.0 0.0
Actuator.F2.output_frequency = 20
Actuator.F2.num_points = 101

Hypre tests failing on CUDA builds

MPT ERROR: Rank 1(g:1) received signal SIGSEGV(11).
	Process ID: 23106, Host: r104u33, Program: /lustre/eaglefs/projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/build/amr_wind
	MPT Version: HPE MPT 2.22  03/31/20 16:12:29

MPT: --------stack traceback-------
srun: error: r104u33: task 0: Segmentation fault (core dumped)
srun: Terminating job step 6658342.74
MPT: Attaching to program: /proc/23106/exe, process 23106
MPT: [New LWP 23139]
MPT: [New LWP 23136]
MPT: [New LWP 23134]
MPT: [Thread debugging using libthread_db enabled]
MPT: Using host libthread_db library "/lib64/libthread_db.so.1".
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: 0x00002ab9b1d1b199 in waitpid () from /lib64/libpthread.so.0
MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64 libibverbs-50mlnx1-1.49017.x86_64 libnl3-3.2.28-4.el7.x86_64 nvidia-driver-latest-cuda-libs-440.33.01-1.el7.x86_64
MPT: (gdb) #0  0x00002ab9b1d1b199 in waitpid () from /lib64/libpthread.so.0
MPT: #1  0x00002ab9b2293c96 in mpi_sgi_system (
MPT: #2  MPI_SGI_stacktraceback (
MPT:     header=header@entry=0x7ffd0b7abf90 "MPT ERROR: Rank 1(g:1) received signal SIGSEGV(11).\n\tProcess ID: 23106, Host: r104u33, Program: /lustre/eaglefs/projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/build/amr_wind\n\tMPT Ver"...) at sig.c:340
MPT: #3  0x00002ab9b2293e8f in first_arriver_handler (signo=signo@entry=11, 
MPT:     stack_trace_sem=stack_trace_sem@entry=0x2ab9c27e0080) at sig.c:489
MPT: #4  0x00002ab9b2294123 in slave_sig_handler (signo=11, 
MPT:     siginfo=<optimized out>, extra=<optimized out>) at sig.c:565
MPT: #5  <signal handler called>
MPT: #6  0x00002ab9b2720892 in hypre_BoomerAMGCoarsenRuge ()
MPT:    from /projects/hfm/exawind/nalu-wind-testing/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/hypre-develop-zm5emir6d2qclzdsf77bluh5plcqddmy/lib/libHYPRE-2.20.0.so
MPT: #7  0x00002ab9b2722a0a in hypre_BoomerAMGCoarsenFalgout ()
MPT:    from /projects/hfm/exawind/nalu-wind-testing/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/hypre-develop-zm5emir6d2qclzdsf77bluh5plcqddmy/lib/libHYPRE-2.20.0.so
MPT: #8  0x00002ab9b27045fb in hypre_BoomerAMGSetup ()
MPT:    from /projects/hfm/exawind/nalu-wind-testing/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/hypre-develop-zm5emir6d2qclzdsf77bluh5plcqddmy/lib/libHYPRE-2.20.0.so
MPT: #9  0x00002ab9b26d0155 in hypre_GMRESSetup ()
MPT:    from /projects/hfm/exawind/nalu-wind-testing/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/hypre-develop-zm5emir6d2qclzdsf77bluh5plcqddmy/lib/libHYPRE-2.20.0.so
MPT: #10 0x0000000000be2aa9 in amrex::HypreIJIface::solve(double, double, int) ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/submods/amrex/Src/Extern/HYPRE/AMReX_HypreIJIface.cpp:122
MPT: #11 0x0000000000bb5184 in amrex::HypreABecLap3::solve (this=0x2651a9a0, soln=
MPT:     ..., rhs=..., rel_tol=9.9999999999999998e-13, 
MPT:     abs_tol=9.9999999999999998e-17, max_iter=200, bndry=..., 
MPT:     max_bndry_order=<optimized out>)
MPT:     at /nopt/nrel/ecom/hpacf/compilers/2020-07/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/gcc-8.4.0-2a3vha6hlw4xc5ja3jyhr7huzaxuw2kt/include/c++/8.4.0/bits/unique_ptr.h:345
MPT: #12 0x00000000009f48f9 in amrex::MLMG::bottomSolveWithHypre(amrex::MultiFab&, amrex::MultiFab const&) ()
MPT:     at /nopt/nrel/ecom/hpacf/compilers/2020-07/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/gcc-8.4.0-2a3vha6hlw4xc5ja3jyhr7huzaxuw2kt/include/c++/8.4.0/bits/unique_ptr.h:345
MPT: #13 0x00000000009f6d36 in amrex::MLMG::actualBottomSolve() ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/submods/amrex/Src/LinearSolvers/MLMG/AMReX_MLMG.cpp:975
MPT: #14 0x00000000009f7b21 in amrex::MLMG::mgVcycle(int, int) ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/submods/amrex/Src/LinearSolvers/MLMG/AMReX_MLMG.cpp:470
MPT: #15 0x00000000009f9a94 in amrex::MLMG::oneIter (this=this@entry=0x26681190, 
MPT:     iter=iter@entry=0)
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/submods/amrex/Src/LinearSolvers/MLMG/AMReX_MLMG.cpp:261
slurmstepd: error: *** STEP 6658342.74 ON r104u33 CANCELLED AT 2021-04-17T01:28:06 ***
MPT: #16 0x00000000009f9d3c in amrex::MLMG::solve(amrex::Vector<amrex::MultiFab*, std::allocator<amrex::MultiFab*> > const&, amrex::Vector<amrex::MultiFab const*, std::allocator<amrex::MultiFab const*> > const&, double, double, char const*)
MPT:     ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/submods/amrex/Src/LinearSolvers/MLMG/AMReX_MLMG.cpp:128
MPT: #17 0x0000000000b753d2 in amrex::MacProjector::project(double, double) ()
MPT:     at /nopt/nrel/ecom/hpacf/compilers/2020-07/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/gcc-8.4.0-2a3vha6hlw4xc5ja3jyhr7huzaxuw2kt/include/c++/8.4.0/bits/unique_ptr.h:345
MPT: #18 0x0000000000677a83 in amr_wind::pde::MacProjOp::operator()(amr_wind::FieldState, double) ()
MPT:     at /nopt/nrel/ecom/hpacf/compilers/2020-07/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/gcc-8.4.0-2a3vha6hlw4xc5ja3jyhr7huzaxuw2kt/include/c++/8.4.0/bits/unique_ptr.h:345
MPT: #19 0x0000000000698b7a in amr_wind::pde::AdvectionOp<amr_wind::pde::ICNS, amr_wind::fvm::Godunov, void>::operator() (this=0x273452f0, fstate=amr_wind::N, 
MPT:     dt=0.00052855551668275441)
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/amr-wind/equation_systems/icns/icns_advection.H:236
MPT: #20 0x00000000004424ab in incflo::ApplyPredictor(bool) ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/amr-wind/equation_systems/PDEBase.H:119
MPT: #21 0x000000000044293a in incflo::advance (this=this@entry=0x7ffd0b7af720)
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/amr-wind/incflo_advance.cpp:52
MPT: #22 0x0000000000446670 in incflo::Evolve() ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/amr-wind/incflo.cpp:231
MPT: #23 0x00000000004348f5 in main ()
MPT:     at /projects/hfm/exawind/nalu-wind-testing/amr-wind-testing/amr-wind/amr-wind/main.cpp:70
MPT: #24 0x00002ab9b4101505 in __libc_start_main () from /lib64/libc.so.6
MPT: #25 0x0000000000440ddc in _start ()
MPT:     at /nopt/nrel/ecom/hpacf/compilers/2020-07/spack/opt/spack/linux-centos7-skylake_avx512/gcc-8.4.0/gcc-8.4.0-2a3vha6hlw4xc5ja3jyhr7huzaxuw2kt/include/c++/8.4.0/bits/char_traits.h:352
MPT: (gdb) A debugging session is active.
MPT: 
MPT: 	Inferior 1 [process 23106] will be detached.
MPT: 
MPT: Quit anyway? (y or n) [answered Y; input not from terminal]
MPT: Quitting: Couldn't write debug register: No such process.
srun: error: r104u33: task 1: Terminated
srun: Force Terminated job step 6658342.74

PLM Godunov

Merge from incflo? "Make sure to test on hoextrap as well as ext_dir when deciding to use a special stencil that uses the boundary value as living at the face not a cell center"

AMReX-Fluids/incflo@2427fd9

GPU compile fails with MASA

==> Loading options from /projects/hfm/shreyas/exawind/exawind-config-gcc.sh
==> Loading options from /home/mhenryde/exawind/source/amr-wind/build/exawind-config.sh
==> Using modules: /nopt/nrel/ecom/hpacf/software/2019-10-08/spack/share/spack/modules/linux-centos7-skylake_avx512/gcc-7.4.0
==> gcc/7.4.0 = /nopt/nrel/ecom/hpacf/compilers/2019-05-08/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.4.0-srw2azby5tn7wozbchryvj5ak3zlfz3r
==> git = /nopt/nrel/ecom/hpacf/utilities/2019-05-08/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/git-2.21.0-g7ulcyu3qf7nc6ul7jm476b7vlaw6yg4
==> binutils = /nopt/nrel/ecom/hpacf/software/2019-10-08/spack/opt/spack/linux-centos7-skylake_avx512/gcc-7.4.0/binutils-2.32-mbon3f7rpfpadv2sxorj2nsl7o4g7kep
==> mpich/3.3.1 = /nopt/nrel/ecom/hpacf/software/2019-10-08/spack/opt/spack/linux-centos7-skylake_avx512/gcc-7.4.0/mpich-3.3.1-jqzb5leuy6stqk3q5kdudrvbf73xabjc
==> cmake = /nopt/nrel/ecom/hpacf/software/2019-10-08/spack/opt/spack/linux-centos7-skylake_avx512/gcc-7.4.0/cmake-3.15.5-syxvvcmizf2ivsmajjz2gpdjknwok7dq
==> netlib-lapack/3.8.0 = /nopt/nrel/ecom/hpacf/software/2019-10-08/spack/opt/spack/linux-centos7-skylake_avx512/gcc-7.4.0/netlib-lapack-3.8.0-y5g3xkzo47ru63gwwemfqe7mottx5kf4
==> cuda/10.0.130 = /nopt/nrel/apps/cuda/10.0.130
==> Activated Eagle CUDA programming environment
==> No user environment actions defined
==> Loading dependencies for amr-wind ...
+ nice -n10  ionice -c3  /usr/bin/gmake -j 1
[ 51%] Built target amrex
[ 51%] Building CUDA object CMakeFiles/amrwind.dir/src/mms/MMS.cpp.o
/home/mhenryde/exawind/source/amr-wind/src/mms/MMS.cpp:6:10: fatal error: masa.h: No such file or directory
 #include "masa.h"
          ^~~~~~~~
compilation terminated.
gmake[2]: *** [CMakeFiles/amrwind.dir/src/mms/MMS.cpp.o] Error 1
gmake[1]: *** [CMakeFiles/amrwind.dir/all] Error 2
gmake: *** [all] Error 2

GeometryRefinement level specification

I think there's a bug with the GeometryRefinement level specification. If I try something like this:

tagging.labels                           = box1 
tagging.box1.type                        = GeometryRefinement
tagging.box1.shapes                      = box1
tagging.box1.level                       = 0
tagging.box1.box1.type                   = box
tagging.box1.box1.origin                 = -1160.0 -580.0 -580.0
tagging.box1.box1.xaxis                  =  3480.0  0.0    0.0
tagging.box1.box1.yaxis                  =  0.0     1160.0 0.0
tagging.box1.box1.zaxis                  =  0.0     0.0    1160.0

The refinement in box1 happens at all levels and I get a larger mesh than I expected. However, min_level and max_level specifications seem to be working fine.

Looking at GeometryRefinement.cpp:

// If the user has requested a particular level then check for it and exit
// early
if ((m_set_level > 1) && (level != m_set_level)) return;

Should that be m_set_level > -1 instead?
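
If so, the corrected guard would read as follows (assuming -1 is the default sentinel for "no specific level requested"):

// If the user has requested a particular level then check for it and exit
// early; the assumed default of -1 means no specific level was requested.
if ((m_set_level > -1) && (level != m_set_level)) return;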

Lawrence

Inflow/outflow mesh refinement

I am working on creating an example for inflow/outflow simulations with an openfast actuator line turbine and grid refinement. We currently need some workarounds/hacks in order to get this working.

This workaround was done by following the recommendations from @gantech.

The steps to make it work are as follows:

  1. Modify the code in ABLBoundaryPlane.cpp by setting nlevels=1. If this is not done, the boundary data does not work when there is more than one refinement level inside the domain.

  2. Simulation 1: Run ABL precursor simulation with boundary data sampling. (This works without a problem)

  3. Simulation 2: Create the refined mesh by running a simulation starting from Simulation 1 for as many time-steps as refinement levels needed, with time.regrid_interval = 1 set in the input file. For example, if we want 2 levels of refinement for the turbine simulation, Simulation 2 is run for 2 time-steps in order to create the new refined mesh.

  4. Simulation 3: Run a new simulation with the inflow/outflow starting from the last time-step of Simulation 2, which contains the refined mesh.

Steps 1 and 3 are hacks that we need in order to make it work; ideally, we would not need those steps.
I am looking for your feedback on how to improve this.

One option is to hardcode nlevels=1 in ABLBoundaryPlane.cpp.
Ideally, Simulation 2 would not be needed and Simulation 3 would automatically refine the mesh in the first time-step.

Please let me know what ideas you have to improve this.

@michaeljbrazell @sayerhs @gantech @marchdf @gdeskos @shashankNREL

Error running on SGI

I just built the latest snapshot on an SGI but receive "amrex::Abort::SIGABRT" on startup. The code also generates several Backtrace files. Details from the current build are listed below. I am also including the details of one Backtrace as well as the input for the solver (I have run this solver input through a slightly older build on a Cray w/o issue).

Details about solver version/build:

==============================================================================
                AMR-Wind (https://github.com/exawind/amr-wind)

  AMR-Wind Git SHA :: 01187b82a07e
  AMReX version    :: 20.09-80-g61734d3da08b ( 20.09-80-g61734d3da08b )

  Exec. date       :: Mon Oct  5 19:55:37 2020
  Build date       :: Oct  5 2020 19:14:13
  C++ compiler     :: GNU 7.3.0

  MPI              :: ON    (Num. ranks = 96)
  GPU              :: OFF
  OpenMP           :: OFF

           This software is released under the BSD 3-clause license.           
 See https://github.com/Exawind/amr-wind/blob/development/LICENSE for details. 
------------------------------------------------------------------------------

Output from Backtrace.0

=== If no file names and line numbers are shown below, one can run
            addr2line -Cpfie my_exefile my_line_address
    to convert `my_line_address` (e.g., 0x4a6b) into file name and line number.
    Or one can use amrex/Tools/Backtrace/parse_bt.py.

=== Please note that the line number reported by addr2line may not be accurate.
    One can use
            readelf -wl my_exefile | grep my_line_address'
    to find out the offset for that line.

 0: ~/amr-wind/bin/amr_wind() [0x694501]
    amrex::BLBackTrace::print_backtrace_info(_IO_FILE*)
??:0

 1: ~/amr-wind/bin/amr_wind() [0x69621a]
    amrex::BLBackTrace::handler(int)
??:0

 2: ~/amr-wind/bin/amr_wind() [0x5b83f4]
    void amrex::(anonymous namespace)::(anonymous namespace)::sgetval<bool>(std::__cxx11::list<amrex::ParmParse::PP_entry, std::allocator<amrex::ParmParse::PP_entry> > const&, std::__cxx11::basic_strin
g<char, std::char_traits<char>, std::allocator<char> > const&, bool&, int, int) [clone .part.135]
??:0

 3: ~/amr-wind/bin/amr_wind() [0x5ba7ad]
    amrex::ParmParse::get(char const*, double&, int) const
??:0

 4: ~/amr-wind/bin/amr_wind() [0x51ac63]
    amr_wind::ABLWallFunction::ABLWallFunction(amr_wind::CFDSim const&)
??:0

 5: ~/amr-wind/bin/amr_wind() [0x508a84]
    amr_wind::ABL::ABL(amr_wind::CFDSim&)
??:0

 6: ~/amr-wind/bin/amr_wind() [0x50acc2]
    amr_wind::Factory<amr_wind::Physics, amr_wind::CFDSim&>::Register<amr_wind::ABL>::add_sub_type()::{lambda(amr_wind::CFDSim&)#1}::_FUN(amr_wind::CFDSim&)
??:0

 7: ~/amr-wind/bin/amr_wind() [0x42ea4e]
    amr_wind::CFDSim::init_physics()
??:0

 8: ~/amr-wind/bin/amr_wind() [0x424728]
    incflo::init_physics_and_pde()
??:0

 9: ~/amr-wind/bin/amr_wind() [0x4248f0]
    incflo::incflo()
??:0

10: ~/amr-wind/bin/amr_wind() [0x4100fc]
    main
??:0

11: /lib64/libc.so.6(__libc_start_main+0xf5) [0x2aaaabdc7555]
    __libc_start_main
??:0

12: ~/amr-wind/bin/amr_wind() [0x41675b]
    _start
??:0

Solver input

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#            SIMULATION STOP            #
#.......................................#
time.stop_time               =   10000.0     # Max (simulated) time to evolve
time.max_step                =   -1          # Max number of time steps

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#         TIME STEP COMPUTATION         #
#.......................................#
time.fixed_dt         =   -0.5        # Use this constant dt if > 0
time.cfl              =   0.95         # CFL factor

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#            INPUT AND OUTPUT           #
#.......................................#
io.KE_int = 1
io.line_plot_int = 1
time.plot_interval            =   5000       # Steps between plot files
time.checkpoint_interval           =  -1000       # Steps between checkpoint files
amr.plt_tracer = 1

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#               PHYSICS                 #
#.......................................#
incflo.gravity          =   0.  0. -9.81  # Gravitational force (3D)
incflo.density             = 1.0          # Reference density 
#incflo.constant_density = 0

incflo.use_godunov = 1
incflo.diffusion_type = 1
transport.viscosity = 1.0e-5
transport.laminar_prandtl = 0.7
transport.turbulent_prandtl = 0.3333
turbulence.model = Smagorinsky
Smagorinsky_coeffs.Cs = 0.135

incflo.physics = ABL
ICNS.source_terms = BoussinesqBuoyancy CoriolisForcing ABLForcing
BoussinesqBuoyancy.reference_temperature = 300.0
CoriolisForcing.latitude = 41.3
ABLForcing.abl_forcing_height = 90

incflo.velocity = 6.128355544951824 5.142300877492314 0.0

ABL.temperature_heights = 650.0 750.0 1000.0
ABL.temperature_values = 300.0 308.0 308.75

ABL.kappa = .41
ABL.surface_roughness_z0 = 0.15

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#        ADAPTIVE MESH REFINEMENT       #
#.......................................#
amr.n_cell              = 200 200 200    # Grid cells at coarsest AMR level
amr.max_level           = 0           # Max AMR level in hierarchy 
#amr.max_level           = 1           # Max AMR level in hierarchy 
#time.regrid_interval    = 50
#incflo.gradrhoerr       = 0.1

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#              GEOMETRY                 #
#.......................................#
geometry.prob_lo        =   0.       0.     0.  # Lo corner coordinates
geometry.prob_hi        =   1000.  1000.  1000.  # Hi corner coordinates
geometry.is_periodic    =   1   1   0   # Periodicity x y z (0/1)

# Boundary conditions
zlo.type =   "wall_model"
zlo.temperature_type = "fixed_gradient"
zlo.temperature = 0.0

zhi.type =   "slip_wall"
zhi.temperature_type = "fixed_gradient"
zhi.temperature = 0.003 # tracer is used to specify potential temperature gradient

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#              VERBOSITY                #
#.......................................#
incflo.verbose          =   0          # incflo_level

#¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨¨#
#              DEBUGGING                #
#.......................................#
amrex.fpe_trap_invalid  =   0           # Trap NaNs

Boundary planes for `xhi`/`yhi` planes

While we're discussing making changes to the boundary planes I/O, how difficult would it be to also capture/reapply data on the xhi/yhi planes:

for (const auto& plane : m_planes) {
    amrex::Vector<std::string> valid_planes{"xlo", "ylo"};

    // For xlo/ylo combination (currently the only valid
    // combination), this is perp[0] (FIXME for future)
    const int pp = perp[0];

Obviously this is less urgent than fixing the current issues, but for one of the AWAKEN cases we may have a wind direction that will make use of that (rotating the entire farm and wind direction to fit the xlo/ylo planes is possible, but not the desired approach).

Thanks,
Lawrence

Dynamic refinement question

When using the dynamic refinement through the tagging infrastructure, is it possible for a cell that was once tagged for refinement to be coarsened again when the refinement criterion is no longer exceeded? Or does a cell remain refined once it has ever been tagged for refinement?

Reading planar sampling output

Are there any suggestions on how best to view planar sampling output? I have no issues reading volume outputs in Paraview/Visit, but the planar sampling outputs don't load natively in either viz software. I have been able to modify the directory structure of the sampling results (basically creating a "particles" subdirectory and moving the Header and Level_* directories into it) and get those results to load as AMReX/BoxLib particle data in Paraview. But the visualization of particle data in Paraview seems to apply a spherical Gaussian average to the visualized results (and washes out a lot of the flow structure). I see some images from the Exawind example cases for AMR-Wind (Exawind/wind-energy/GABLS/AmrWindKsgs_3p125/) that show what I assume is the planar sampled data from those simulations. I would like to do something similar if possible.
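
For reference, a minimal sketch (C++17) of the directory shuffle described above; the plotfile path is hypothetical:

#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Move Header and Level_* into a "particles" subdirectory so Paraview
// recognizes the sampling output as AMReX/BoxLib particle data.
int main()
{
    const fs::path plt = "post_processing/sampling00000"; // hypothetical path
    fs::create_directories(plt / "particles");

    std::vector<fs::path> to_move;
    for (const auto& entry : fs::directory_iterator(plt)) {
        const std::string name = entry.path().filename().string();
        if (name == "Header" || name.rfind("Level_", 0) == 0) {
            to_move.push_back(entry.path());
        }
    }
    for (const auto& p : to_move) {
        fs::rename(p, plt / "particles" / p.filename());
    }
    return 0;
}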

Any suggestions would be greatly appreciated.

Specify initial velocity as a profile

I need to specify initial velocity as a profile (rather than as a value that is constant in height) for a simulation I would like to perform, similar to the existing capability to supply a potential temperature profile. I see this listed as a "TODO" in wind_energy/ABLFieldInit.cpp. Are there any plans to move forward with adding this capability?

Processing line sampling data

Hello, I am trying to generate and analyze data along a vertical line at a point, and I turned on the LineSampler option. After the simulation finishes, in /post_processing/sampling0xxxx, I notice different types of binary files (DATA_0xxxx and Particle_H). I am unsure which files contain the line sampling data; I cannot differentiate the two points/labels I specified (ls1 and ls2 below); and I do not know how to use the binary files efficiently. Please kindly give me some suggestions.

The following is the setting in my abl file:

incflo.post_processing = sampling
sampling.output_frequency = 3000
sampling.output_format = "native"
sampling.labels = ls1 ls2
sampling.fields = velocity temperature

sampling.ls1.type = LineSampler
sampling.ls1.num_points = 200
sampling.ls1.start = 500.0 500.0 0.0
sampling.ls1.end = 500.0 500.0 1000.0

sampling.ls2.type = LineSampler
sampling.ls2.num_points = 200
sampling.ls2.start = 2800.0 2800.0 0.0
sampling.ls2.end = 2800.0 2800.0 1000.0

Thank you in advance.

Joseph

Fix derived field computations

Investigate compute_strainrate, compute_gradient, and compute_laplacian

  • Eliminate grownbox?
  • Eliminate grow?
  • Check multiple levels

AMR-Wind fails to compile with HIP

Hello all,
as shown by the CI, PR #413 doesn't compile correctly with HIP.
Wouldn't it be appropriate for all the member functions (such as operator[]) of the Slice struct to have the AMREX_GPU_HOST_DEVICE attribute?

Kinetic energy output feels strange

When running the code, I see

Writing plot file       plt00008 at time 4
Time, Kinetic Energy: 3.5, 32.01153325

It feels strange that one line says we are at time = 4 while the next says time = 3.5.

I put up a PR to address this.

Boundary plane input on GPUs

I am trying to run the abl_bndry_input regression test case on GPUs (Eagle and Summit). I first ran the abl_bndry_output test case to generate all the relevant files for the abl_bndry_input regression test case. The case works without any problem on CPUs on Eagle. The same case with a GPU build on Eagle (and Summit) gives the wrong initial residual for the MAC projection and crashes in the MAC projection step.
This issue can be reproduced using the abl_bndry_input regression test on GPU builds.
@marchdf : Can you verify that this case worked on GPUs in the past? Is this something that you could look into?
@jrood-nrel : Can we set up a suite of regression tests using GPUs on Eagle?

Nacelle force epsilon

It looks like the minimum value of the nacelle epsilon is set to 1 here.

Should this be set to eps_min instead?
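
For illustration, a minimal sketch of the suspected fix (the function and parameter names are hypothetical):

#include <algorithm>

// Floor the nacelle epsilon at the configured minimum instead of a
// hard-coded 1.0.
double nacelle_epsilon(double eps, double eps_min)
{
    return std::max(eps_min, eps); // currently effectively std::max(1.0, eps)
}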

Geostrophic forcing term

Pull request #146 introduced a geostrophic forcing term to the codebase. This term includes a multiplication by density before accumulating into the source term; this should be fixed.

error: 'Gpu' has not been declared

I am trying to build the latest amr-wind on a Cray XC50 with GCC 7.2.0 and receive an error when it tries to compile FieldPlaneAveraging.cpp. The current git repo hash, cmake statement, and compile errors are in the attached text file.

amr_wind_build_error.txt

slowdown after adding debugging flags

Hello,

Some time back I installed amr-wind successfully using the following commands:

git clone --recursive https://github.com/exawind/amr-wind.git
cmake -Bbuild-DPCPP \
    -DCMAKE_CXX_COMPILER_ID="Clang" \
    -DCMAKE_CXX_COMPILER_VERSION=12.0 \
    -DCMAKE_CXX_STANDARD_COMPUTED_DEFAULT=17 \
    -DCMAKE_CXX_COMPILER=$(which dpcpp) \
    -DCMAKE_C_COMPILER=$(which clang) \
    -DAMR_WIND_ENABLE_MPI=OFF \
    -DAMR_WIND_ENABLE_DPCPP=ON .
  cmake --build build-DPCPP -- -j $(nproc)

And the command I use to run the application is:
./amr_wind ../../test/test_files/abl_godunov/abl_godunov.i time.max_step=20

I had to rebuild the repository since it was missing debugging flags. In the new build, I added two more options to cmake to make this work:

cmake -Bbuild-DPCPP \
    ...
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DAMReX_DPCPP_SPLIT_KERNEL=FALSE \
    ...

And when I tried to run amr_wind again, the execution time increased drastically: what took ~20 seconds before now takes ~10 minutes. I guess that's a given with the new cmake options. Could someone suggest a better configuration, one that completes in 20-30 seconds at most? This would be helpful since I need to run the application continuously.

undeclared identifier 'Gpu'; did you mean 'amrex::Gpu'?

I am getting this on the latest branch (clang build):

[30/245] Building CXX object CMakeFiles/amrwind.dir/src/incflo_field_repo.cpp.o
FAILED: CMakeFiles/amrwind.dir/src/incflo_field_repo.cpp.o
/Library/Developer/CommandLineTools/usr/bin/c++  -DAMREX_Darwin -DAMREX_FORT_USE_UNDERSCORE -DAMREX_GIT_VERSION=\"20.04.0\" -DAMREX_SPACEDIM=3 -DAMREX_USE_MPI -DAMREX_USE_OMP -DBL_Darwin -DBL_FORT_USE_UNDERSCORE -DBL_SPACEDIM=3 -DBL_USE_MPI -DBL_USE_OMP -I../src -I../src/core -I../src/boundary_conditions -I../src/convection -I../src/derive -I../src/diffusion -I../src/setup -I../src/utilities -I../src/utilities/tagging -I../src/prob -I../src/wind_energy -I../src/equation_systems -I../src/transport_models -I../src/turbulence -I../submods/amrex/Tools/C_scripts -isystem submods/amrex/mod_files -isystem ../submods/amrex/Src/Base -isystem ../submods/amrex/Src/Boundary -isystem ../submods/amrex/Src/AmrCore -isystem ../submods/amrex/Src/Amr -isystem ../submods/amrex/Src/LinearSolvers/MLMG -isystem ../submods/amrex/Src/LinearSolvers/Projections -isystem /usr/local/include -isystem /Users/mhenryde/exawind/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/mpich-3.3.1-dn34cqtj7tlnxzwamooud6rxbdbkro42/include -O3 -DNDEBUG -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk   -Wno-pass-failed -Wall -Wextra -pedantic -Xclang -fopenmp -std=c++14 -MD -MT CMakeFiles/amrwind.dir/src/incflo_field_repo.cpp.o -MF CMakeFiles/amrwind.dir/src/incflo_field_repo.cpp.o.d -o CMakeFiles/amrwind.dir/src/incflo_field_repo.cpp.o -c ../src/incflo_field_repo.cpp
In file included from ../src/incflo_field_repo.cpp:4:
In file included from ../src/equation_systems/PDE.H:7:
../src/equation_systems/PDEOps.H:55:26: error: use of undeclared identifier 'Gpu'; did you mean 'amrex::Gpu'?
#pragma omp parallel if (Gpu::notInLaunchRegion())
                         ^~~
                         amrex::Gpu
../submods/amrex/Src/Base/AMReX_Gpu.H:33:15: note: 'amrex::Gpu' declared here
    namespace Gpu {
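
The note in the diagnostic suggests the fix: qualify the namespace in the OpenMP pragma, e.g.

#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())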

Body Force

  • Add a body force term as a momentum source (see the sketch after this list)
  • Add an input option for it
  • This eliminates the need for set_background_pressure()
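
A minimal sketch, assuming the force components come from a new input-file option (all names here are hypothetical) and following the accumulation pattern of AMReX kernels:

#include <AMReX_Array.H>
#include <AMReX_Array4.H>
#include <AMReX_Box.H>
#include <AMReX_Gpu.H>

// Accumulate a constant body force into the momentum source term.
void add_body_force(
    const amrex::Box& bx,
    const amrex::Array4<amrex::Real>& src,
    const amrex::GpuArray<amrex::Real, 3> force)
{
    amrex::ParallelFor(
        bx, [=] AMREX_GPU_DEVICE(int i, int j, int k) noexcept {
            src(i, j, k, 0) += force[0];
            src(i, j, k, 1) += force[1];
            src(i, j, k, 2) += force[2];
        });
}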

amr-wind build failure

Hello,

I tried to build amr-wind using the instructions given at : https://exawind.github.io/amr-wind/user/build.html

Towards the end of the build, I get the following error. Looking at the error, it isn't clear to me what is going wrong.

$ make
...
[100%] Building CXX object CMakeFiles/amr_wind_unit_tests.dir/unit_tests/fvm/test_fvm_ops.cpp.o
[100%] Linking CXX executable amr_wind_unit_tests
/lib/../lib64/crti.o: in function `_init':
(.init+0x7): relocation truncated to fit: R_X86_64_GOTPCREL against undefined symbol `__gmon_start__'
/tmp/incflo_advance-bd9c62.o: in function `_GLOBAL__sub_I_incflo_advance.cpp':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/iostream:74:(.text.startup+0x7): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/tmp/incflo_advance-bd9c62.o: in function `_GLOBAL__sub_I_incflo_advance.cpp':
/home/aarontcopal2/code/dpcpp/amr-wind/amr-wind/incflo_advance.cpp:(.text.startup+0x16): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against symbol `std::ios_base::Init::~Init()@@GLIBCXX_3.4' defined in .text section in /soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../lib64/libstdc++.so
/home/aarontcopal2/code/dpcpp/amr-wind/amr-wind/incflo_advance.cpp:(.text.startup+0x1d): relocation truncated to fit: R_X86_64_PC32 against symbol `__dso_handle' defined in .data.rel.local section in /soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/crtbeginS.o
/tmp/incflo_advance-bd9c62.o: in function `_Alloc_hider':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/basic_string.h:157:(.text.startup+0x2f): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/basic_string.h:157:(.text.startup+0x36): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/basic_string.h:157:(.text.startup+0x3d): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/tmp/incflo_advance-bd9c62.o: in function `std::char_traits<char>::copy(char*, char const*, unsigned long)':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/char_traits.h:365:(.text.startup+0x43): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/char_traits.h:365:(.text.startup+0x4d): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/tmp/incflo_advance-bd9c62.o: in function `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_length(unsigned long)':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/basic_string.h:183:(.text.startup+0x58): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/tmp/incflo_advance-bd9c62.o: in function `std::char_traits<char>::assign(char&, char const&)':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/char_traits.h:300:(.text.startup+0x62): additional relocation overflows omitted from the output
libamrwind_api.so: PC-relative offset overflow in PLT entry for `_ZN2cl4sycl6detail6OSUtil11alignedFreeEPv'
dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libamrwind_api.so] Error 1
make[1]: *** [CMakeFiles/amrwind_api.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/lib/../lib64/crt1.o:(.eh_frame+0x20): relocation truncated to fit: R_X86_64_PC32 against `.text'
/lib/../lib64/crti.o: in function `_init':
(.init+0x7): relocation truncated to fit: R_X86_64_GOTPCREL against undefined symbol `__gmon_start__'
/tmp/main-f01081.o: in function `main':
/home/aarontcopal2/code/dpcpp/amr-wind/amr-wind/main.cpp:19:(.text+0x143): relocation truncated to fit: R_X86_64_32 against symbol `std::cout@@GLIBCXX_3.4' defined in .dynbss section in /lib/../lib64/crt1.o
/home/aarontcopal2/code/dpcpp/amr-wind/amr-wind/main.cpp:21:(.text+0x17a): relocation truncated to fit: R_X86_64_32 against symbol `std::cout@@GLIBCXX_3.4' defined in .dynbss section in /lib/../lib64/crt1.o
/home/aarontcopal2/code/dpcpp/amr-wind/amr-wind/main.cpp:21:(.text+0x188): relocation truncated to fit: R_X86_64_32S against symbol `std::cerr@@GLIBCXX_3.4' defined in .dynbss section in /lib/../lib64/crt1.o
/tmp/main-f01081.o: in function `__normal_iterator':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/stl_iterator.h:807:(.text+0x1d8): relocation truncated to fit: R_X86_64_PC32 against symbol `amrex::ParallelContext::frames' defined in .bss section in submods/amrex/Src/libamrex.a(AMReX_ParallelContext.cpp.o)
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/stl_iterator.h:807:(.text+0x2cb): relocation truncated to fit: R_X86_64_PC32 against symbol `amrex::ParallelContext::frames' defined in .bss section in submods/amrex/Src/libamrex.a(AMReX_ParallelContext.cpp.o)
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/stl_iterator.h:807:(.text+0x3b2): relocation truncated to fit: R_X86_64_PC32 against symbol `amrex::ParallelContext::frames' defined in .bss section in submods/amrex/Src/libamrex.a(AMReX_ParallelContext.cpp.o)
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/stl_iterator.h:807:(.text+0x475): relocation truncated to fit: R_X86_64_PC32 against symbol `amrex::ParallelContext::frames' defined in .bss section in submods/amrex/Src/libamrex.a(AMReX_ParallelContext.cpp.o)
/tmp/main-f01081.o: in function `main':
/home/aarontcopal2/code/dpcpp/amr-wind/amr-wind/main.cpp:14:(.text+0x54b): relocation truncated to fit: R_X86_64_32 against symbol `std::cout@@GLIBCXX_3.4' defined in .dynbss section in /lib/../lib64/crt1.o
/tmp/main-f01081.o: in function `~Print':
/home/aarontcopal2/code/dpcpp/amr-wind/submods/amrex/Src/Base/AMReX_Print.H:(.text._ZN5amrex5PrintD2Ev[_ZN5amrex5PrintD2Ev]+0xf): additional relocation overflows omitted from the output
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/binutils-2.34-rnwhrdgiqluqiypg5palnxdxviv3mynt/bin/ld: failed to convert GOTPCREL relocation; relink with --no-relax
dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [amr_wind] Error 1
make[1]: *** [CMakeFiles/amr_wind.dir/all] Error 2
/lib/../lib64/crt1.o:(.eh_frame+0x20): relocation truncated to fit: R_X86_64_PC32 against `.text'
/lib/../lib64/crti.o: in function `_init':
(.init+0x7): relocation truncated to fit: R_X86_64_GOTPCREL against undefined symbol `__gmon_start__'
/tmp/test_refinement-215ce8.o: in function `~basic_stringstream':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/sstream:784:(.text+0x752): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against symbol `VTT for std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >@@GLIBCXX_3.4.21' defined in .data.rel.ro section in /soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../lib64/libstdc++.so
/tmp/test_refinement-215ce8.o: in function `~basic_stringbuf':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/sstream.tcc:291:(.text+0x77f): relocation truncated to fit: R_X86_64_32S against symbol `vtable for std::__cxx11::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >@@GLIBCXX_3.4.21' defined in .data.rel.ro section in /lib/../lib64/crt1.o
/tmp/test_refinement-215ce8.o: in function `~basic_streambuf':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/streambuf:205:(.text+0x7a5): relocation truncated to fit: R_X86_64_32S against symbol `vtable for std::basic_streambuf<char, std::char_traits<char> >@@GLIBCXX_3.4' defined in .data.rel.ro section in /lib/../lib64/crt1.o
/tmp/test_refinement-215ce8.o: in function `~basic_stringstream':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/sstream:784:(.text+0xa92): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against symbol `VTT for std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >@@GLIBCXX_3.4.21' defined in .data.rel.ro section in /soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../lib64/libstdc++.so
/tmp/test_refinement-215ce8.o: in function `~basic_stringbuf':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/sstream.tcc:291:(.text+0xabf): relocation truncated to fit: R_X86_64_32S against symbol `vtable for std::__cxx11::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >@@GLIBCXX_3.4.21' defined in .data.rel.ro section in /lib/../lib64/crt1.o
/tmp/test_refinement-215ce8.o: in function `~basic_streambuf':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/streambuf:205:(.text+0xae5): relocation truncated to fit: R_X86_64_32S against symbol `vtable for std::basic_streambuf<char, std::char_traits<char> >@@GLIBCXX_3.4' defined in .data.rel.ro section in /lib/../lib64/crt1.o
/tmp/test_refinement-215ce8.o: in function `~basic_stringstream':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/sstream:784:(.text+0x1bc7): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against symbol `VTT for std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >@@GLIBCXX_3.4.21' defined in .data.rel.ro section in /soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../lib64/libstdc++.so
/tmp/test_refinement-215ce8.o: in function `~basic_stringbuf':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/bits/sstream.tcc:291:(.text+0x1bfa): relocation truncated to fit: R_X86_64_32S against symbol `vtable for std::__cxx11::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >@@GLIBCXX_3.4.21' defined in .data.rel.ro section in /lib/../lib64/crt1.o
/tmp/test_refinement-215ce8.o: in function `~basic_streambuf':
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/gcc-9.3.0-qfmcwfbuvnpn47zxjzfjvodzjl6reerh/lib/gcc/x86_64-pc-linux-gnu/9.3.0/../../../../include/c++/9.3.0/streambuf:205:(.text+0x1c20): additional relocation overflows omitted from the output
/soft/packaging/spack-builds/linux-rhel7-x86_64/gcc-9.3.0/binutils-2.34-rnwhrdgiqluqiypg5palnxdxviv3mynt/bin/ld: failed to convert GOTPCREL relocation; relink with --no-relax
dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [amr_wind_unit_tests] Error 1
make[1]: *** [CMakeFiles/amr_wind_unit_tests.dir/all] Error 2
make: *** [all] Error 2

The cmake command that I used was:
cmake -DAMR_WIND_ENABLE_TESTS:BOOL=ON -DAMR_WIND_USE_INTERNAL_AMREX:BOOL=ON -DAMR_WIND_ENABLE_DPCPP:BOOL=ON -DCMAKE_BUILD_TYPE:STRING=RelWithDebInfo ../

I'm running this command on a Red Hat Enterprise Linux Server release 7.6 OS.

Perform MMS on GPUs

MMS needs to be performed on GPUs; only CPU MMS verification has been done so far.
