warpx's People

Contributors

aeriforme, asalmgren, atmyers, ax3l, dpgrote, eebasso, eloisejyangold, ezoni, gtrichardson, guj, iliancs, jlvay, kzhu-me, ldamorim, lgiacome, lucafedeli88, maxthevenet, mrowan137, n01r, neilzaim, oshapoval, philmiller, pre-commit-ci[bot], prkkumar, remilehe, revathijambunathan, roelof-groenewald, rtsandberg, weiqunzhang, yin-yinjianzhao

warpx's Issues

Replace `<variable>_fp` and `<variable>_cp` by a 2-component array?

The WarpX class stores many variables with either an _fp suffix or a _cp suffix.

e.g.

Vector<std::array< std::unique_ptr<MultiFab>, 3 > > Efield_fp;
Vector<std::array< std::unique_ptr<MultiFab>, 3 > > Efield_cp;

I think some code duplication could be avoided if we use:

Array<Vector<std::array< std::unique_ptr<MultiFab>, 3 > >, 2> Efield;

instead, and index it with PatchType::fine or PatchType::coarse.
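
Something along these lines could work (just a sketch; the enum and names below are placeholders, not a final interface):

enum class PatchType { fine = 0, coarse = 1 };

// One container instead of the Efield_fp / Efield_cp pair:
Array<Vector<std::array< std::unique_ptr<MultiFab>, 3 > >, 2> Efield;

// Access the x-component of E on the fine patch of level lev:
auto& Ex_fine = Efield[static_cast<int>(PatchType::fine)][lev][0];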

@WeiqunZhang @MaxThevenet @atmyers Any opinion on this?

Questions about WarpX

Hi
I have some questions about WarpX usage:

  • Can I use WarpX to simulate beam dynamics with space charge, as in the former Warp?

  • Is it possible to use WarpX to simulate laser-plasma interaction with an ultrathin (500 nm) solid target?

Thanks in advance

Breaking change: allow input decks to work for both 2D and 3D

From our Slack discussion:

It would be useful to be able to run the exact same input deck in both 2D and 3D.
(In that case, when run with 2D, variables such as
geometry.prob_lo = -150.e-6 -150.e-6 -50.e-6
would be read in such a way that only the first and last values would be used.)

The motivation is that it would avoid a lot of copy-paste mistakes: 2D input decks are usually obtained by copy-pasting 3D input decks and modifying variables (such as prob_lo) by hand. However, it is easy to forget a variable, and this can have drastic consequences for the simulation.

For variables that are read by AMReX rather than by WarpX directly (such as geometry.prob_lo), we should create a corresponding WarpX variable (e.g. warpx.prob_lo) and do something like this:

amrex::Initialize(argc,argv);
{
    ParmParse ppw("warpx");
    ParmParse ppg("geometry");
    if (ppw.countval("prob_lo") > 0) {
        Vector<Real> plo;
        ppw.getarr("prob_lo", plo);
#if (AMREX_SPACEDIM == 3)
        ppg.addarr("prob_lo", plo);
#else
        ppg.addarr("prob_lo", {plo[0],plo[2]});
#endif
    }
    // prob_hi, is_periodic, ...
}

In order to implement the above change we should:

  • Change the way in which most vector variables are read, in 2D
  • Add the above mentioned warpx variables, for variables read by AMReX
  • Update the documentation, and clearly mention that only the first and last values are used in 2D (for each vector variable)
  • Remove all the 2D scripts in the Examples folder (they would no longer be needed)

When using GPU: remove atomic add from local j/rho to global j/rho

When running on CPU with OpenMP, we have a local copy of the j/rho arrays for each thread, and we then perform an atomic add from the local copy to the global copy.

When running on GPU, however, we directly perform atomic adds in the deposition kernels themselves. Therefore, we could run the deposition kernels on the global j/rho directly, and avoid the atomic adds from the local j/rho to the global j/rho (a schematic illustration of the two strategies is sketched below).
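
As a schematic illustration of the two strategies (this is not WarpX code; the real kernels deposit onto AMReX MultiFabs, while the sketch below just uses a 1D array and OpenMP):

#include <cstddef>
#include <vector>

// Current CPU strategy: per-thread local buffer, then atomic add into the global array.
void deposit_with_local_buffer (const std::vector<int>& cell, std::vector<double>& j)
{
    #pragma omp parallel
    {
        std::vector<double> j_local(j.size(), 0.0);    // thread-private copy of j
        #pragma omp for
        for (long ip = 0; ip < (long)cell.size(); ++ip) {
            j_local[cell[ip]] += 1.0;                  // no atomics needed here
        }
        for (std::size_t ic = 0; ic < j.size(); ++ic) {
            #pragma omp atomic
            j[ic] += j_local[ic];                      // atomic add: local -> global
        }
    }
}

// Proposed GPU strategy: the kernel already does one atomic add per particle, so it can
// write into the global array directly and the local -> global step disappears.
void deposit_directly (const std::vector<int>& cell, std::vector<double>& j)
{
    #pragma omp parallel for
    for (long ip = 0; ip < (long)cell.size(); ++ip) {
        #pragma omp atomic
        j[cell[ip]] += 1.0;                            // atomic add straight into the global j
    }
}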

Did you ever get "unexpected" results with Examples/ file inputs.2d.boost?

Hi,

Sorry to bother you all with this, but I was wondering if you knew what would be the expected outcome of running WarpX/Examples/Physics_applications/laser_acceleration/inputs.2d.boost.

I am asking because the results I obtain do not seem like the standard blowout regime to me:
Ey-beam-plasma-00300

The above plot, which I will later refer to as "the original case", shows the positions of the plasma electrons in the boosted frame as black dots, superimposed on the electric field along the polarization direction (red-blue colormap). To obtain that plot, I first:

  • updated WarpX, AMReX and PICSAR
  • re-compiled WarpX with make -j 4 DIM=2
  • changed the example input file line to algo.current_deposition = direct
  • commented out the beam and changed the number of particles to 2
    and ran that input file on my computer, which you can find attached:

inputs.2d.boost.txt

Below is the output file I got:
output.txt

I have been testing some aspects, such as the longitudinal resolution, transverse boundary and plasma density.

I doubled the number of cells in z (to have dx/dz_boost > 1), and the result was very different; it does not seem physical to me:

Ey-beam-plasma-00600

I tested the original input but without periodic boundaries, and the results were also different:

Ey-beam-plasma-00300

Doubling the plasma density in the original input resulted in a "quicker/stronger explosion" of the plasma electrons:

Ey-beam-plasma-00300

So I would like to ask you if these results are the expected ones, or if you have any suggestions on how I can start to understand the numerical effects possibly at play in the different cases.

Thanks,
Diana

ps: I started looking at this because my 3D runs were also showing strange results.

Add cuFFT code to spectral solver

While PR #95 implements the spectral solver, it does not link the code to cuFFT. It would be great to add the corresponding code for cuFFT.

Here are a few resources that may help:

  • Compiling the code with the spectral solver + GPU:
    The documentation explains how to compile the code with the spectral solver (on CPU).
    It also explains how to compile the code for GPU, on a local machine and on Summit.
    Currently, there is of course no documentation on how to do both (spectral + GPU), since WarpX is not yet able to do so. One would probably have to add a variable CUFFT_HOME in Make.WarpX, similar to FFTW_HOME for CPU.

  • Implementation: There are placeholders for cuFFT code in PR #95, in Source/FieldSolver/SpectralSolver/SpectralFieldData.cpp (a rough sketch of the corresponding cuFFT calls is shown at the end of this issue).

  • Testing the code: One possibility is to use the files inputs.multi.rt (to test the 3D implementation) and inputs.multi.2d.rt (to test the 2D implementation), in Examples/Tests/Langmuir:

./warpx3Dexecutable inputs.multi.rt
./langmuir_multi_analysis.py diags/plotfiles/plt00040
./warpx2Dexecutable inputs.multi.2d.rt
./langmuir_multi_2d_analysis.py diags/plotfiles/plt00040

where warpxXXexecutable should be replaced by the proper executable name (in the Bin folder). In each case, the second line checks the validity of the results (the Python script will raise an AssertionError if the results are not physically valid).
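
For reference, here is a rough sketch of what the cuFFT calls could look like for a real-to-complex transform (this is only an illustration, not the actual WarpX implementation; d_real and d_cplx are assumed to be device buffers of the appropriate sizes, and error checking is omitted):

#include <cufft.h>

void forward_backward_fft (cufftDoubleReal* d_real, cufftDoubleComplex* d_cplx,
                           int nx, int ny, int nz)
{
    cufftHandle plan_fwd, plan_bwd;
    cufftPlan3d(&plan_fwd, nz, ny, nx, CUFFT_D2Z);   // real -> complex (outermost dimension first)
    cufftPlan3d(&plan_bwd, nz, ny, nx, CUFFT_Z2D);   // complex -> real

    cufftExecD2Z(plan_fwd, d_real, d_cplx);          // forward transform, executed on the device
    // ... update the fields in spectral space ...
    cufftExecZ2D(plan_bwd, d_cplx, d_real);          // backward transform

    cufftDestroy(plan_fwd);
    cufftDestroy(plan_bwd);
}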

Multiple MR levels

All MR simulations have been done with max_level = 1 so far. We should try to go to higher levels. That will probably require changes in the API and in the code (sub-cycling is not recursive so far).

Current deposition should be optimized for GPUs

Currently, the non-optimized current deposition routines (for both direct and Esirkepov) from PICSAR have been ported to the GPU. Versions of these routines that are optimized for the GPU should be created. This will rely on particle tiling being available in AMReX.

Use refinement ratio >2

Currently, WarpX is limited to a refinement ratio of 2 between consecutive levels (this is hard-coded in some places). Could this work with higher refinement ratios?

Mirror inside simulation box

For multi-stage simulations, the capability to reflect a laser pulse at a given location z_mirror (given in the lab frame, moving if running in the boosted frame) inside the simulation box will be required.

yt SlicePlot cannot annotate particles in RZ geometry

This input file can be used to run an RZ simulation for beam-driven plasma acceleration.
inputs.2d.txt
The results can be plotted with

import yt ; yt.funcs.mylog.setLevel(50)
ds = yt.load( './plotfiles/plt00200' ) # Create a dataset object
sl = yt.SlicePlot(ds, 2, 'Ex', aspect=.2) # Create a sliceplot object
sl.set_ylabel(r'$z (\mu m)$') # Set labels y
sl.annotate_grids() # Show grids
sl.annotate_particles(width=(10.e-6, 'm'), p_size=2, ptype='driver', col='black')
# sl.show() # Show the plot
sl.save('img.pdf')

but the line sl.annotate_particles(...) gives the following error:

YTPlotCallbackError: annotate_particles callback failed with the following error: Could not find field '('driver', 'particle_position_r')' in plt00100.

though the data seems to be there in plotfiles/plt00100/driver.
@atmyers would this require a change in yt?

Error when compiling with RZ and PSATD

Dear Developers,

After trying to compile WarpX with USE_RZ=TRUE and USE_PSATD=TRUE (both specified in the GNUmakefile), I got the following error:

dep.py: error: unrecognized arguments: .RZ.EXE

The code compiles without any issue if USE_PSATD=FALSE.

Kind Regards,

Alexandre Bonatto

Add a CONTRIBUTING.md file

Should explain:

  • How to open a new pull request
  • How to run the tests and make sure that the code is not broken

Develop new Python scripts for automated tests on Travis CI

Right now, WarpX uses the set of Python scripts regression_testing in order to run the tests on Travis CI. However, this set of scripts was not meant for this purpose, and is, in some aspects, not ideal.

It would be nice to develop our own set of small scripts that:

  • parse the list of tests to be performed
  • compile the corresponding executable (and avoid recompiling whenever a test reuses the same compilation options as a previous one)
  • whenever compilation fails: print the corresponding error message. (This avoids losing time trying to reproduce, on a local computer, a failure that happened on Travis CI, just to obtain the error message.)

Particle momentum output information units

Hi,

This is just a detail.
It seems to me that the beam momentum output data units are in SI (kg m/s), but the values are given by β γ c (missing the m_e factor responsible for the kg part).

I got this idea from running the 2D LWFA example which defines an electron beam with:

beam.uz_m = 500.
beam.uz_th = 50.

And reading the data in a jupyter notebook:

import yt
import numpy as np
import scipy.constants as cte
import math

ds = yt.load('./plt00000/')
ad = ds.all_data()
part_vz=ad['beam','particle_momentum_z'].to_ndarray()

print(ds.field_info["beam", "particle_momentum_z"].get_units())
print(part_vz[0]/cte.c)
print(part_vz[2]/cte.c)

The output includes:

\rm{kg} \cdot \rm{m} / \rm{s}
416.08929429053114
588.7482681922667

These values are close to the normalized momenta specified in the input file.

Please feel free to let me know if this is just a result of me misunderstanding the data and how I should try to read it correctly.

Thanks,
Diana

WarpX code run errors

Hi, when I run the WarpX code, some errors appear; I have attached them in the file below. Please help me check and resolve them, thank you!
slurm-11555516.txt

back-transformed diagnostics broken

Back-transformed diagnostics are broken in 2D and 3D. On the dev branch, you can run Examples/Physics_applications/laser_acceleration/inputs.2d.boost and read the data in snapshot 3, for instance. This Python code snippet can be executed in a Jupyter notebook:

import read_raw_data
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook

iteration = 3
xmax = 128.e-6
xmin = - xmax

snapshot = './lab_frame_data/' + 'snapshot' + str(iteration).zfill(5)
header   = './lab_frame_data/Header'
allrd, info = read_raw_data.read_lab_snapshot(snapshot, header) # Read field data
Ex = allrd['Ex']
Ey = allrd['Ey']
Ez = allrd['Ez']
nx, nz = Ex.shape
z = info['z'] # Get box length in z
x = np.linspace(-xmax, xmax, nx) # x is not in info

plt.figure(figsize=(6, 8))
extent = np.array([info['zmin'], info['zmax'], xmin, xmax])
plt.subplot(3,1,1)
plt.imshow(Ex, aspect='auto', extent=extent, cmap='seismic')
plt.colorbar()
plt.subplot(3,1,2)
plt.imshow(Ey, aspect='auto', extent=extent, cmap='seismic')
plt.colorbar()
plt.subplot(3,1,3)
plt.imshow(Ez, aspect='auto', extent=extent, cmap='seismic')
plt.colorbar()

It plots the following images:
Screen Shot 2019-05-13 at 11 26 59 AM

A few things are puzzling:

  • The input script reads laser1.polarization = 0. 1. 0. so that the laser field should show in Ey, but it is in Ex
  • The orders of magnitude of Ex and Ey are wrong (the laser field should be ~2e12, not 3e20).

Note that the issue may come from the WarpX back-transformed diagnostics or from Tools/read_raw_data.py, which is used here. @RemiLehe @atmyers would you happen to know what is happening there?

Thanks!

Laser antenna is ignored if it is not initially inside the box

WarpX currently ignores any laser antenna which is not initially inside the box, even if the moving window later contains that laser antenna.

Here is an example using Examples/Physics_applications/laser_acceleration/inputs.2d.

  • With the original parameters (laser antenna at z=9 microns, which is inside the box at t=0): the laser is indeed emitted, and is in the box at iteration 400.
    image

  • With the laser antenna at z=13 microns (knowing that the box at t=0 extends to z=12 microns): no laser is emitted:
    image

PML around the patch boundaries and not at the box boundaries

Currently, the way boundary conditions are handled is not very intuitive. The arguments are:

  • geometry.is_periodic: 3 bools for whether to apply periodic boundary conditions in x, y, and z at the boundaries of the simulation box
  • warpx.do_pml: whether to do PMLs boundaries.

The combination of these arguments allows some configurations and not others:

geometry.is_periodic = 0 0 0 and warpx.do_pml = 0: box reflective, patch no PML
geometry.is_periodic = 0 0 0 and warpx.do_pml = 1: box PML, patch PML
geometry.is_periodic = 1 1 1 and warpx.do_pml = 0: box periodic, patch no PML
geometry.is_periodic = 1 1 1 and warpx.do_pml = 1: box periodic, patch PML

The combination of reflective boundaries for the box with PML for the patches is not possible.

One option would be to separate them further, like:

warpx.boundary_conditions = periodic periodic periodic
warpx.do_pml_patches = 1

where warpx.boundary_conditions would take one value per direction, among periodic, pml and reflective, and warpx.do_pml_patches would be a bool for whether to use PML at the patch boundaries inside the simulation box.
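
A rough sketch of how the proposed parameters could be read (the parameter names and the enum below are placeholders, not a final interface):

#include <AMReX.H>
#include <AMReX_ParmParse.H>
#include <string>
#include <vector>

enum class BCType { periodic, pml, reflective };

void read_boundary_conditions ()
{
    amrex::ParmParse pp("warpx");

    std::vector<std::string> bc_str(AMREX_SPACEDIM, "reflective");
    pp.queryarr("boundary_conditions", bc_str);    // e.g. "periodic periodic periodic"

    int do_pml_patches = 0;
    pp.query("do_pml_patches", do_pml_patches);    // PML around patches inside the box?

    std::vector<BCType> bc(AMREX_SPACEDIM);
    for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) {
        if      (bc_str[idim] == "periodic")   { bc[idim] = BCType::periodic; }
        else if (bc_str[idim] == "pml")        { bc[idim] = BCType::pml; }
        else if (bc_str[idim] == "reflective") { bc[idim] = BCType::reflective; }
        else { amrex::Abort("warpx.boundary_conditions: unknown value " + bc_str[idim]); }
    }
    // ... set the Geometry periodicity and the PML flags from bc and do_pml_patches ...
}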

checkpoint/restart gives a segfault

The checkpoint/restart capability seems to be broken: when testing it with the attached input file, writing the checkpoint files works well, but I get a segfault at the first iteration when trying to restart.

Running in DEBUG mode, the error reads

Grids Summary:
  Level 0   3 grids  55040 cells  100 % of domain
            smallest grid: 80 x 224  biggest grid: 80 x 240
STEP 101 starts ...
0::Assertion `static_cast<long>(i) < this->size()' failed, file "/Users/mthevenet/amrex//Src/Base/AMReX_Vector.H", line 31 !!!
SIGABRT

Furthermore, I think the checkpoint/restart capability has never worked with

warpx.do_moving_window = 1
warpx.do_plasma_injection = 1

This is not an urgent issue, but it will be helpful in the future.
Thanks!

inputs.txt

AddPlasma should be ported to the GPU

When running on GPUs, the AddPlasma routine currently generates the particles on the host, then copies them to the device in one batch. Currently, this does not appear to be a bottleneck, at least on the small tests I did. However, this may not be the case in the future. This should be confirmed on more realistic problem setups, and - if needed - AddPlasma should be ported to generate the particles on the GPU directly.

Translate Fortran routines to GPU-friendly C++

We are progressively moving from Fortran to C++, for portability and consistency reasons. All functions declared in Source/FortranInterface/WarpX_f.H should be replaced with C++ functions within WarpX. This issue is labeled good_first_issue as it contains a lot of small tasks, some (though not all) of which are a good introduction to the WarpX approach to GPU portability. Useful information can be found in the AMReX documentation, on AMReX basics and AMReX on GPU. A minimal example of the typical porting pattern is sketched below.
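
As a minimal (non-WarpX) illustration of that pattern, a Fortran loop over cells becomes an amrex::ParallelFor over a Box, with a GPU-capable lambda acting on Array4 views of the MultiFab data:

#include <AMReX_GpuLaunch.H>
#include <AMReX_MultiFab.H>

void scale_field (amrex::MultiFab& mf, amrex::Real factor)
{
    for (amrex::MFIter mfi(mf, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi)
    {
        const amrex::Box& bx = mfi.tilebox();
        amrex::Array4<amrex::Real> const& a = mf.array(mfi);
        amrex::ParallelFor(bx,
        [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            a(i,j,k) *= factor;   // runs on the GPU when compiled with GPU support
        });
    }
}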

In order to coordinate our work, the table below shows who is working on what. Please update it whenever you start moving any Fortran routine to C++, or if it contains any error, or if you think some functionalities should be removed/added (it is not exhaustive yet).

Functionality | Routines | Assignee | Status
--- | --- | --- | ---
copy attribs | warpx_copy_attribs | @LDAmorim | Done (#206)
divE, divB | warpx_compute_divb_{2d,3d,rz}, warpx_compute_dive_{2d,3d,rz} | @dpgrote | Done (#192)
Yee solver | pxrpush_em{2d,3d}_bvec [should we include ckc and push F?] | @dpgrote | Done (#210)
push PML | WRPX_PUSH_PML{...} (many of them) | @RevathiJambunathan | in progress (#231)
Direct Deposition | depose_jxjyjz_generic{,_2d} (see this issue) | @MaxThevenet | Done (#207)
Esirkepov Deposition | | @dpgrote | Done (#227)
Field Gather | geteb{2dxz,3d}_energy_conserving_generic | @MaxThevenet | Done (#221)
Particle pusher | warpx_particle_pusher{,_momenta} | @RemiLehe @dpgrote | Done (#160)
AddPlasma | see this issue | @WeiqunZhang | Done
Boosted frame diags | see this issue | ??? |
Routines in WarpX_f.F90 | | |
Laser/WarpX_f.F90 | | |
Charge deposition | apply_rz_volume_scaling_rho | @dpgrote | Done (#263)

Laser acceleration example in documentation vs GitHub

Hi,

This is not an important issue, and it might just be me being confused.

But I thought I could let you know about it.

The laser initialization section of the laser-acceleration example input deck found in Examples/Physics_applications/laser_acceleration/inputs.3d of the current master branch differs from the one that can be downloaded from the website documentation at https://ecp-warpx.github.io/doc_versions/dev/running_cpp/examples.html.

The respective sections are:

warpx.use_laser    = 1
laser.profile      = Gaussian
laser.position     = 0. 0. 9.e-6        # This point is on the laser plane
laser.direction    = 0. 0. 1.           # The plane normal direction
laser.polarization = 0. 1. 0.           # The main polarization vector
laser.e_max        = 16.e12             # Maximum amplitude of the laser field (in V/m)
laser.profile_waist = 5.e-6             # The waist of the laser (in m)
laser.profile_duration = 15.e-15        # The duration of the laser (in s)
laser.profile_t_peak = 30.e-15          # Time at which the laser reaches its peak (in s)
laser.profile_focal_distance = 100.e-6  # Focal distance from the antenna (in m)
laser.wavelength = 0.8e-6               # The wavelength of the laser (in m)

and

lasers.nlasers      = 1
lasers.names        = laser1
laser1.profile      = Gaussian
laser1.position     = 0. 0. 9.e-6        # This point is on the laser plane
laser1.direction    = 0. 0. 1.           # The plane normal direction
laser1.polarization = 0. 1. 0.           # The main polarization vector
laser1.e_max        = 16.e12             # Maximum amplitude of the laser field (in V/m)
laser1.profile_waist = 5.e-6             # The waist of the laser (in m)
laser1.profile_duration = 15.e-15        # The duration of the laser (in s)
laser1.profile_t_peak = 30.e-15          # Time at which the laser reaches its peak (in s)
laser1.profile_focal_distance = 100.e-6  # Focal distance from the antenna (in m)
laser1.wavelength = 0.8e-6               # The wavelength of the laser (in m)

in the documentation and in the GitHub input deck, respectively.

The result is that the documentation input-deck leads to null fields (i.e. no laser initialized).

Please feel free to let me know if you need more detailed information to check this.

Cheers,
Diana

Refined solution shifted when using mesh refinement

When using mesh refinement in 2D, the refined solution in the patch is shifted along one axis. The shift is exactly one fine cell:

Solution without mesh refinement and a fine grid:
Ex_it01000_REF_buff0

Solution with mesh refinement
Ex_it01000_buff0

You can see the shift by observing the position of the red particles at the middle.

Here are the input files used to generate the solutions:
Reference solution:
input_REF_slv-ckc_buff0_factGrid2.txt

Solution with Mesh Refinement:
input_slv-ckc_buff0_factGrid1.txt

WarpX code running errors

Hi, after installing the WarpX code, I wanted to run the example in ../Examples/Physics_applications/laser_acceleration/laser_acceleration_PICMI.py. I submit the job with the following job script:

#!/bin/bash
source /WORK/app/toolshs/cnmodule.sh
source /WORK/app/osenv/ln1/set2.sh
export FFTW_HOME=/WORK/app/fftw/3.3.5-double
yhrun --cpu_bind=sockets -N 2 -n 48 ../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex laser_acceleration_PICMI.py

The following errors occurred:

/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.19' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.5' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
/WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /WORK/ac_siom_jsliu_1/qzy/WarpX/lwfa/../warpx/Bin/./main2d.gnu.TPROF.MPI.PSATD.ex)
(the same messages are repeated several more times)

Dumping only a fraction of plasma macro-particles

Hi all,

To make the analysis of results faster (i.e. dumped files smaller), Maxence and I discussed the possibility of storing the information of only a fraction of the total macro-particles in the dumped data.

The selection of this fraction of macro-particles could be done randomly, over all macro-particles in the simulation box, at each dumped time step.

This would be useful for the plasma species, as we might not need the information of all its macro-particles position/momentum/weight.

It could also be applied to beam species, possibly in large 3D runs where 50% of the beam macro-particles could already give enough information to determine the beam properties.

ps: Instead of setting a fraction value, maybe we could also select a fixed number of macro-particles, in case we still wish to see plasma macro-particles at the end of the down-ramp, where the simulation box might contain too few total plasma macro-particles (too few for the fraction to correspond to one or more macro-particles).

Cheers,
Diana

Reading plotfiles from boosted frame examples - ds.field_list error message

Hi all,

I am opening this issue just in case any of you have encountered a similar error message before (and also, if I find out what mistake I am making, by writing it down here no one will repeat it!).

I am trying to change one function (warpx_copy_attribs, which affects back-transformed diagnostics) from Fortran to C++. When I tested the changes in my fork and branch with the boosted-frame examples of the repo, I wasn't able to read the plotfile information with yt.

Using a jupyter notebook, the command ds = yt.load(f) and then reading the ds.domain_... attributes worked well, but the command ds.field_list would lead to the following errors:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-29-373a8b1d101f> in <module>
      1 # Reading and storing the data
      2 for dirn in range(ldirs):
----> 3     read_data(dirn)

<ipython-input-28-7d03cadc1e74> in read_data(dirn)
     26         print(extent[dirn][iterf])
     27 
---> 28         print(ds.field_list)
     29 
     30         for pn in part_name:

~/anaconda2/lib/python3.7/site-packages/yt/data_objects/static_output.py in field_list(self)
    544     @property
    545     def field_list(self):
--> 546         return self.index.field_list
    547 
    548     def create_field_info(self):

~/anaconda2/lib/python3.7/site-packages/yt/data_objects/static_output.py in index(self)
    502                 raise RuntimeError("You should not instantiate Dataset.")
    503             self._instantiated_index = self._index_class(
--> 504                 self, dataset_type=self.dataset_type)
    505             # Now we do things that we need an instantiated index for
    506             # ...first off, we create our field_info now.

~/anaconda2/lib/python3.7/site-packages/yt/frontends/boxlib/data_structures.py in __init__(self, ds, dataset_type)
   1468                 mass_name = 'particle%.1d_mass' % i
   1469                 self.parameters[charge_name] = val[0]
-> 1470                 self.parameters[mass_name] = val[1]
   1471 
   1472     def _detect_output_fields(self):

IndexError: list index out of range

Although the command ad = ds.all_data() wouldn't lead to any error message, I was unable to load 'electrons' data, getting a similar error message as the one above.

Did any of you ever encounter this error message before?

I tested the same inputs with the dev branch (updated) and I still got the same issue. I didn't have any problem opening the plot files when they resulted from the example input file for modeling the uniform plasma.

I have been (and will keep) trying to figure out the mistake that is causing this, but I decided to write it down here in case any of you have suggestions on how to diagnose it. Right now my plan is to start adding details from the boosted-frame input to the uniform-plasma input until it "breaks".

Thanks,
Diana

Installing WarpX code errors

Hi,
I want to install the WarpX code. When I set USE_PSATD = FALSE and type the command make -j 4, it compiles successfully, but once I set USE_PSATD = TRUE and compile the code, errors occur:
13

The compilation environment is also shown in the figure above. Can you help me resolve this? Thank you!

Avoid code duplication for deposition and gather in buffers

When using mesh refinement, WarpX offers the capability to deposit charge and current, or to gather fields, from buffers at a level different from the particle's. The implementation leads to code duplication (roughly the same thing is done twice, once inside the patch and once in the buffers).

This (merged) PR #195 proposes a way to avoid this duplication for current deposition. The same approach could be used to clean the code for charge deposition (this should be very similar to current deposition) and field gather (may be more complicated).

Port Vay pusher to GPU

Currently, only the Boris pusher is ported to the GPU. The Vay pusher should be ported as well.

Problem compiling PICSAR with intel compiler

Hi
I tried to compile the PICSAR library with the Intel compiler v17.0.4 and got some compilation errors.
Here is the dump:

mpif90 -O3 -fopenmp -JModules -ftree-vectorize -c -o src/modules/modules.o src/modules/modules.F90
ifort: command line warning #10006: ignoring unknown option '-JModules'
ifort: warning #10193: -vec is default; use -x and -ax to configure vectorization
mpif90 -O3 -fopenmp -JModules -ftree-vectorize -c -o src/field_solvers/Maxwell/yee_solver/yee.o src/field_solvers/Maxwell/yee_solver/yee.F90
ifort: command line warning #10006: ignoring unknown option '-JModules'
ifort: warning #10193: -vec is default; use -x and -ax to configure vectorization
src/field_solvers/Maxwell/yee_solver/yee.F90(393): error #8194: The COLLAPSE clause or the ORDERED clause with a value require perfectly nested loops: no code may appear before or after the loops being collapsed.
!$OMP DO COLLAPSE(2)
------^
src/field_solvers/Maxwell/yee_solver/yee.F90(415): error #8194: The COLLAPSE clause or the ORDERED clause with a value require perfectly nested loops: no code may appear before or after the loops being collapsed.
!$OMP DO COLLAPSE(2)
------^
compilation aborted for src/field_solvers/Maxwell/yee_solver/yee.F90 (code 1)
Makefile:350: recipe for target 'src/field_solvers/Maxwell/yee_solver/yee.o' failed

Any idea what went wrong?
I could not send this report to the PICSAR Bitbucket repository since there is no issue tracker visible. I hope you can help though.

Denis

Poor performance of the parser for large runs on Cori

The parser shows poor performance when running large simulations on Cori.

As the parser is not thread-safe, using it automatically turns on the flag warpx.serialize_ics, which serializes plasma injection. In simulations where a plasma is constantly injected at one boundary of the box, this significantly slows down the ranks where plasma injection occurs.

One solution would be to add OpenMP pragmas to the parser, or to re-write the parser in C++ to easily deal with threading.

In order to check correctness, an input file and a Python script are attached that check that the plasma density injected with the parser is correct. These scripts are not meant to be used to assess performance. I cannot upload files without an extension or with a .py extension, so I added a .txt extension to both of them...

inputs.txt
plot.py.txt

Explain WarpX parallelization in the documentation

We should add a dedicated page, in the documentation, about the way in which WarpX parallelizes the simulation.

I think this is important because:

  • WarpX parallelizes in a different way than most other PIC codes (i.e. several boxes per MPI rank)
  • It is very easy, as a user, to make choices that result in poor performance: e.g. choosing a small max_grid_size (especially in 2D and on GPU), or choosing a number of MPI ranks for which there is one box on some ranks and 2 boxes on others (in this case, the MPI ranks with one box will be idle ~50% of the time)

The documentation should explain:

  • How the simulation domain is decomposed into boxes; what max_grid_size and blocking_factor control
  • How the boxes are distributed to the MPI ranks (e.g. space filling curve initially, possibility to do dynamic load balancing)
  • What are the pitfalls when choosing max_grid_size and the number of MPI ranks.

Boosted-frame diagnostics give strange results when using mesh refinement.

PR #6 extended the boosted-frame diagnostics so that they also take into account finer levels.

However, it seems that this produces strange results, with "lighter" fields in the mesh refinement patch:
foo

Here is the script that reproduces the problem:
inputs.txt
and here is the plotting code (which uses read_raw_data.py from Tools in the WarpX repository)

# Import statements
import os, glob
import yt ; yt.funcs.mylog.setLevel(50)
import numpy as np
import matplotlib.pyplot as plt
import scipy.constants as scc
from read_raw_data import read_data
import read_raw_data

def get_particle_field(snapshot, species, field):
    fn = snapshot + '/' + species
    files = glob.glob(os.path.join(fn, field + '_*'))
    files.sort()
    all_data = np.array([])
    for f in files:
        data = np.fromfile(f)
        all_data = np.concatenate((all_data, data))
    return all_data

species = 'beam'
iteration = 3
xmax = 256.e-6

snapshot = './lab_frame_data/' + 'snapshot' + str(iteration).zfill(5)
header   = './lab_frame_data/Header'
xmin = - xmax
allrd, info = read_raw_data.read_lab_snapshot(snapshot, header) # Read field data
F = allrd['Ex']
nx, nz = F.shape
x = np.linspace(-xmax, xmax, nx)
z = info['z'] # Get box length in z. x is not in info
print(F.shape)
xbo = get_particle_field(snapshot, species, 'x') # Read particle data
ybo = get_particle_field(snapshot, species, 'y')
zbo = get_particle_field(snapshot, species, 'z')
uzbo = get_particle_field(snapshot, species, 'uz')

plt.figure(figsize=(6, 3))
extent = np.array([info['zmin'], info['zmax'], xmin, xmax])
plt.imshow(F, aspect='auto', extent=extent, cmap='seismic', vmax=2e10, vmin=-2e10)
plt.colorbar()
plt.plot(zbo, xbo, 'k.', markersize=1.)
plt.savefig('foo.png', bbox_inches='tight')

Support for non-CUDA-aware MPI

Currently, when running on the GPU, WarpX requires a CUDA-aware MPI implementation when communicating the particle data (the mesh data does not require one). This is restrictive when running on your local workstation. An option should be added to AMReX to allow non-CUDA-aware MPI in the particle communication code.

OpenPMD output capabilities

When running large simulations, writing the data into plotfiles is extremely fast, but reading the data, either particle data or grid data, can be time-consuming. One reason is that the data is spread over multiple boxes, in order to account for mesh refinement. For many applications, even when the simulation runs with mesh refinement, it would be useful to have a lower-resolution output using the OpenPMD standard, and if possible the HDF5 format, that would be easier and faster to read.

Furthermore, plotfiles can be massive, as all grid quantities and all particles of all species are written to disk. Having the possibility to choose the data written to plotfiles would also be helpful.

At the end of a simulation, two folders would be created: plotfiles and diags (this name is arbitrary). Here are the details of what this issue proposes for each of these outputs:

plotfiles output

Currently, all grid data and all particle data are dumped, so that plotfiles contain grid quantities Ex, Ey, Ez, Bx, By, Bz, jx, jy, jz and particle quantities particle_Ex, particle_Ey, particle_Ez, particle_Bx, particle_By, particle_Bz, particle_momentum_x, particle_momentum_y, particle_momentum_z, particle_position_x, particle_position_y, particle_position_z, particle_cpu, particle_id, particle_xold, particle_yold, particle_zold, particle_uxold, particle_uyold, particle_uzold and particle_weight for all species. It would be useful if the user could specify the species to dump and which particle and field quantities to dump, e.g. with

amr.plot_fields    = Ex, Ez, By, jz
amr.plot_species   = electron ion
amr.plot_particles = particle_position_z, particle_momentum_uz

Output compliant with the OpenPMD standard

The simulation directory would contain a second output directory, say diags, that contains a smaller amount of pre-processed OpenPMD-compliant output. Two types of output are considered below for a 3d simulation: sub-sampled arbitrary-dimension grid data and filtered particle data. The user would specify the name of each diagnostic with

warpx.diagnostics = dump3d line particle_dump
dump3d.type = grid_diagnostics
line.type = grid_diagnostics
particle_dump.type = particle_diagnostics

so that a user could define several grid_diagnostics objects, etc. Here is a more detailed description of each of these:

grid_diagnostics

The user specifies the fields and the output number of points as well as the boundaries of the data to dump, for instance

dump3d.fields = Ex, Ez, By
dump3d.n_cell = 128 64 64 # write 128 points in x and 64 in y and z.
dump3d.lo = -5.e-6 -5.e-5 -30.e-6
dump3d.hi =  5.e-6  5.e-5 120.e-6

for a 3d grid or

line.fields = Ex, Ez, By
line.n_cell = 1 1 64 # write 64 points in z (a single point in x and y).
line.lo = 0 0 -30.e-6
line.hi = 0 0 120.e-6

for a 1d line. The same syntax would also allow 2d slices.

Particle filters

The user specifies the species and can choose a simple filter, for instance

particle_dump.species = beam
particle_dump.filter = "uz > 10."

Note that the specific options to enter in an input file are still to be determined, as they should comply with the PICMI standard, so the choice proposed above is not permanent.

The user may want to dump small data with high frequency (for instance retrieving a line-out at every time step), which will probably require buffering in order not to ruin the simulation performance.

Future improvements

  • Allow grid_diagnostics to return data on a grid of arbitrary dimension that is not aligned with the simulation box. For instance, a user may want to return the Ex field on a line that goes through (0, 0, 0) with direction (1, 0, 1).
  • Allow grid_diagnostics to return data on a set of points that are not aligned, for instance on a sphere or the arc of a circle.

Back-transformed diags without moving window

For the moment, a simulation in a boosted frame can run with or without a moving window, but one can get back-transformed diagnostics only if the moving window is on. Otherwise, one gets the error

0::Assertion `do_moving_window' failed, file "./Source/WarpX.cpp", line 301, Msg: "The moving window should be on if using the boosted frame diagnostic." !!!
SIGABRT

However, it is sometimes useful to run a simulation in a boosted frame without a moving window, or with a moving window propagating along z at speed -c. The attached input file shows the issue, and one can look at the output data, both in the boosted frame and back-transformed to the lab frame, with warpx/Tools/Visualization.ipynb.
Is it possible to have back-transformed diagnostics work independently of the moving window?
Thanks!
inputs.txt

Use the same current deposition code for laser particles and physical particles

The code that performs the current/charge deposition differs in PhysicalParticleContainer.cpp (where it has e.g. been optimized for GPU on the gpu branch) and in LaserParticleContainer.cpp.

Ideally, it would be nice to have a single version of that code, as a member function of the parent WarpXParticleContainer.

Bad inputs options for GPUs should fail earlier

Certain input options, such as the optimized current and charge deposition kernels, don't work on the GPU, nor should we expect them to. However, if you accidentally put one of these in your inputs file, the code currently segfaults, usually only after many timesteps. We should perform a validation step where the code checks whether any of these bad options have been set and fails early with an informative error message.
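
A minimal sketch of such a check (the option values and the message below are placeholders, not actual WarpX parameters):

#include <AMReX.H>
#include <AMReX_ParmParse.H>
#include <string>

void check_gpu_compatible_options ()
{
#ifdef AMREX_USE_GPU
    amrex::ParmParse pp("algo");
    std::string depo = "direct";
    pp.query("current_deposition", depo);
    // Hypothetical list of CPU-only kernels; abort at startup instead of segfaulting later.
    if (depo == "optimized" || depo == "vectorized") {
        amrex::Abort("algo.current_deposition = " + depo +
                     " is not supported on GPU; use 'direct' or 'esirkepov' instead.");
    }
#endif
}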

Status of the spectral solver

This issue keeps track of the progress on the spectral solver, and what remains to be done:

  • Port the non-hybrid solver from Fortran to C++ #95
  • Implement real-to-complex FFT #98
  • Have the solver work on GPU, by linking cuFFT #96 - implemented in #177
  • Allow the spectral solver to compile for GPU without FFTW being installed (i.e. put all FFTW includes/calls between #ifdef guards)
  • Implement OpenMP parallelization and tiling, for the relevant parts of the spectral solver
  • Stop using the hybrid decomposition, when only local FFT are performed #99
  • Implement spectral solver with PML #122
  • Have the spectral solver work with mesh refinement #251
  • Have the spectral solver work with load-balancing #1139
  • Implement the Galilean algorithm #704
  • Implement more advanced Galilean-like algorithms #869
  • Investigate and fix the small differences (typically 1.e-3 relative difference) between the psatd.hybrid_mpi_decomposition=0 (using C++ code) and psatd.hybrid_mpi_decomposition=1 (using PICSAR Fortran code) - see #104
