
hermes-3's People

Contributors

bendudson, bshanahan, hahahasan, johnomotani, mikekryjak, nass13m, tbody-cfs

hermes-3's Issues

Check/account for fuel dilution due to impurities in fixed fraction

Related to #128
Radas also gives the mean charge state <Z> curve. In a hot, strongly-seeded SOL, a non-negligible fraction of the electrons will be associated with the impurity (c_z * <Z> = fraction of electrons associated with the impurity).

Is it worth/possible to account for this dilution?

At a minimum, it might be worth writing the dilution out as a diagnostic, even if we don't use it, to make sure that we haven't associated >100% of the electrons in the SOL with the impurity :)

[Figure: mean_charge_state]
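
As a standalone illustration of the check (function name and numbers are made up for this example, not Hermes-3 code):

#include <iostream>

// fraction:    impurity density as a fraction of electron density (c_z)
// mean_charge: Radas-derived mean charge state <Z> at the local Te
double impurity_electron_fraction(double fraction, double mean_charge) {
  return fraction * mean_charge; // fraction of n_e supplied by the impurity
}

int main() {
  const double f = impurity_electron_fraction(0.05, 8.0); // e.g. 5% neon near <Z> = 8
  std::cout << "Electron fraction from impurity: " << f << "\n";
  if (f > 1.0) {
    std::cout << "Warning: >100% of electrons associated with the impurity\n";
  }
  return 0;
}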

PI Controller broken

In the 1D-recycling example (run exactly as-is), the PI controller does not modify the initial flux source.
The "density_source_multiplier_d+" diagnostic variable is zero, which implies the applied source should be zero, yet the source is unchanged.

Symptoms

  • Input source does not change (constant Sd+_src over time)
  • Domain particle count increases forever (volume integral of (Ne + Nn))

Expected behaviour

  • If the upstream density is above the target, the input source should go to zero until the upstream density falls below the target; the controller should then add a source proportional to how far below it is.

Reproduction steps

  • Run the 1D-recycling example and look at Sd+_src (the particle source) over time in some of the first cells. You will find it doesn't change over time. You can also do a volume integral of the plasma and neutral density and find that it increases over time.

Plot of upstream density, upstream density target and particle source:
[figure]

Plot of total particles in the domain over the full 800 timesteps:
[figure]
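
For reference, a minimal standalone sketch of the behaviour I would expect from the controller, using the gains from the 1D-recycling input (density_controller_p = 5e2, density_controller_i = 5e-4, density_source_positive = false). This only illustrates the expected logic, it is not the actual upstream_density_feedback implementation:

#include <algorithm>
#include <iostream>

int main() {
  const double target = 1e19;         // density_upstream
  const double Kp = 5e2;              // density_controller_p
  const double Ki = 5e-4;             // density_controller_i
  const bool source_positive = false; // density_source_positive

  double error_integral = 0.0;
  double n_upstream = 2e19;           // made-up upstream density, above target
  const double dt = 1.0;              // controller update interval (arbitrary units)

  for (int step = 0; step < 5; ++step) {
    const double error = (target - n_upstream) / target; // relative error
    error_integral += error * dt;
    double multiplier = Kp * error + Ki * error_integral;
    if (source_positive) {
      multiplier = std::max(multiplier, 0.0); // clamp if negative sources are forbidden
    }
    // Applied source = multiplier * source_shape: above-target density should
    // drive the source down, below-target density should drive it back up.
    std::cout << "step " << step << "  multiplier " << multiplier << "\n";
    n_upstream -= 2.5e18; // pretend the plasma responds and the density falls
  }
  return 0;
}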

Inconsistency in neutral diffusion variable names

A recent PR added a neutral diffusion diagnostic variable in the form Dnn<species>, e.g. Dnnd. This was done in neutral_mixed, the component responsible for the 2D neutral model.

This convention is not consistent with the 1D parallel diffusion variables implemented in this PR for evolve_density, evolve_momentum and evolve_pressure, where neutral deuterium diffusion would be Dd_Dpar.

No strong feelings on convention, but I would suggest renaming Dnnd to Dd_D.. and potentially adding the other flavours like in the other PR.

Linear examples cross section not plotting correctly

Running plot_crossection.py after running any of the linear examples results in a blank plot:

[Figure: Ne_crossection_1]

It appears that g_33 is not being correctly output to the data file as g_33 = collect("g_33", path=path, yind=yind).squeeze() in plot_crossection.py yields an array of 1's. Hard coding in the correct array for g_33 in plot_crossection.py yields the correct plot:

[Figure: Ne_crossection_1Works]

I believe the issue was introduced somewhere in #72, as it is not present in #86 (the previously merged pull request) or earlier.

Inconsistent pressure/energy convention

Hermes-3 solves heat transport through a pressure equation. Heat is represented inconsistently throughout the code: sometimes as a pressure and sometimes as a thermal energy. This makes it easy to get confused. I propose that we modify the code so that heat is always represented as an energy. This will make it clearer and more future-proof as we add molecules.

Examples:

  • External ion/electron heat source: pressure
  • Ion/electron perpendicular transport terms: energy
  • Neutral perpendicular transport terms: pressure
  • Atomic reaction heat transfer channels: energy
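
For anyone converting between the two conventions: the thermal energy density is (3/2) times the pressure, so source terms convert with the same factor. A trivial sketch (illustrative only, not existing Hermes-3 code):

// Convert between a pressure source [Pa/s] and a thermal energy source [W/m^3].
double pressure_to_energy_source(double Sp) { return 1.5 * Sp; }
double energy_to_pressure_source(double Se) { return Se / 1.5; }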

Need better compilation process & documentation

The "default" Hermes-3 compilation procedure loads BOUT++ as an external module, but it does not download and compile PETSc, and no guidance on this is provided in the documentation.
I think we need to implement scripts to get PETSc in the correct way (as per @johnomotani's scripts and my version of them). This is especially important since the example cases are now using PETSc solvers by default.

  • Add guidance to documentation, including the required versions of BOUT++ and PETSc and references to install scripts
  • Implement PETSc download and compilation in CMake (?)

@bendudson @johnomotani is the latter possible?

New way to define puffs and pumps

Currently we can have a neutral puff by manually adding a density or energy source to the grid file which is then picked up by neutral_mixed. This was done in #106.
A neutral pump will soon be included in #155 and it's a good opportunity to rethink how we define regions like these.

Setting custom fields in the grid file is not very user friendly, because you have to modify the grid every time you want to change the source. This is borderline unacceptable for the pump: one of the most common 2D workflows is to adjust the pumping speed to reach a specific OMP separatrix density, which can require many adjustments.

Other codes such as SOLEDGE2D and SOLPS allow you to designate the puff and pump regions in the meshing stage. This is not feasible for us as we don't have an interactive meshing tool with this capability.

We think a good solution would be to use the grid file to define the region where you want a puff or pump to be, with the actual sources controlled from the input file. The region definition will be done using one of my existing Python scripts, which will be implemented in xHermes, with an example notebook provided in the new Scripts directory in the Hermes-3 repo.

I propose we define new fields called has_pump and puff_id. We would then need to write a specific neutral_puff component to pick this up and assign a source. We could have several of these and link them to the appropriate ID defined by puff_id as well as to the appropriate species.

The neutral pump is a bit more tricky because the pump will be implemented as an edge recycling region in recycling.cxx (being developed here: #155) with a different recycling coefficient/albedo. Because it depends on edge recycling being present, it sounds less straightforward to have multiple pumps as you'd need to have multiple recycling components. I think we can go for just having one possible albedo for now.
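
To make this concrete, the core of a hypothetical neutral_puff component could look roughly like the sketch below. The names puff_id, this_puff_id and puff_rate are assumptions for illustration, not existing Hermes-3 options:

// Hypothetical sketch: apply a density source only in cells whose grid-supplied
// puff_id matches the id assigned to this puff component in the input file.
Field3D puff_id = 0.0;
mesh->get(puff_id, "puff_id"); // region labels stored in the grid file

Field3D source = 0.0;
BOUT_FOR(i, source.getRegion("RGN_NOBNDRY")) {
  if (puff_id[i] == this_puff_id) {
    source[i] = puff_rate; // puff strength from the input file (normalised elsewhere)
  }
}
add(species["density_source"], source);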

Need to rethink neutral_boundary.cxx and extend it to SOL/PFR

We currently have a component which allows neutrals to lose their energy to the sheath called neutral_boundary.cxx.
It works in a sheath-boundary-like manner: the heat flux to the target is calculated as

q = gamma_heat * nnsheath * tnsheath * v_th

where gamma_heat is a user-specified sheath heat transmission coefficient, nnsheath and tnsheath are the neutral density and temperature at the wall, and v_th is the thermal velocity of the neutrals at the wall, calculated as v_th = sqrt(tnsheath/AA).
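
In code form, the current calculation is essentially (a sketch in normalised units, reusing the names above):

// Neutral heat flux into the wall, as described above
const BoutReal v_th = sqrt(tnsheath / AA);                  // neutral thermal speed
const BoutReal q = gamma_heat * nnsheath * tnsheath * v_th; // energy sink at the wall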

Codes like SOLPS and SOLEDGE2D allow neutrals either to reflect thermally (i.e. a neutral goes to the wall, recombines and returns as a molecule at the wall temperature) or to undergo fast reflection (the neutral bounces off the wall). The ratio of thermal to fast reflection is computed from tables in the TRIM database.

This is different from our implementation in neutral_boundary, where we remove neutral thermal energy at the wall according to the gamma setting and the thermal speed, and do not prescribe a returning energy.

I think that, to be consistent with our assumption of 100% thermal reflection + instant dissociation in the recycling component, we should make neutrals lose all of their energy and momentum to the wall and return as atoms at 3 eV.
What do you think @bendudson ?

Another question is how to calculate the heat transfer into the wall. I can't remember what the reason behind the sheath-like calculation was. Why can't we just do a free extrapolation into the wall?

The reason I bring this up is because we need this BC to work in 2D for the SOLPS/SOLEDGE2D comparison and I will be modifying it soon.

Anomalous sheath flux flag does not prevent anomalous flow through outer targets

I've been dealing with a really challenging mass balance problem for a couple of weeks. My test case has only a Dirichlet core source and regular simple sheath targets - no volumetric sources, no extra recycling or anything like that. The targets are draining 15% more particles than the core is providing.

I have now finally figured it out thanks to a suggestion from @bendudson... it's the anomalous diffusion! My outer targets somehow have particles coming into the domain, and accounting for this resolves the particle balance.
anomalous_diffusion has a flag anomalous_sheath_flux, which is false by default (and left at the default in my model).
When false, it prevents flow by setting Neumann Y boundary conditions:

if (!anomalous_sheath_flux) {
  // Apply Neumann Y boundary condition, so no additional flux into boundary
  // Note: Not setting radial (X) boundaries since those set radial fluxes
  for (RangeIterator r = mesh->iterateBndryLowerY(); !r.isDone(); r++) {
    N2D(r.ind, mesh->ystart - 1) = N2D(r.ind, mesh->ystart);
    T2D(r.ind, mesh->ystart - 1) = T2D(r.ind, mesh->ystart);
    V2D(r.ind, mesh->ystart - 1) = V2D(r.ind, mesh->ystart);
  }
  for (RangeIterator r = mesh->iterateBndryUpperY(); !r.isDone(); r++) {
    N2D(r.ind, mesh->yend + 1) = N2D(r.ind, mesh->yend);
    T2D(r.ind, mesh->yend + 1) = T2D(r.ind, mesh->yend);
    V2D(r.ind, mesh->yend + 1) = V2D(r.ind, mesh->yend);
  }
}

It's a bit tricky to debug this because the BC is overwritten by the sheath boundary condition later, which (I think) means the user cannot recover the exact state the simulation is in when the code makes this calculation. The only clue I have is that all the flow seems to be on the outer targets while the inner targets have exactly zero flow.

@bendudson maybe we could just set the anomalous diffusion coefficients to 0 in the final cells instead of temporarily changing the BC? Would there be any issues with this?
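
For concreteness, the alternative could look roughly like the untested sketch below, assuming the coefficient fields are named anomalous_D, anomalous_chi and anomalous_nu:

// Untested sketch: zero the anomalous coefficients in the cells next to the Y
// (target) boundaries instead of temporarily resetting N2D/T2D/V2D there.
if (!anomalous_sheath_flux) {
  for (RangeIterator r = mesh->iterateBndryLowerY(); !r.isDone(); r++) {
    anomalous_D(r.ind, mesh->ystart) = 0.0;
    anomalous_chi(r.ind, mesh->ystart) = 0.0;
    anomalous_nu(r.ind, mesh->ystart) = 0.0;
  }
  for (RangeIterator r = mesh->iterateBndryUpperY(); !r.isDone(); r++) {
    anomalous_D(r.ind, mesh->yend) = 0.0;
    anomalous_chi(r.ind, mesh->yend) = 0.0;
    anomalous_nu(r.ind, mesh->yend) = 0.0;
  }
}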

Set fields to 0 in guard cells if not used for calculation

Currently many fields (such as the atomic reaction channels and others) have garbage in the guard cells. Fields such as the ionisation source don't use the guard cell values at all, while fields such as the ion momentum use only the inner guard cells and ignore the outer ones. This is confusing to new users and requires careful treatment in post-processing.

It would be very helpful if unused guard cell values in all fields were set to 0.

Add more detail on slope limiter choice in docs

Current testing shows that MinMod is excellent for 1D cases, which benefit from the extra numerical dissipation, but that extra dissipation changes the result noticeably in my 2D test cases. The docs should give more guidance on the appropriate slope limiter choice. Happy to do this one, just leaving an issue so I don't forget.

Unpredictable coordinate normalisation in output

I am finding that geometry parameters (dx, dy, dz, J, g_22 etc) are sometimes stored in normalised and sometimes in unnormalised units within the Hermes-3 dump files. I am now able to reproduce this consistently:

  • I have two cases that are quite similar and have an identical grid
  • I read each one and sum its dx : print(collect("dx", path = casepath, info = False).sum())
  • I find that the first case sums to 5.74 and the second to 458349.82.
  • Both are correct, but the former is in SI units. I know this because checking the product dx * dy * dz * J results in the correct domain volume.

I've had a look through the Hermes-3 code and I'm not sure where the metric components get written to file, or where they could be normalised. I did find this, which is part of the Mesh Python function somewhere within BOUT++ and normalises the geometry, but I can't find it being called anywhere.
I also found that the geometry is loaded in load_metric here, where the dimensions are normalised (since the grid supplies them in SI units). However, I can't see them being written to file with set_with_attrs() anywhere in the code.
I have looked through BOUT++'s Coordinates.cxx and found where the coordinates are calculated and communicated (here), but I am not sure what "communicate" means (to the grid? to the solver?) and I cannot find any normalisations there either.

Any ideas @bendudson ?

Inconsistent input file normalisation handling

There are some key input file settings that need to be in normalised units, for example when you are setting a boundary condition or an initialisation function.
However, there are also some user inputs such as recycle_energy which are in SI units.
I think this is confusing - I just made the mistake of normalising recycle_energy in the input file unnecessarily.
Is there a way to change function and boundary conditions to SI units, or alternatively, shall we just settle on normalised units in the input file?

@bendudson let me know what you think.

Improve BOUT-dev module

By default, Hermes-3 (and SD1D) download and compile their own BOUT-dev. However, from what I can tell, this requires PETSc to already be compiled and the environment variables to be set correctly for you to get PETSc support (never mind Hypre).

This makes it challenging to get PETSc capability which is extremely useful for running SD1D - especially in environments where there are already other PETSc installations for other codes.

There is a great way to compile BOUT that @johnomotani prepared: https://github.com/boutproject/BOUT-configs/tree/generic-next.
These scripts automatically compile PETSc and link it to BOUT without putting anything on PATH and interfering with the environment.

Every time I compile Hermes-3 or SD1D I have to compile BOUT++ separately in the way described above, which is the only way I know of that doesn't break things. I have also told others to do it this way as it's currently the easiest.

It would be really, really awesome if Hermes-3 and SD1D downloaded BOUT the way that @johnomotani's scripts do it.

Feature wish list

  • LaTeX output: It would be good to have a way to check which equations the code was solving. One way would be to output LaTeX equations.
    The terms included may depend on the quantities available in the state, so this should be taken into account. This would help catch errors caused by putting components in the wrong order.

Incorrect console output when restarting

When restarting a case, the console output step count does not reset (it continues from "Step 250 of 250" and runs past the total) and some of the reported times/ETAs go negative:

Sim Time  |  RHS evals  | Wall Time |  Calc    Inv   Comm    I/O   SOLVER

5.988e+05          1       1.40e-01   -23.9    0.0   24.5  130.0  -30.6
|  Step 250 of 250. Elapsed 0:00:00.1 ETA 0:00:-0.3	Option pardiv:type = cyclic (default)
	Option fft:fft_measurement_flag = estimate (default)

6.011e+05      13656       9.07e+00    78.9    0.0    1.5    0.7   18.9
/  Step 250 of 250. Elapsed 0:00:09.2 ETA 0:00:-18.1
6.035e+05       6164       4.26e+00    76.9    0.0    1.6    1.0   20.5
-  Step 251 of 250. Elapsed 0:00:13.5 ETA 0:00:-12.8
6.059e+05       6313       4.33e+00    77.4    0.0    1.5    1.0   20.1

This was flagged by Brendan and George but I haven't seen it myself until now. I suspect this could be because I only just updated to the latest master. Will see if I can reproduce with an example case and figure out which commit it was.

Compilation fails

Hello,

I compiled hermes-3 against the BOUT-dev master branch (currently version 4.3.1), but it fails to build with the error shown below. It seems that include/hermes_utils.hxx is missing.

  Compiling  collisions.cxx
collisions.cxx:6:10: fatal error: ../include/hermes_utils.hxx: No such file or directory
 #include "../include/hermes_utils.hxx"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[1]: *** [../../../make.config:326: collisions.o] Error 1
make: *** [../../make.config:141: src] Error 2

Thanks.

Inconsistent velocity definition

In evolve_momentum.cxx, we define velocity as the momentum variable NV divided by (AA * N), i.e. by the atomic mass number times the density. This is also where velocity and density are saved out.

V = NV / (AA * N);

But in sheath_boundary_simple.cxx, we no longer have the atomic mass (AA) factor when calculating the guard cell flux NVi[ip]. N.B. this expression comes from NVi(sheath) = [NVi(guard) + NVi(last)] / 2, rearranged into NVi(guard) = 2*NVi(sheath) - NVi(last).

NVe[im] = 2 * Me * nesheath * vesheath - NVe[i];

This is also visible in the results. Dividing NVd+ by Nd+ and then by Vd+ on raw output (no de-normalisation, no processing) gives a factor of 2 everywhere apart from in the guard cells.

I am getting a bit confused as to which one is right. I thought that NVi = Ne * Vi by definition and represents the particle flux, and so has no link to atomic number or mass. Another source of confusion is that NVi is often labelled as momentum while (I think?) it is a flux in [m^-2 s^-1].

@bendudson thoughts?

Errors and suggestions for documentation/code clarity

1. Diffusion documentation error

Under neutral_parallel_diffusion in this part of the manual (https://hermes3.readthedocs.io/en/latest/components.html#species-pressure-and-temperature) the equation for the diffusion coefficient is:

D_n = (B / B_pol)^2 * T_n / (A * nu)

I think the T_n should be v_th^2. The code does use T_n, but only because of the thermal velocity normalisation (which gives T = v_th^2 when both T and v_th are normalised).

2. Diffusion code clarity

I think it may be useful to modify the code as well so this fact is easier to understand. In SD1D vth_n is explicitly calculated from T and there is a helpful comment that makes it clear what's going on. Maybe we could modify Hermes-3 to suit.

SD1D approach:

[561] BoutReal vth_n = sqrt(tn); // Normalised to Cs0
...
[581] Dn(i, j, k) = dneut(i, j, k) * SQ(vth_n) / sigma;

Permalink to the above line in SD1D code: https://github.com/boutproject/SD1D/blob/60160d8d9df3eee8f265d02d6b69f2d70e3a5276/sd1d.cxx#L581

3. AMJUEL reaction code clarity

In the AMJUEL reaction (which currently calculates recombination, ionisation and excitation) there can be energy and momentum transfer. For example, an ionisation reaction results in a source of energy and momentum which comes through the neutral that has just been ionised. The "donor" and "recipient" species are both described as ions, which is confusing, especially since both recombination and ionisation involve a neutral and an ion. I suggest we rename these to "from_species" and "to_species"; the current code reads:

subtract(from_ion["density_source"], reaction_rate);
add(to_ion["density_source"], reaction_rate);

Inconsistent radiation sign convention

The fixed fraction impurity radiation diagnostic variable is written as positive (i.e. energy lost from plasma is positive):

  // Remove radiation from the electron energy source
  subtract(electrons["energy_source"], radiation);
}

void outputVars(Options& state) override {
  AUTO_TRACE();
  if (diagnose) {
    set_with_attrs(state[std::string("R") + name], radiation,
                   {{"time_dimension", "t"},
                    {"units", "W / m^3"},
                    {"conversion", SI::qe * Tnorm * Nnorm * FreqNorm},
                    {"long_name", std::string("Radiation cooling ") + name},
                    {"source", "fixed_fraction_radiation"}});
  }
}

Whereas the hydrogenic radiation is written as negative (i.e. energy lost from plasma is negative, which is also the SD1D convention):

  if (diagnose) {
    S = reaction_rate;
    F = momentum_exchange;
    E = energy_exchange;
    R = -energy_loss;
  }
}
}

https://github.com/bendudson/hermes-3/blob/f7574a6a140689be356bc8d066cf7f0a03974482/include/amjuel_hyd_ionisation.hxx#L90C65-L97

I think we should use the latter convention to be more consistent with SD1D.
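
Concretely, the change could be as small as flipping the sign at output time (a sketch of the proposal, not merged code):

// Store fixed-fraction radiation as negative, matching the hydrogenic / SD1D convention
set_with_attrs(state[std::string("R") + name], -radiation,
               {{"time_dimension", "t"},
                {"units", "W / m^3"},
                {"conversion", SI::qe * Tnorm * Nnorm * FreqNorm},
                {"long_name", std::string("Radiation cooling ") + name},
                {"source", "fixed_fraction_radiation"}});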

Some examples broken

Running examples/tokamak/diffusion results in:
Error encountered: Option h:charge has no value--
The file has no charge or AA for h.

Running examples/tokamak/diffusion-conduction results in:

Unused option 'Te:bndry_all', did you mean:
        Pe:bndry_all            # type: string
        Nh+:bndry_all           # type: string
        Ph+:bndry_all           # type: string
        e:bndry_flux            # type: bool, doc: Allow flows through radial boundaries
Unused option 'Th+:bndry_all', did you mean:
        Nh+:bndry_all           # type: string
        Ph+:bndry_all           # type: string
        Pe:bndry_all            # type: string
        h+:bndry_flux           # type: bool, doc: Allow flows through radial boundaries

Running examples/tokamak/recycling and examples/tokamak/diffusion-transport results in a crash:

Sim Time  |  RHS evals  | Wall Time |  Calc    Inv   Comm    I/O   SOLVER

0.000e+00          1       2.55e-02   -331.9    0.0  344.4  203.0  -115.6
|  Step 0 of 50. Elapsed 0:00:00.0 ETA 0:00:01.3
[CVODE ERROR]  CVode
  At t = 0 and h = 1.842e-16, the corrector convergence test failed repeatedly or with |h| = hmin.


[CVODE ERROR]  CVode
  At t = 0 and h = 1.842e-16, the corrector convergence test failed repeatedly or with |h| = hmin.

Removing anomalous_d from the electrons resolves the crash. Replacing it with a very small value, or other values, doesn't seem to resolve it.

Bundled googletest is old

The bundled googletest is very old.
This is causing issues with CMake for me; updating to a recent release (from within the last year) would resolve them.

I haven't checked whether there are breaking changes ...

Neutral diffusion doesn't count charge exchange

Neutral diffusion (neutral_parallel_diffusion.cxx) depends on the collisionality:

auto nu = get<Field3D>(species["collision_frequency"]);
// Diffusion coefficient
Field3D Dn = dneut * Tn / (AA * nu);

Currently species["collision_frequency"] appears to only have contributions from the Collisions component, which in the case of neutrals can contribute collision frequency of n-n, n-i and n-e collisions.

However, we also have charge exchange (CX) which is calculated using an AMJUEL rate in hydrogen_charge_exchange.cxx. It does not contribute to species["collision_frequency"]. I don't think this is correct - charge exchange should dominate neutral collisions in detached conditions.

I think that the AMJUEL rate already accounts for n-i collisions that don't result in a CX reaction... so maybe we could assume that if a CX reaction is enabled for a particular pair of species, then it is added to species["collision_frequency"] and any n-i collisions for that pair are disabled in the Collisions component. Alternatively that could be left to the user. What do you think?
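
As a sketch of the first option, hydrogen_charge_exchange.cxx could register the CX rate as a collision frequency for the neutral species so that neutral_parallel_diffusion picks it up. This is illustrative only; atom stands for the neutral species' Options slot, and the floor guards against division by zero:

// Sketch: convert the volumetric CX rate [m^-3 s^-1] into a per-particle collision
// frequency for the neutral species and register it alongside the Collisions ones.
const Field3D Nn = floor(get<Field3D>(atom["density"]), 1e-10);
add(atom["collision_frequency"], reaction_rate / Nn);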

Implement proper handling of non-orthogonal grids

Captured from Slack chat by Ben Dudson and John Omotani.

From Ben:

The places where these non-orthogonal terms are important are the operators FV::Div_f_v, Div_n_bxGrad_f_B_XPPM and FV::Div_a_Laplace_perp.

So we'd need to make sure that those three operators include off-diagonal terms coming from X and Y being non-orthogonal. Those would (I think) appear as non-zero g12 or g_12 terms.

FV::Div_f_v calculates the expression Div( A ) = 1/J * d/di ( J * A^i ) and so (I think) includes all the terms needed

FV::Div_a_Laplace_perp (https://github.com/boutproject/BOUT-dev/blob/master/src/mesh/fv_ops.cxx#L12) has g23 terms but not g12, so I think is missing those terms

Same for Div_n_bxGrad_f_B_XPPM : g23 terms are there but not g12 terms

Conclusion: Hermes-3 does not include all terms needed for non-orthogonal grids in the poloidal plane. Neither did SOLPS before 2019, but Hermes-3 uses fluid neutrals, which are probably more sensitive to this approximation than Monte Carlo neutrals.
It might not be so straightforward: in 3D the shifted X and Y derivatives don't commute, and this becomes particularly obvious around the X-point branch cut. Needs a bit of thinking about... This is not an issue in 2D axisymmetric simulations, but those operators need to work in 3D too.

From John: I think it should work in 3D - we thought about and implemented support for corner guard cells here boutproject/BOUT-dev#1885

Slowdowns when using fixed_fraction_neon

I've noticed large slowdowns when running simulations with fixed_fraction_neon. There's an example below, adapted from the 1D-recycling example with fixed_fraction_carbon and fixed_fraction_neon added. I've run the same case with only fixed_fraction_carbon and saw no slowdown.

I'm running with Hermes-3 commit 0af0890 built with BOUT++ commit 17f728f.

[Figure: 2023-06-02_1_density_evolution]

[Figure: 2023-06-02_1_timer]

Steady state is reached at around timestep 70, but at timestep 127 the code starts taking 1-2 orders of magnitude longer per output step. Because this happens after reaching steady state it doesn't affect any results or physics, but it could be a problem in other cases where the slowdown starts before steady state is reached.

Input file:

nout = 400     # number of output time-steps
timestep = 5000.  # time between outputs

MXG = 0    # No guard cells needed in X



[mesh]

nx = 1
ny = 400    # Resolution along field-line
nz = 1

length = 25        # Length of the domain in meters
length_xpt = 12.5  # Length from midplane to X-point [m]

dymin = 0.1  # Minimum grid spacing near target, as fraction of average. Must be > 0 and < 1

dy = (length / ny) * (1 + (1-dymin)*(1-y/pi))   # Parallel grid spacing [m]

area_expansion = 1 # Expansion factor Area(target) / Area(midplane)
J = 1 + (mesh:area_expansion - 1) * H(y - mesh:y_xpt)*(y - mesh:y_xpt)/(2*pi - mesh:y_xpt)

# Calculate where the source ends in grid index
source = length_xpt / length
y_xpt = pi * ( 2 - dymin - sqrt( (2-dymin)^2 - 4*(1-dymin)*source ) ) / (1 - dymin)

ixseps1 = -1   # Branch-cut indices, specifying that
ixseps2 = -1   # the grid is in the SOL



[hermes]

# Evolve ion density, ion and electron pressure, then calculate force on ions due
# to electron pressure by using electron force balance.
components = (d+, d, e,
              sheath_boundary, collisions, recycling, reactions,
              electron_force_balance, neutral_parallel_diffusion,
              c, ne)

loadmetric = false
normalise_metric = true

Nnorm = 1e19
Bnorm = 1
Tnorm = 100



[solver]

type = beuler
use_precon=false  # False with beuler, True with cvode
snes_type = newtonls
ksp_type = gmres
max_nonlinear_iterations = 10

atol = 1e-7
rtol = 1e-5



[sheath_boundary]

lower_y = false
upper_y = true



[neutral_parallel_diffusion]

dneut = 10  # (B / Bpol)^2 in neutral diffusion terms



# Ions
# ----

[d+]  # Deuterium

type = (evolve_density, evolve_pressure, evolve_momentum,
        noflow_boundary, upstream_density_feedback)

noflow_lower_y = true
noflow_upper_y = false

charge = 1
AA = 2

density_upstream = 1e19
density_source_positive = false
density_controller_i = 5e-4
density_controller_p = 5e2

thermal_conduction = true

diagnose = true

recycle_as = d
recycle_multiplier = 1



[Nd+]

function = 1

source_shape = H(mesh:y_xpt - y) * 1e20



[Pd+]

function = 1

powerflux = 2e7
source = (powerflux*2/3 / (mesh:length_xpt)) * H(mesh:y_xpt - y)  # Input power as function of y



[NVd+]

function = 0



# Neutrals
# --------

[d]  # Deuterium

type = (evolve_density, evolve_pressure, evolve_momentum,
        noflow_boundary)

charge = 0
AA = 2

thermal_conduction = true

diagnose = true



[Nd]

function = 0.001



[Pd]

function = 0.0001



# Electrons
# ---------

[e]

type = quasineutral, evolve_pressure, zero_current, noflow_boundary

noflow_upper_y = false

charge = -1
AA = 1/1836

thermal_conduction = true

diagnose = true



[Pe]

function = `Pd+:function`

source = `Pd+:source`



# Carbon
# ------

[c]

type = fixed_fraction_carbon
fraction = 0.05  # 5% of electron density
diagnose = true  # To save radiation



# Neon
# ----

[ne]

type = fixed_fraction_neon
fraction = 0.05  # 5% of electron density
diagnose = true  # To save radiation



# Reactions and recycling
# -----------------------

[recycling]

species = d+



[reactions]

type = (
        d + e -> d+ + 2e,  # Deuterium ionisation
        d+ + e -> d,       # Deuterium recombination
        d + d+ -> d+ + d,  # Charge exchange
       )

diagnose = true

PI controller for 2D/3D simulations

For a valid comparison between models and simulations, you often tune the recycling fraction (or pump albedo) to a given upstream separatrix density. With my current ST40 cases it can take 12 hours to see where one particular recycling fraction setting ends up, and it can take many iterations (about 5 on average) to reach the desired density. This gets very challenging at high densities, where the result becomes much more sensitive to the recycling fraction. In practice one simulation takes several days and a lot of user input to get right.

I initially thought that the controller would be too tricky to tune especially considering the nonlinearity of the effect of the recycling fraction on the particle balance, but @bshanahan mentioned that something like this was implemented in Hermes-2.

@bendudson @hahahasan do you have any experience with this from Hermes-2? I see two main challenges: it takes a long time for a change in recycling fraction to have an effect, and the impact of the fraction will be highly nonlinear, since the domain particle count goes to infinity as the fraction approaches 100%.

Simple_conduction crashes/extreme poor performance

SOLEDGE2D has a simple conduction model in which only ee and ii collisions are used for the electron and ion conductivities, respectively. Hermes-3 includes all collisions of a given species by default when the thermal_conduction flag is enabled in evolve_pressure. This includes charge exchange as well as any collisions enabled in the collisions component (by default ee, ii, ie, nn, en and in are all enabled).

I disabled the thermal_conduction flag and added a new species-level component, simple_conduction, for both the ions and the electrons. This outright crashed my full- and low-resolution cases (CVODE error -4, meaning solver failure).

I tested it in 1D-recycling and did get it to run, but I am seeing a slowdown of ~200x.
I've only got three timesteps so far, and each one has taken longer than the previous one. The results appear to move in the right direction, i.e. increased conduction due to the reduced set of collisions considered, for both the ions and the electrons:
[figure]

However, upon inspecting the target I found a serious anomaly in the electron pressure:
[figure]

This is also visible in the ion pressure and the ion momentum, and points to something being wrong at the boundary.

Here is the input file I used (just 1D-recycling with the above changes):
input_file.zip

1D neutral density positivity scheme not robust

I am running 1D STEP transient pulse cases and finding that the neutral density goes very negative if a sufficiently large plasma wave hits the upstream boundary. Starting the case from scratch with a 2% argon fraction and a 6.5e19 m^-3 density is sufficient for this to happen; reducing the initial argon fraction to 1% prevents it.

Nd goes very negative while Pd stays positive. Timesteps with negative Nd are extremely slow (roughly 10000x slower), to the point of being unworkable.

After some digging I think it may be because our 1D positivity scheme is not the same as the 2D one (which we have been testing quite a bit and which should be very robust).

The neutral_mixed component is our neutral model in 2D/3D. In 1D neutral transport is controlled by neutral_parallel_diffusion which comes from SD1D.

neutral_parallel_diffusion only floors pressure:
https://github.com/bendudson/hermes-3/blob/0af08904f5730aaef637e41da6c70fdf60778c46/src/neutral_parallel_diffusion.cxx#L34C1-L36

While neutral_mixed floors density, calculates pressure from it and then does another floor:
https://github.com/bendudson/hermes-3/blob/0af08904f5730aaef637e41da6c70fdf60778c46/src/neutral_mixed.cxx#L118C1-L130

I am not sure if this is the cause, but making the scheme consistent is probably a good first step.
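
A first step towards consistency could mirror the neutral_mixed ordering, something like the sketch below (floor values are illustrative; the actual ones are at the links above):

// Sketch: floor the density first, derive the temperature from the floored density,
// then floor the resulting pressure, as neutral_mixed does.
Field3D Nnlim = floor(Nn, 1e-8);
Field3D Tn = Pn / Nnlim;
Field3D Pnlim = floor(Nnlim * Tn, 1e-10);
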
Below is a picture with the symptoms showing the final few timesteps at the upstream boundary. You can see a wave approaching the boundary and breaking, causing negative Nd in the final 2 timesteps. I have also observed this occurring on the way to the midplane, which I think rules out the boundary as a cause. I suspect it's just that the solver is struggling with the coarse grid there.

[Figure: wavebreak-negativeNd]

Franck-Condon Energy Missing

In SD1D, recycled neutrals start with 3.5 eV, the Franck-Condon energy.
I cannot find any neutral energy source term linked to recycled neutrals, which means that they start with zero energy... unless I've missed something obvious!
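
If it really is missing, the fix could be something like the sketch below (illustrative only: recycle_source stands for the recycled-neutral particle source in normalised units, and whether an extra 3/2 factor is needed depends on whether the 3.5 eV is treated as an energy or a temperature):

// Sketch: give recycled neutrals the Franck-Condon energy, as in SD1D
const BoutReal Efc = 3.5 / Tnorm; // 3.5 eV in normalised temperature units
add(neutrals["energy_source"], recycle_source * Efc);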

1D-recycling example crashes when run in parallel

Ran 1D-recycling with the default settings:

[solver]
type = beuler # Backward Euler steady-state solver
snes_type = newtonls
ksp_type = gmres
max_nonlinear_iterations = 10

This crashes after 364 out of the 400 iterations when running in parallel on 10 cores. There is a standard error message:
Error encountered in solver run
Solver failed after many attempts

When Hypre is enabled:

[solver]
type = beuler # Backward Euler steady-state solver
snes_type = newtonls
ksp_type = gmres
max_nonlinear_iterations = 10
pc_type = hypre

The case crashes after only 8 iterations with the same error.
It runs fine with beuler on 1 core (default settings).

Thanks @tbody-cfs for flagging this up.
@bendudson have you done much testing of the example cases with Hypre? I have almost never run with it myself.

Clean up docs/ ?

There is a file docs/manual/hermes-2-manual.tex which does not appear to be actually used to build the manual, but is potentially confusing for someone trying to update the docs! Should it be removed?

1D: Better flux expansion inputs/documentation/examples

In SD1D we were able to define the flux expansion by setting the area parameter in the input file:

https://github.com/boutproject/SD1D/blob/dc6832feaffb8852662aad08026d7850356d531f/sd1d.cxx#L262-L265

In Hermes-3 we don't have this implemented, but the same result can be obtained by directly setting J in the input file and using the same expression as in SD1D. I think to make it a bit clearer we could implement the area input as in SD1D, or alternatively write a short section in the documentation on how to achieve flux expansion in 1D (potentially the latter is best). I think we could also use some example cases and tests that feature flux expansion.

Neutral pump source

Need to implement a neutral pump component to allow the draining of neutrals in a more realistic fashion. The initial implementation will likely be a regular particle sink in a designated row of edge cells (possibly by passing an array to the grid file). A better implementation could spread this in a Gaussian fashion over more cells. There are probably pitfalls here that risk causing numerical issues.

Confusing X guard cell values for Pn, Nn, Tn

I've been struggling with calculating perpendicular neutral particle and heat transport - I've been getting extreme values in the final radial fluid cells. I have now figured out what's wrong: the X guard cell values for Pn, Nn and Tn are zero.

If I manually set their value in the guard cells to the final radial cell value, my neutral transport calculation is correct, which suggests that this is what is happening inside the code, and the guard cells are set to zero at some other stage. I have not been able to find anything in the code that would do this though.

One thing I've realised is that I am not setting any boundary condition for Nn, Pn or Tn in the input file, and so they are set to dirichlet(0) by default. This is very confusing because neither of the above behaviours (guard cell = 0, or guard cell = final cell) corresponds to a dirichlet(0)!

neutral_mixed.cxx explicitly sets a dirichlet(0) condition on Dnn and I can confirm that the Dnn guard cell values do indeed correspond to this as opposed to Nn, Pn and Tn.

Improving the perpendicular neutral flux limiter

The neutral flux limiter currently limits the flux with a simple ceiling:

if (flux_limit > 0.0) {
  // Apply flux limit to diffusion,
  // using the local thermal speed and pressure gradient magnitude
  Field3D Dmax = flux_limit * sqrt(Tn / AA) /
                 (abs(Grad(logPnlim)) + 1. / neutral_lmax);
  BOUT_FOR(i, Dmax.getRegion("RGN_NOBNDRY")) {
    Dnn[i] = BOUTMIN(Dnn[i], Dmax[i]);
  }
}

It would be better if instead it asymptotically approached the limit, just like in the electron conduction flux limiter:

// This results in a harmonic average of the heat fluxes
kappa_par = kappa_par / (1. + abs(q_SH / floor(q_fl, 1e-10)));

This would probably make it easier on the numerics.
Thanks @bendudson for the idea.
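
A sketch of what that could look like, reusing the names from the block above (untested):

if (flux_limit > 0.0) {
  // Maximum diffusion coefficient from the local thermal speed and pressure gradient
  Field3D Dmax = flux_limit * sqrt(Tn / AA) /
                 (abs(Grad(logPnlim)) + 1. / neutral_lmax);
  // Harmonic-average form: Dnn rolls over smoothly towards Dmax instead of being clipped
  Dnn = Dnn / (1. + Dnn / floor(Dmax, 1e-10));
}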

Additional diagnostics

Summarising outstanding additional diagnostics to be implemented in a single issue for the sake of documentation. The anomalous diffusion coefficients are needed because they are often implemented as profiles when looking at tokamak cases. The collisions are needed to both allow a complete momentum and energy balance per species, and to enhance debugging of any other components that depend on collisionality, such as neutral diffusion.

  • Individual collisionality contributions of each [collisions] reaction and any others such as CX
  • Energy and momentum transfer channel variables for each collision
  • Anomalous diffusion coefficients (D, Chi, Nu): completed

Sporadic extreme slowdowns in deep detachment after settling for very long time

Thanks to Matt Khan for spotting this in his simulations and doing the initial debugging.

Our 1D STEP simulations in deep detachment (front about 10 m from the target) slow down substantially upon front movement when very near steady state. This seems to appear only once the simulation has run for hundreds of ms, to the point where front movements are hundreds of ms apart; it does not happen earlier in the simulation, where the front easily jumps from cell to cell in the same region of the domain without issue.

Here is a figure showing the wall time spikes in blue, momentum spikes in green and front position in red:
[figure]

The symptom is an extremely long iteration (10hrs vs. typical 3min on 40 cores) just before front movement.

Here is a decomposition of ddt() components around the time of the spike. The front movement happens in the final spike in residuals, but the wall time spike happens just before. There doesn't appear to be any helpful pattern to this:
[figure]

I have plotted profiles at several timeslices and found nothing special about this particular front jump:
[figures]

I have also plotted the domain integral histories of the atomic and impurity radiation reactions. It tells us nothing new, apart from the fact that the hydrogenic reactions are changing rapidly during the slowdown (but almost as rapidly before it and more rapidly afterwards). Reaction rates are highly nonlinear but do not feature a preconditioner, so they could cause poor performance. However, they move a lot every time the front jumps, and there is no issue in the simulation until far into the steady state.
[figure]

Diagnosing CVODE shows a spike in the number of fails and a drop in the linear-to-nonlinear iteration ratio. This (I think) means that nonlinear iterations keep failing, which makes CVODE reduce the timestep to extremely low levels, resulting in the very long iteration. The order drops to 1, so this isn't connected with CVODE's sporadic hogging of higher orders at low timesteps. I am not sure if there's anything here that would warrant a change in solver settings.

[figure]

I have done a test where I restarted the simulation from just before the spike, to see if the issue is caused by the CVODE algorithm tuning its settings for the long quiescent period and then being "surprised" by the spike (thanks for the idea @bendudson). Unfortunately the spike happens in the same way as before. The next step is to significantly reduce the timestep and rerun to get a better handle on which dynamics are connected with the spike (the current output timestep is 5 ms).

Impurity transport: AMJUEL He and ADAS C, Ne reactions don't register collision frequencies

A species' collision frequency is used to calculate its conductivity, viscosity and (if it's a neutral) diffusive transport. These come from collisions.cxx and from atomic reactions of ionisation (IZ) and charge exchange (CX). In hydrogenic plasmas CX and IZ dominate neutral collisionality and are required for neutral transport to work well.

It looks like He as well as the ADAS reactions for C and Ne don't register their IZ or CX collision frequencies. For example, I would expect ADAS charge exchange to save its collision frequency in this block:

// from_A -> to_A
{
  // Particles
  subtract(from_A["density_source"], reaction_rate);
  add(to_A["density_source"], reaction_rate);

  // Momentum
  const Field3D momentum_exchange =
      reaction_rate * get<BoutReal>(from_A["AA"]) * get<Field3D>(from_A["velocity"]);
  subtract(from_A["momentum_source"], momentum_exchange);
  add(to_A["momentum_source"], momentum_exchange);

  // Energy
  const Field3D energy_exchange =
      reaction_rate * (3. / 2) * GET_VALUE(Field3D, from_A["temperature"]);
  subtract(from_A["energy_source"], energy_exchange);
  add(to_A["energy_source"], energy_exchange);

Is this on purpose because most of these impurity species are ionised and their collisionality is dominated by ei or ii collisions?
I am concerned that while this could be fine for the ions, the neutral species could be seeing extremely low collision frequencies, which would lead to an over-prediction of diffusive neutral transport. This would be particularly impactful in 1D where the diffusive transport is still without a flux limiter.

I wonder whether resolving this could alleviate the performance issues when running impurity transport.
I am currently working on PR #195 reworking the way the frequencies are saved into the state, so it's an opportunity to get this resolved following discussion @bendudson.
