svalinn / r2s-act
Rigorous 2 Step Activation Workflow
In addition to its primary duties, the write_gammas.py script calculates the total photon source strength of a problem, and tags this to the mesh. This feature should be preserved in the case where MCNP reads the MOAB mesh directly.
Some of the tests (e.g. those for write_gammas.py) use input files, generate output, and compare that output with saved expected output. In my case I have a script that regenerates these 'reference files' as needed (in this case, I added the headers to the gammas file, and need to update the test files*). The files/scripts for regenerating the reference files live locally in my cae-cnerg space.
My current working idea is to put these inputs/scripts/etc in a separate branch on Github with suitable readme instructions.
Similar to documentation with gh-pages, a script could copy stuff between branches. However, it might be better not to do this - to make copying the updated reference files a manual process. This would put conscious thought into the process, which I think is particularly important when updating the reference files.
Any objections/other ideas?
*: Updating the reference files is determined to be safe because separate manual testing indicates that the modified source_gammas.F90 correctly handles the header in gammas.
Edited to clarify confusing statement noted by @elliottbiondo
The gammas file currently serves in part to toggle options such as the sampling approach to use, void rejection approach, and debugging. The other purpose of this file is to store photon source information, but this is being removed in favor of reading the MOAB mesh directly. The gammas file can probably be removed completely, in favor of the mesh.
However, the parameters on the mesh must be easy to check and easy to change. A tool for this probably doesn't exist currently. A possible form of such a tool would be like the install script that is included with DAGMCNP.
Now that ADVANTG is available under a beta release, we should modify it to use CAD-generated mesh for the Denovo step.
A number of issues derive from this:
#2
I am currently using the Fortran iTaps API to do this. All hurdles in implementing this have been taken care of, and I am now working on smaller implementation details and testing.
Code for tets/hexahedra:
The iTaps API is being implemented in source.F90. Currently I preserve the gammas file approach in source_gamma.F90, and am working within source_moab.F90. There are three phases to this:
Compiling source_moab.F90 independently from MCNP:
gfortran [source.F90] [object.o files] -fcray-pointer -I/filespace/groups/cnerg/opt/MOAB/opt-cubit-c12/include -lstdc++ -liMesh -L/filespace/groups/cnerg/opt/MOAB/opt-nocgm/lib -lMOAB -lhdf5 -lnetcdf -g
source_moab.F90 includes iMesh_f.h multiple times. Currently, the write_gammas.py script calculates and applies a normalization factor to the source information from phtn_src (which is first tagged to the mesh, e.g. for visualization). With the implementation of reading the MOAB mesh directly into source.F90, this code needs to be adapted from Python to Fortran, and then verified.
It seems that when mats2ALARA.py encounters a material of the form

M1   1001.21c  0.3
     2004.21c  0.2
c    3006.21c  0.3
     2005.21c  0.5

the script fails to read the entries after the comment line. Dangerous!
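A possible fix is to skip comment lines while continuing to read the rest of the card. This is a hypothetical sketch, not the actual mats2ALARA.py code; the function name and card handling are assumptions:

```python
# Hypothetical sketch: when reading an MCNP material card, comment lines
# (starting with 'c') must be skipped without terminating the card, so
# entries appearing after them are still read.
def read_material_card(lines):
    """Return (zaid, fraction) pairs from an M-card, ignoring 'c' comment lines."""
    entries = []
    for line in lines:
        stripped = line.strip().lower()
        if stripped == 'c' or stripped.startswith('c '):
            continue  # comment line inside the card -- keep reading
        tokens = line.split()
        if tokens and tokens[0].lower().startswith('m'):
            tokens = tokens[1:]  # drop the 'M1' card name
        for zaid, frac in zip(tokens[::2], tokens[1::2]):
            entries.append((zaid, float(frac)))
    return entries

card = ["M1   1001.21c  0.3",
        "     2004.21c  0.2",
        "c    3006.21c  0.3",
        "     2005.21c  0.5"]
```

On the card above, this would return the three non-commented entries, including 2005.21c after the comment line.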
Looking at the existing ADVANTG code, identify how the material information is defined on the grid. Suggest modifications/extensions to the current CNERG methodologies to accommodate this for CAD-based grids.
Supports #1
Design & implement modifications to PyNE to handle materials in a way that is suitable for our R2S workflow.
My experience over the past few months has been that I (almost) always want to set up more than one photon transport problem (e.g. looking at the dose after several different cooling times). This involves the setup of multiple directories - e.g. modifying and copying r2s.cfg files, copying photon inputs, running r2s_step2.py several times. At some point, automating this would be logical.
To do this, I am thinking of an optional script akin to r2s_step1/2.py. This script would be optionally run by r2s_step1.py, or could be run separately*. The script will take (minimal) input from the r2s.cfg file.
For now I'll go with the script name r2s_step2setup.py. (Not a great name, naming is the hardest problem in computer science, right?)
Anticipated features of r2s_step2setup.py:
Anticipated modifications to the r2s.cfg file:
Anticipated modifications to r2s_step1.py:
Use case: user starts off with some cooling steps of interest, and later wants to investigate specific isotopes.
*: This has to be done sequentially due to accessing data in the .h5m, I think.
I pulled the newest code for TAE and r2s_step1 failed with insufficient arguments for read_meshtal.
We need tests to catch things like this.
Please correct me if this is already included, but it's quite painful to have to pull out the materials from the .sat/.h5m geometry file into the matlib file. Can we not populate mat_rho- by querying the .sat/.h5m (I would prefer *.h5m) directly?
Void rejection has the potential to be inefficient if a problem has numerous voxels that are mostly void. Resampling within such voxels would generally require many tries before hitting a non-void location.
Thus, the void rejection capability probably ought to be optional, barring a workaround for 'poorly formed' voxels. This will apply to both the uniform and voxel sampling techniques.
Modify R2S workflow for unstructured mesh capability. This will greatly simplify the material definition part of the workflow, at the expense of requiring every region to be meshed with a tet-mesh.
This may require some refactoring of read_meshtal.py.
A script should be written to read in WWINP files and write out a structured mesh tagged with weight windows.
This will be useful when presenting results on weight windows used for any given problem.
I have spoken to Ahmad and Stuart about this and it does not sound like an open source version of this exists.
Derived from #3. Since the workflow is functional, the following can be classified as a set of related enhancements to the workflow.
mmGridGen/mmgrid.py #38
Additions to source.F90 to read MOAB mesh directly:
Reading of r2s.cfg by source.F90
Compiling DAGMCNP with iMesh API calls in Fortran #44, svalinn/DAGMC#48
-fcray-pointer -I/filespace/groups/cnerg/opt/MOAB/opt-cubit-c12/include -liMesh -L/filespace/groups/cnerg/opt/MOAB/opt-nocgm/lib -lstdc++ -lMOAB -lhdf5 -lnetcdf
Need inspiration for the best way to do this in Fortran+MCNP.
This is needed for both ALARA, and for calculating the source normalization factor prior to photon sampling.
For arbitrary tetrahedra, I am currently using the method from Wikipedia: http://en.wikipedia.org/wiki/Tetrahedron#Volume. One open question is whether the abs() function is unnecessary, depending on the vertex ordering that MOAB uses.
For arbitrary hexahedra, the method described by Grandy in 'Efficient Computation of Volume of Hexahedrons' (also in Zotero) could be used, with adjustments made for the different vertex ordering used in MOAB. (This is for hexahedra with non-planar facets.)
MOAB/mbsize already do similar volume calculations, so their methods should be investigated as well.
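For reference, the tetrahedron method mentioned above is just the scalar triple product, V = |det(b-a, c-a, d-a)| / 6. A minimal sketch:

```python
# Tetrahedron volume via the scalar triple product; abs() guards against
# vertex orderings that give a negative determinant.
import numpy as np

def tet_volume(a, b, c, d):
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    return abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0
```

For the unit right tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1) this gives 1/6.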
As a precursor to running ALARA in the unstructured mesh workflow, the material fractions of each tetrahedron are needed.
Currently:
Am I correct that uniformly sampling a tet with rays is an open research question? Maybe something for an NE506 project? But might also need to be done sooner than that...
Derived from #3.
In the case where the workflow is being done with native MCNP geometry and an unstructured mesh tally, r2s_step1.py does not tag the mesh with materials, and subsequently throws an error.
When refactoring r2s_step1, the unofficial test cases used were (1) an unstructured mesh with a DAGMC model tagged in Cubit, and (2) a structured mesh with materials via mmgrid.py. Thus the above case was missed.
This should be tested for, as well.
r2s_step2setup.py should either write a script, or there should be another python script r2s_step2_batch.py (or whatever), which performs the r2s_step2.py calculation in each cooling step subdirectory.
In the ideal world I would like to see some summary of the total photon source rate for each cooling step at the end of this processing step.
The r2s_step1.py and r2s_step2.py scripts in particular need tests.
Some of the other scripts at the top level (r2s_setup.py and r2s_step2setup.py) also need tests.
I was looking at r2s_step2setup.py with this in mind, and ended up refactoring it into a handful of methods so that it's more testable. The same would probably be appropriate for the other scripts.
Create a sufficient initial set of documentation for the R2S workflow. Ideally, a beginner should be able to get the R2S workflow working, even if it takes a lot of finicky steps.
I have been taking the approach of supporting mixed tet/hex meshes (i.e. for #3). After not identifying any methods for sampling general hexahedra, I believe the best approach is therefore to assume any given hexahedron is a parallelepiped (and probably a right angle-only parallelepiped...).
Sampling a parallelepiped is a straightforward algorithm. For points a through h, with MOAB's vertex ordering:
! Get 3 edge vectors that begin at point 'a'
v1 = rand() * (b - a)
v2 = rand() * (e - a)
v3 = rand() * (d - a)
! Sample random point within the voxel
xxx = a(1) + v1(1) + v2(1) + v3(1)
yyy = a(2) + v1(2) + v2(2) + v3(2)
zzz = a(3) + v1(3) + v2(3) + v3(3)
Where a(1)=a_x, a(2)=a_y, etc.
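The same sampling written as runnable Python, assuming (as in the sketch above) that b, e, and d are the three vertices adjacent to a in MOAB's ordering:

```python
# Runnable version of the Fortran sketch above. Uniform only if the
# hexahedron really is a parallelepiped, per the stated assumption.
import random

def sample_parallelepiped(a, b, e, d):
    """Sample a uniform random point in the parallelepiped spanned at a."""
    r1, r2, r3 = random.random(), random.random(), random.random()
    return tuple(a[i] + r1 * (b[i] - a[i])
                      + r2 * (e[i] - a[i])
                      + r3 * (d[i] - a[i]) for i in range(3))
```

Each call draws three independent random numbers, one per edge vector, exactly as in the pseudocode.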
Add void to pseudovoid .... resolved
In addition to knowing the material, it can be valuable to know which cells are in each voxel.
This includes design decisions such as:
Make the following changes:
The current version of mats2ALARA assumes that all materials are defined in MCNP with a single isotope per line. MCNP allows more flexibility, and we should probably support it. The addition of an MCNP material reader to PyNE may supersede this issue.
The current implementation of alias table creation in source_gamma_refactor.F90 exhibits O(n^2) behavior. In theory, this should only be O(n). Testing indicates the culprit is the subroutine sort_for_alias_table, which uses >90% of the CPU time. This subroutine gets called n times, where n is the number of discrete bins from which we are creating the alias table.
The subroutine is reproduced here; time usage by the two parts of this method is noted with comments:
subroutine sort_for_alias_table(bins, length)
  ! Subroutine locates where to move the last bin in bins to,
  ! such that bins is presumably completely sorted again.
  use mcnp_global

  integer,intent(IN) :: length
  real(dknd),intent(INOUT),dimension(1:length,1:2) :: bins

  ! Method's variables
  integer :: cnt, i
  real(dknd),dimension(1,1:2) :: temp

  if (bins(length,1).lt.bins(length-1,1)) then

    !!!! This do loop is ~1/3 of the CPU time used
    ! The logic in this do loop may be problematic at
    ! cnt = length or cnt = 1...
    do cnt=length,1,-1
      if (bins(length,1).ge.bins(cnt-1,1)) exit
    enddo
    ! found bin

    !!!! The array shuffling is ~2/3 of the CPU time used
    temp(1,1:2) = bins(length,1:2)
    bins(cnt+1:length,1:2) = bins(cnt:length-1,1:2)
    bins(cnt,1:2) = temp(1,1:2)

  else
    continue
  endif

end subroutine sort_for_alias_table
I suspect this may be due to poor use of arrays, as suggested by http://jblevins.org/log/efficient-code - in particular, how Fortran stores 2D arrays. I'll experiment with modifying the arrays within the alias table creation code at some point... Converting the entire source.F90 would be a pain, though.
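For comparison, an O(n) construction is possible without any sorting at all, e.g. Walker/Vose's alias method. A Python sketch (not the source_gamma_refactor.F90 code; 'probs' is the normalized list of bin probabilities):

```python
# Hedged sketch of O(n) alias table construction (Vose's method), as an
# alternative to the repeated insertion-sort approach discussed above.
def build_alias_table(probs):
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob = [0.0] * n   # acceptance probability per bin
    alias = [0] * n    # fallback bin per bin
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] -= 1.0 - scaled[s]   # donate mass from the large bin
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:            # leftovers are 1.0 up to round-off
        prob[i] = 1.0
    return prob, alias
```

To sample: draw bin i uniformly in [0, n), return i with probability prob[i], else return alias[i]. Each draw is then O(1).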
We really need the facility to collapse the inventory data at the end of the calculation into a single file: for example, take the voxel-wise data and map it back into a file, listing the inventory by region.
Currently, uniform sampling does a linear search in each dimension (xyz) to locate which mesh boundaries a sampled point is within. This can be sped up using a binary search implemented in source.F90.
Just to note an observation, the linear search method, assuming the number of indices in each dimension is roughly equal, is O(3·n^(1/3)), where n is the number of voxels. The binary search method will be O(3·log(n^(1/3))). (The 3· is typically ignored of course...) This improvement will be much smaller than the O(n) to O(1) result of implementing alias sampling.
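A sketch of the boundary lookup with a binary search, shown here in Python via bisect; a Fortran version in source.F90 would implement the same halving loop by hand:

```python
# Locate which interval of a sorted boundary list contains x in O(log n),
# replacing the linear scan described above.
from bisect import bisect_right

def locate(boundaries, x):
    """Return index i such that boundaries[i] <= x < boundaries[i+1]."""
    i = bisect_right(boundaries, x) - 1
    if i < 0 or i >= len(boundaries) - 1:
        raise ValueError("point outside mesh")
    return i
```

Applied once per dimension, this turns the per-sample cost from O(n^(1/3)) into O(log(n^(1/3))).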
The repository needs to be cleaned up so that it is organized in a logical way for outside users and uncluttered by a history of experimental files. There are 2 possible approaches:
The current R2S suite does not include any scripts for easily getting values off the mesh. Mesh points may not conform well to the problem geometry in all cases, so at times users may want a value from the mesh at some point between multiple mesh points. A script should be written to perform 3D linear/logarithmic interpolation on MOAB meshes and print out the interpolated value and propagated uncertainty.
As a first draft I will commit a script that prints out values/errors for a specified voxel, which probably covers a lot of use cases.
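As a building block for such a script, 1-D linear interpolation with first-order propagation of (assumed independent) absolute uncertainties might look like the sketch below; applying it successively in x, y, and z gives trilinear interpolation:

```python
# Illustrative sketch, not the proposed script itself: interpolate between
# two mesh values v0, v1 (with uncertainties e0, e1) at position x.
import math

def lerp_with_error(x0, v0, e0, x1, v1, e1, x):
    t = (x - x0) / (x1 - x0)
    value = (1.0 - t) * v0 + t * v1
    # First-order propagation, assuming e0 and e1 are independent.
    error = math.sqrt(((1.0 - t) * e0) ** 2 + (t * e1) ** 2)
    return value, error
```

The logarithmic variant would interpolate log(v) instead and propagate relative uncertainties; the independence assumption should be checked for tally data.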
Void rejection can currently be done by resampling within the selected voxel until a non-void point is chosen. This is prone to low efficiency with voxels containing a sliver of non-void.
(See previous notes in #13, #19, #20)
Relevant ideas/thoughts from @gonuke:
What could be most valuable, is to know which DAGMCNP cells are in each voxel. For the source sampling phase, this may be more important than the material volume fractions since it can limit the number of very expensive point-in-volume tests you need to do.
Another little implementation detail is that it might be best to check the implicit complement last, as it is often the most complicated, and often void. It might be possible to check all the non-implicit-complement volumes first and then deem the point to be in the implicit complement if it is not in the others, never actually testing the implicit complement.
and
For example, if we measure the rejection efficiency to be too low, instead of sampling uniformly in 3D, we could sample uniformly in 2D on one voxel face, and fire a ray to measure the non-void length(s) in the voxel, and sample within the non-void length of that ray. (hmmm - that's not quite a fair game.... but something like that.)
It seems as though mcnp_n2p is not working correctly for DAGMC input files. I notice that you only test mcnp_n2p on a full native MCNP file. There should be a test for a DAG-MCNP file.
Inputs:
Outputs:
Synopsis
The program will read all of the groups off an h5m geometry file. The user will then assign material definitions to each group. The actual definitions will be found in a separate material library in some canonical format. When assigning these materials the user will be able to:
The user can then output definitions in some format that can be read-in by other codes.
At first, this program will get user input from an input file. Eventually this program may also have a GUI. The GUI should write "journal files" similar to Cubit, which contain valid input text.
Most of the material handling will be done using PyNE functions.
I have added support* for comment lines (using #) at the top of the 'gammas' file, but have not implemented automatically creating this comment yet. The basis for doing this is to supply a bit of extra information for when someone looks at the gammas file at some later date (Andy's idea, back in September, I think). The contents I have in mind for this comment/header line are:
The problem title would be analogous to MCNP's title card, and would be specified as a new parameter in the r2s.cfg file.
Is the problem title unnecessary? Do we want this concept added to other files in the workflow? (e.g. alara_input, phtn_src)
*support, as in, source.F90 simply skips over these lines.
Initially we need to understand how this information exists within ADVANTG.
In theory, we just need the bias value for each voxel, stored in the 'PHTN_BIAS' tag, and our modified MCNP will pull the necessary bias values for photon transport.
A common use case is to rerun the R2S methodology starting with neutronics but without changing the geometry/materials. Therefore it is useful to be able to repeat the workflow without doing the ray firing for the mmGridGen step.
Previously I had thought that void fraction information for each voxel would be needed by MCNP in order to do void rejection with uniform sampling for photon transport... On pondering this, I believe, and think I have shown, that void rejection is easier to implement than that.
If MCNP finds that a particle starts in a void, we can resample within the same voxel (rather than resample entire problem) until a valid position is found.
I'm almost certain this is correct, and easily compatible with the current implementation (i.e. the 'gammas' file format for uniform sampling), but need to ponder/discuss, implement and test this. This is basically the same solution that I applied to void rejection when voxel sampling a few weeks ago.
The structured/unstructured workflows will probably coexist as options
When we pass a meshtally file to r2s-act that has only 1 energy group (or none), we get the error 'could not find header in the first 100 lines'. I suggest we add a check when reading: look for the energy bin line and deduce the number of groups from it. If this is less than 175 (or another predetermined limit), we should provide a more useful error message.
Relevant existing code:
r2s/io/write_alara.py
meshtools/tetmesh2points/tet_vals_and_centroids.py
The uniform sampling approach described in Damien's Master's thesis is incorrect.
Example: sampling a square that is 4x4 (x:[0,4]; y:[0,4]), with 3 points in each direction.
You get points at 1, 2, 3 in each dimension (with the incorrect algorithm).
Each point 'corresponds' to a 1x1 square about it.
The result is that the squares only cover the extents 0.5 to 3.5.
Essentially, the edges of the voxels are not being sampled.
The current implementation, via the _linspace() method in mmgrid.py, also uses this erroneous algorithm. (Though by default, mmgrid.py uses random point sampling.)
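A small numeric check of the example above, contrasting the erroneous interior-point scheme with cell-center placement that covers the full extent. This is a sketch for illustration; neither function is the mmgrid.py implementation:

```python
# The incorrect scheme: n interior points with (n+1)-spacing, which leaves
# half a cell at each edge of the voxel unsampled.
def interior_points(lo, hi, n):
    step = (hi - lo) / (n + 1)
    return [lo + step * (i + 1) for i in range(n)]

# Cell-center placement: n cells of equal width tile [lo, hi] completely.
def cell_centers(lo, hi, n):
    width = (hi - lo) / n
    return [lo + width * (i + 0.5) for i in range(n)]
```

For the 4x4 example with 3 points, interior_points gives 1, 2, 3 (covering only 0.5 to 3.5 with unit squares), while cell_centers gives points whose cells of width 4/3 cover the full 0 to 4 extent.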
Todo:
Currently, the custom source.F90 sampling routine can reject particles sampled in void (use case: partially void voxels; used via the 'm' option in the gammas file's parameter line), though this is untested. In order to support this, the normalization done when creating the gammas file needs to be updated. In particular, the correct normalization requires knowing the fraction of void volume in each voxel. Thus:
This may be rendered unnecessary by #3. Though, perhaps we also aim to support structured mesh workflows alongside unstructured mesh workflows.