
r2s-act's Issues

Tag total photon source strength to mesh (redo in source.F90)

In addition to its primary duties, the write_gammas.py script calculates the total photon source strength of a problem, and tags this to the mesh. This feature should be preserved in the case where MCNP reads the MOAB mesh directly.

  • Tag and save value to mesh

Unit tests' comparison file generation

Some of the tests (e.g. those for write_gammas.py) use input files, generate output, and compare that output with saved expected output. In my case I have a script that regenerates these 'reference files' as needed (in this case, I added the headers to the gammas file, and need to update the test files). The files/scripts for regenerating the reference files live locally in my cae-cnerg space.

My current working idea is to put these inputs/scripts/etc in a separate branch on Github with suitable readme instructions.

Similar to documentation with gh-pages, a script could copy stuff between branches. However, it might be better not to do this - to make copying the updated reference files a manual process. This would put conscious thought into the process, which I think is particularly important when updating the reference files.

Any objections/other ideas?

*: Updating the reference files is deemed safe because separate manual testing indicates that the modifications to source_gammas.F90 correctly handle the header in gammas.

Edited to clarify confusing statement noted by @elliottbiondo

Photon Source Sampling: Parameters when not using 'gammas' approach

The gammas file currently serves in part to toggle options such as the sampling approach to use, void rejection approach, and debugging. The other purpose of this file is to store photon source information, but this is being removed in favor of reading the MOAB mesh directly. The 'gammas' file can probably be removed completely, in favor of the mesh.

However, the parameters on the mesh must be easy to check and easy to change. A tool for this probably doesn't exist currently; it might take a form similar to the install script included with DAGMCNP.

Adapt ADVANTG for DAGMC/CAD-based geometry

Now that ADVANTG is available under a beta release, we should modify it to use CAD-generated mesh for the Denovo step.

A number of issues will derive from this:
#2

Read MOAB mesh directly with source.F90

I am currently using the Fortran iTaps API to do this. All hurdles in implementing it have been taken care of, and I am now working on smaller implementation details and testing.

Code for tets/hexahedra:

  • TBD: how much of the code can be the same for both voxel types
  • Arbitrary hexahedra probably supported
  • Will support meshes that mix tets/hexs (though a use case doesn't really exist yet?)

Compiling with iTaps API in Fortran

The iTaps API is being implemented in source.F90. Currently I preserve the gammas file approach in source_gamma.F90, and am working within source_moab.F90. There are three phases to this:

  • Compile source_moab.F90 independently from MCNP
    • This uses the following gfortran call, with several libraries
gfortran [source.F90] [object.o files] -fcray-pointer -I/filespace/groups/cnerg/opt/MOAB/opt-cubit-c12/include -lstdc++ -liMesh -L/filespace/groups/cnerg/opt/MOAB/opt-nocgm/lib -lMOAB -lhdf5 -lnetcdf -g
  • Compile test code (unit tests) along with source_moab.F90
    • The main challenge here was to avoid multiple definitions arising from including iMesh_f.h multiple times.
  • Compile DAGMCNP with iTaps API correctly included/referenced
    • Will require some additions to (and understanding of) current process of building DAGMCNP

Re-implement Photon Source Normalization Factor in source.F90

Currently, the write_gammas.py script calculates and applies a normalization factor to the source information from phtn_src (which is first tagged to the mesh, e.g. for visualization). With the implementation of reading the MOAB mesh directly into source.F90, this code needs to be adapted from Python to Fortran, and then verified.
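As a reference for the port, here is a hedged Python sketch of the normalization arithmetic (not the actual write_gammas.py logic; the function name, units, and pdf convention are assumptions): the total photon source strength is the density-times-volume sum over voxels, and the sampling PDF is proportional to each voxel's contribution.

```python
# Hedged sketch: compute total photon source strength and a normalized
# voxel-sampling PDF from per-voxel source densities (photons/cm^3/s)
# and voxel volumes (cm^3). A Fortran port would mirror this arithmetic.

def normalize_source(densities, volumes):
    """Return (total_strength, pdf) where pdf[i] is the probability of
    sampling voxel i, proportional to density * volume."""
    strengths = [d * v for d, v in zip(densities, volumes)]
    total = sum(strengths)
    if total <= 0.0:
        raise ValueError("no photon source in mesh")
    return total, [s / total for s in strengths]
```

Verifying the Fortran version against a small hand-computed case like this would be a reasonable first test.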

mats2ALARA.py bug

It seems that when mats2ALARA.py encounters a material of the form

M1 1001.21c 0.3
2004.21c 0.2
c 3006.21c 0.3
2005.21c 0.5

The script fails to read the entries after the comment line. Dangerous!
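A hedged sketch of the fix (the parser below is a simplified stand-in, not the actual mats2ALARA.py code): skip 'c' comment lines, but keep consuming the continuation lines that follow them instead of stopping at the first comment.

```python
# Hypothetical, simplified M-card parser that tolerates interleaved
# MCNP comment lines (lines beginning with 'c') within a material card.

def parse_material_card(lines):
    """Return (zaid, fraction) pairs from an MCNP M-card block."""
    entries = []
    for line in lines:
        stripped = line.strip()
        if stripped.lower() == 'c' or stripped.lower().startswith('c '):
            continue  # comment line: skip it, but keep reading entries
        tokens = stripped.split()
        if tokens and tokens[0].lower().startswith('m'):
            tokens = tokens[1:]  # drop the 'M1' card name
        for zaid, frac in zip(tokens[::2], tokens[1::2]):
            entries.append((zaid, float(frac)))
    return entries

card = ["M1 1001.21c 0.3", "2004.21c 0.2", "c 3006.21c 0.3", "2005.21c 0.5"]
```

With the card above, all three non-commented entries should be returned, including the 2005.21c entry after the comment.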

Identify ADVANTG input for material grid info

Looking at the existing ADVANTG code, identify how the material information is defined on the grid. Suggest modifications/extensions to the current CNERG methodologies to accommodate this for CAD-based grids.

Supports #1

Script for setup of directories/files for multiple 'step 2' calculations

My experience over the past few months has been that I (almost) always want to set up more than one photon transport problem (e.g. looking at the dose after several different cooling times). This involves the setup of multiple directories - e.g. modifying and copying r2s.cfg files, copying photon inputs, running r2s_step2.py several times. At some point, automating this would be logical.

To do this, I am thinking of an optional script akin to r2s_step1/2.py. This script would be optionally run by r2s_step1.py, or could be run separately*. Script will take (minimal) input from the r2s.cfg file.

For now I'll go with the script name r2s_step2setup.py. (Not a great name, naming is the hardest problem in computer science, right?)

Anticipated features of r2s_step2setup.py:

  • Create directories for each case with proposed naming isotope___cooling_time, e.g. mn-56_1_d or TOTAL_5_h
  • Copy r2s.cfg to each new directory, and modify each to point to files (e.g. phtn_src, .h5m files) to parent directory, as well as set correct isotope/cooling time
  • Copy mcnp photon transport input to each directory (Modify title card?)
  • Create shell script in parent directory to run all r2s_step2.py cases.**

Anticipated modifications to the r2s.cfg file:

  • Option for running this script in step 1 (default false)
  • Need to list multiple cooling steps and/or isotopes. Need to investigate supplying lists to the ConfigParser module.

Anticipated modifications to r2s_step1.py:

  • After all other steps, check r2s.cfg and run r2s_step2setup.py if requested

*: Example case: a user starts off with some cooling steps of interest, and later wants to investigate specific isotopes.
**: This has to be done sequentially due to accessing data in .h5m, I think.
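The directory naming convention above could be sketched as follows (a minimal sketch; the exact separator between isotope and cooling time is assumed from the mn-56_1_d and TOTAL_5_h examples, and editing the copied r2s.cfg is left out):

```python
# Hedged sketch of the proposed r2s_step2setup.py directory creation.
import os
import shutil

def case_dirname(isotope, cooling_time):
    """Build a directory name like 'mn-56_1_d' or 'TOTAL_5_h'."""
    return "{0}_{1}".format(isotope, cooling_time.replace(" ", "_"))

def setup_cases(parent, isotopes, cooling_times, cfg="r2s.cfg"):
    """Create one subdirectory per (isotope, cooling time) pair and
    copy r2s.cfg into each; modifying the copies is omitted here."""
    dirs = []
    for iso in isotopes:
        for t in cooling_times:
            d = os.path.join(parent, case_dirname(iso, t))
            if not os.path.isdir(d):
                os.makedirs(d)
            shutil.copy(os.path.join(parent, cfg), os.path.join(d, cfg))
            dirs.append(d)
    return dirs
```

The copied r2s.cfg files would still need their phtn_src/.h5m paths rewritten to point at the parent directory.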

Improvement to matlib creation

Please correct me if this is already included, but it's quite painful to have to pull the materials out of the .sat/.h5m geometry file into the matlib file. Can we not populate mat_rho- by querying the .sat/.h5m (I would prefer *.h5m) directly?

Make void rejection optional for photon transport

Void rejection has the potential to be inefficient if a problem has numerous voxels that are mostly void. Resampling within such voxels would generally require many tries before hitting a non-void location.

Thus, the void rejection capability probably ought to be optional, in lieu of a workaround for 'poorly formed' voxels. This will apply to both the uniform and voxel sampling techniques.

Edit: See also #13, #19.

Add unstructured mesh capability for R2S workflow

Modify R2S workflow for unstructured mesh capability. This will greatly simplify the material definition part of the workflow, at the expense of requiring every region to be meshed with a tet-mesh.

Create WWINP to MOAB script

A script should be written to read in WWINP files and write out a structured mesh tagged with weight windows.

This will be useful when presenting results on weight windows used for any given problem.

I have spoken to Ahmad and Stuart about this and it does not sound like an open source version of this exists.

Expand/enhance unstructured mesh capability

Derived from #3. Since the workflow is functional, the following can be classified as a set of related enhancements to the workflow.

mmGridGen/mmgrid.py #38

  • Ray tracing of tet mesh for material fractions
  • Combine mmgrid.py and tag_vox_center_mat.py into a single script for both structured and unstructured meshes

Additions to source.F90 to read MOAB mesh directly:

  • Enable parsing of r2s.cfg by source.F90
    • Arbitrary input mesh names #41 (currently we default to source.h5m)
    • Replace passing parameters via gammas with using tags instead #45

Compiling DAGMCNP with iMesh API calls in Fortran #44, svalinn/DAGMC#48

  • Understand/implement changes to Makefile
    • gfortran call definitely requires adding -fcray-pointer -I/filespace/groups/cnerg/opt/MOAB/opt-cubit-c12/include -liMesh -L/filespace/groups/cnerg/opt/MOAB/opt-nocgm/lib
    • may also require -lstdc++ -lMOAB -lhdf5 -lnetcdf

Calculate volumes for arbitrary tetrahedra and hexahedra

This is needed for both ALARA, and for calculating the source normalization factor prior to photon sampling.

  • Implement for ALARA (R2S python script)
  • Implement for MCNP (Fortran)

For arbitrary tetrahedra, I am currently using the method from Wikipedia: http://en.wikipedia.org/wiki/Tetrahedron#Volume. One note is that the abs() function may be unnecessary, depending on the vertex ordering that MOAB uses?
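The determinant formula from that page can be sketched in Python as a reference for both implementations:

```python
# Tetrahedron volume from its four vertices:
# V = |det(b - a, c - a, d - a)| / 6.

def tet_volume(a, b, c, d):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    # abs() guards against vertex orderings that make det negative;
    # if MOAB guarantees a consistent ordering it may be unnecessary.
    return abs(det) / 6.0
```

For the unit tetrahedron with vertices at the origin and the three unit axis points, this gives 1/6, a convenient sanity check for the Fortran port.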

For arbitrary hexahedrons, the method described by Grandy in 'Efficient Computation of Volume of Hexahedrons' (also in Zotero) could be used, with adjustments made for the different vertex ordering used in MOAB. (this is for hexahedra with non-planar facets)

MOAB/mbsize already do similar volume calculations, so their methods should be investigated as well.

Ray-tracing of Unstructured Meshes for Material Fractions

As a precursor to running ALARA in the unstructured mesh workflow, the material fractions of each tetrahedron are needed.

Currently:

  • mmGridGen/mmgrid.py do this ray-tracing for structured meshes
  • Ray-tracing on unstructured meshes is done by DAGMC.

Am I correct that uniformly sampling a tet with rays is an open research question? Maybe something for an NE506 project? But it might also need to be done sooner than that...

Derived from #3.

Step 1 failure due to not tagging materials

In the case where the workflow is being done with native MCNP geometry and an unstructured mesh tally, r2s_step1.py does not tag the mesh with materials, and subsequently throws an error.

When refactoring r2s_step1, the unofficial test cases used (1) an unstructured mesh with a DAGMC model tagged in Cubit, and (2) a structured mesh with materials via mmgrid.py; thus the above case was missed.

This should be tested for, as well.

r2s_step2setup.py improvements

r2s_step2setup.py should either write a script, or there should be another Python script (r2s_step2_batch.py or whatever) that performs the r2s_step2.py calculation in each cooling step subdirectory.

In the ideal world I would like to see some summary of the total photon source rate for each cooling step at the end of this processing step.

Create tests for top level r2s scripts (and refactor as needed)

The r2s_step1.py and r2s_step2.py scripts in particular need tests.

Some of the other scripts at the top level (r2s_setup.py and r2s_step2setup.py) also need tests.

I was looking at r2s_step2setup.py with this in mind, and ended up refactoring it into a handful of methods so that it's more testable. The same would probably be appropriate for the other scripts.

  • r2s_setup.py
  • r2s_step1.py
  • r2s_step2.py
  • r2s_step2setup.py

First draft of R2S documentation

Create a sufficient initial set of documentation for the R2S workflow. Ideally, a beginner should be able to get the R2S workflow working, even if it takes a lot of finicky steps.

Sampling of hexahedra (especially in mixed hex/tet mesh)

I have been taking the approach of supporting mixed tet/hex meshes (i.e. for #3). After not identifying any methods for sampling general hexahedra, I believe the best approach is therefore to assume any given hexahedron is a parallelepiped (and probably a right angle-only parallelepiped...).

Sampling a parallelepiped is a straightforward algorithm. For points a through h, with MOAB's vertex ordering:

        ! Get 3 edge vectors that begin at point 'a'
        v1 = rand() * (b - a)
        v2 = rand() * (e - a)
        v3 = rand() * (d - a)

        ! Sample random point within the voxel
        xxx = a(1) + v1(1) + v2(1) + v3(1)
        yyy = a(2) + v1(2) + v2(2) + v3(2)
        zzz = a(3) + v1(3) + v2(3) + v3(3)

Where a(1)=a_x, a(2)=a_y, etc.
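A runnable Python version of the same sampling, for reference (the choice of b, e, d as the three edge endpoints from a follows the pseudocode above and is an assumption about MOAB's hex vertex ordering):

```python
# Sample a point uniformly inside a parallelepiped with one corner at
# 'a' and edge vectors (b - a), (e - a), (d - a).
import random

def sample_parallelepiped(a, b, d, e, rng=random.random):
    r1, r2, r3 = rng(), rng(), rng()
    return tuple(a[i]
                 + r1 * (b[i] - a[i])
                 + r2 * (e[i] - a[i])
                 + r3 * (d[i] - a[i])
                 for i in range(3))
```

Because each edge coefficient is an independent uniform deviate, the sampled points are uniform over the parallelepiped; this only holds for parallelepipeds, not general hexahedra.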

Implement MCNP meshtal reader in PyNE

This includes design decisions such as:

  • what internal mesh representation to use (MOAB?)
  • whether to "stream" the meshtal or store the whole data file
  • which portions to write in a low-level language (C++?) for performance

MAGIC Fixes

Make the following changes:

  • add a "-e" flag to allow user specification of minimum error value to overwrite existing WW file.
  • allow null value to be applied at any iteration, not just the first.
  • improve doc strings.
  • add documentation.

Develop plan for R2S integration with PyNE

Some of the R2S workflow capabilities are suitable for integration into PyNE as individual components. This should be planned carefully prior to implementation.

This includes subtasks:
#5
#6

More flexible handling of MCNP materials in mats2ALARA

The current version of mats2ALARA assumes that all materials are defined in MCNP with a single isotope per line. MCNP allows more flexibility, and we should probably support this. The addition of an MCNP material reader to PyNE may supersede this issue.

Fix O(n^2) behavior of alias table generation in source.F90

The current implementation of alias table creation in source_gamma_refactor.F90 exhibits O(n^2) behavior. In theory, this should only be O(n). Testing indicates the culprit is the subroutine sort_for_alias_table, and it uses >90% of the CPU time. This subroutine gets called n times, where n is the number of discrete bins from which we are creating the alias table.

The subroutine is reproduced here; Time usage by the two parts of this method are noted with comments:

subroutine sort_for_alias_table(bins, length)
! subroutine locates where to move the last bin in bins to,
! such that bins is presumably completely sorted again.
  use mcnp_global

        integer,intent(IN) :: length
        real(dknd),intent(INOUT),dimension(1:length,1:2) :: bins

        ! Method's variables
        integer :: cnt, i
        real(dknd),dimension(1,1:2) :: temp

        if (bins(length,1).lt.bins(length-1,1)) then


!!!! This do loop is ~ 1/3 of the CPU time used 
          ! The logic in this do loop may be problematic at 
          !  cnt = length or cnt = 1...
          do cnt=length,1,-1
            if (bins(length,1).ge.bins(cnt-1,1)) exit
          enddo
          ! found bin


!!!! The array shuffling is ~2/3 of the CPU time used
          temp(1,1:2) = bins(length,1:2)
          bins(cnt+1:length,1:2) = bins(cnt:length-1,1:2)
          bins(cnt,1:2) = temp(1,1:2)

        else
                continue
        endif

end subroutine sort_for_alias_table

I suspect this may be due to poor use of arrays, as suggested by http://jblevins.org/log/efficient-code - in particular, how Fortran stores 2D arrays. I'll experiment with modifying the arrays within the alias table creation code at some point... Converting the entire source.F90 would be a pain, though.
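For reference when reworking the Fortran, here is a hedged Python sketch of an O(n) alias-table build (the Walker/Vose method, named plainly as a replacement technique, not the code's current approach); it avoids the per-bin insertion sort entirely by partitioning bins into 'small' and 'large' worklists.

```python
# Hedged sketch: O(n) alias-table construction (Walker/Vose method).

def build_alias_table(weights):
    """Return (prob, alias) lists for O(1) sampling of `weights`."""
    n = len(weights)
    total = float(sum(weights))
    scaled = [w * n / total for w in weights]
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] -= 1.0 - scaled[s]  # donate excess to fill bin s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:  # leftovers equal 1 up to rounding
        prob[i] = 1.0
    return prob, alias
```

Sampling is then: pick a bin uniformly, and with probability prob[i] keep it, otherwise take alias[i]. Reconstructing the effective probabilities from (prob, alias) and comparing against weights/total is a useful correctness check.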

Improvement to post processing

We really need a facility to collapse the inventory data at the end of the calculation into a single file: for example, take the voxel-wise data and map it back into a file listing the inventory by region.

Uniform sampling - use binary search of mesh boundaries

Currently, uniform sampling does a linear search in each dimension (xyz) to locate which mesh boundaries a sampled point is within. This can be sped up using a binary search implemented in source.F90.

Just to note an observation, the linear search method, assuming the number of indices in each dimension is roughly equal, is O(3·n^(1/3)), where n is the number of voxels. The binary search method will be O(3·log(n^(1/3))). (The 3· is typically ignored of course...) This improvement will be much smaller than the O(n) to O(1) result of implementing alias sampling.
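The lookup can be sketched with Python's bisect module, which mirrors what a Fortran binary search in source.F90 would do (the function name is illustrative):

```python
# Locate which voxel interval contains x, given sorted mesh boundaries
# in one dimension, via binary search.
import bisect

def find_voxel_index(boundaries, x):
    """Return i such that boundaries[i] <= x < boundaries[i+1]."""
    i = bisect.bisect_right(boundaries, x) - 1
    if i < 0 or i >= len(boundaries) - 1:
        raise ValueError("point outside mesh")
    return i
```

One search per dimension gives the (i, j, k) voxel indices for a sampled point.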

Repository cleanup

The repository needs to be cleaned up so that it is organized in a logical way for outside users and uncluttered by a history of experimental files. There are 2 possible approaches:

  • start with the structure we have and move/rename/delete things
  • design the repository structure from scratch and place the important files into that structure

Create a script to interpolate between values on MOAB mesh

The current R2S suite does not include any scripts for easily getting values off the mesh. Mesh points may not conform well to the problem geometry in all cases, so at times users may want a value from the mesh at some point between multiple mesh points. A script should be written to perform 3D linear/logarithmic interpolation on MOAB meshes and print out the interpolated value and propagated uncertainty.

As a first draft I will commit a script that prints out values/errors for a specified voxel, which probably covers a lot of use cases.
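As a starting point, a hedged sketch of the 3D linear (trilinear) case between the eight surrounding mesh points; the logarithmic variant and the uncertainty propagation are left out, and the corner-value layout is an assumption.

```python
# Hedged sketch: trilinear interpolation of corner values c[i][j][k]
# (i, j, k in {0, 1}) at fractional offsets fx, fy, fz in [0, 1].

def trilinear(c, fx, fy, fz):
    c00 = c[0][0][0] * (1 - fx) + c[1][0][0] * fx
    c01 = c[0][0][1] * (1 - fx) + c[1][0][1] * fx
    c10 = c[0][1][0] * (1 - fx) + c[1][1][0] * fx
    c11 = c[0][1][1] * (1 - fx) + c[1][1][1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

The logarithmic variant would apply the same weights to log-transformed values; the uncertainty propagation still needs a design decision (e.g. treating the corner errors as independent).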

Develop alternatives for cases with low efficiency resampling

Void rejection can currently be done by resampling within the selected voxel until a non-void point is chosen. This is prone to low efficiency for voxels containing only a sliver of non-void.

(See previous notes in #13, #19, #20)

Relevant ideas/thoughts from @gonuke:

What could be most valuable, is to know which DAGMCNP cells are in each voxel. For the source sampling phase, this may be more important than the material volume fractions since it can limit the number of very expensive point-in-volume tests you need to do.

Another little implementation detail is that it might be best to check the implicit complement last, as it is often the most complicated, and often void. It might be possible to check all the non-implicit-complement cells first and then deem the point in the implicit complement if not in the others - and never actually test the implicit complement....

and

For example, if we measure the rejection efficiency to be too low, instead of sampling uniformly in 3D, we could sample uniformly in 2D on one voxel face, and fire a ray to measure the non-void length(s) in the voxel, and sample within the non-void length of that ray. (hmmm - that's not quite a fair game.... but something like that.)

Test mcnp_n2p in dagmc mode

It seems as though mcnp_n2p is not working correctly for input files for DAGMC. I notice that you only test mcnp_n2p for a full native MCNP file. There should be a test for a DAG-MCNP file.

Material "defixer" program

Inputs:

  • dagmc_preproc'd geometry h5m file with volumes in material groups
  • Engineering materials library (in modified PyNE Material.text format?)

Outputs:

  • Material definitions in any or all of the following formats: MCNP, ALARA, HDF5

Synopsis
The program will read all of the groups off an h5m geometry file. The user will then assign material definitions to each group. The actual definitions will be found in a separate material library in some canonical format. When assigning these materials the user will be able to:

  • Mix materials together (by mass or volume) to create new definitions
  • Specify the impurity tolerance
  • Override the default density (which is found in the external material library)
  • Specify cross sections. The program will warn the user if cross sections are not found for a particular isotope. When compositions contain elemental definitions the program will use some criteria to determine whether isotopic or elemental cross sections should be used.

The user can then output definitions in some format that can be read-in by other codes.

At first, this program will get user input from an input file. Eventually this program may also have a GUI. The GUI should write "journal files" similar to Cubit's, which contain valid input text.

Most of the material handling will be done using PyNE functions.

Adding metadata to 'gammas' (and other files)

I have added support* for comment lines (using #) at the top of the 'gammas' file, but have not implemented automatically creating this comment yet. The basis for doing this is to supply a bit of extra information for when someone looks at the gammas file at some later date (Andy's idea, back in September, I think). The contents I have in mind for this comment/header line are:

  • timestamp
  • cooling time
  • isotope
  • problem title
  • ?? other ideas?

The problem title would be analogous to MCNP's title card, and would be specified as a new parameter in the r2s.cfg file.

Is the problem title unnecessary? Do we want this concept added to other files in the workflow? (e.g. alara_input, phtn_src)

*support, as in, source.F90 simply skips over these lines

Export ADVANTG source biasing information to MOAB mesh

Initially we need to understand how this information exists within ADVANTG.

In theory, we just need the bias value for each voxel, stored in the 'PHTN_BIAS' tag, and our modified MCNP will pull the necessary bias values for photon transport.

Rerun R2S step 1 without mmgridgen

A common use case is to rerun the R2S methodology starting with neutronics but without changing the geometry/materials. Therefore it is useful to be able to repeat the workflow without doing the ray firing for the mmGridGen step.

Void rejection with uniform sampling in source.F90

I previously thought that void fraction information for each voxel would be needed by MCNP in order to do void rejection with uniform sampling for photon transport... On pondering this, I believe (and think I have shown) that void rejection is easier to implement.

If MCNP finds that a particle starts in a void, we can resample within the same voxel (rather than resample entire problem) until a valid position is found.

I'm almost certain this is correct, and easily compatible with the current implementation (i.e. the 'gammas' file format for uniform sampling), but need to ponder/discuss, implement and test this. This is basically the same solution that I applied to void rejection when voxel sampling a few weeks ago.
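The resampling loop can be sketched as follows (a hedged illustration; `sample_in_voxel` and `is_void` are hypothetical stand-ins for the in-voxel position sampler and the expensive point-in-volume test done by the transport code):

```python
# Hedged sketch: resample within one voxel until a non-void point is
# found. Rejecting and resampling the same voxel keeps the spatial PDF
# uniform over the voxel's non-void region.
import random

def sample_nonvoid(sample_in_voxel, is_void, max_tries=10000):
    for _ in range(max_tries):
        p = sample_in_voxel()
        if not is_void(p):
            return p
    raise RuntimeError("voxel appears (almost) entirely void")
```

The max_tries cap is a guard against the pathological mostly-void voxels discussed elsewhere in these issues; it is an assumption, not part of the proposed scheme.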

Update r2s_setup/r2s.cfg/r2s_step2.py for unstructured workflow

The structured/unstructured workflows will probably coexist as options

  • add option to r2s_setup.py/r2s.cfg
  • identify any needed changes to r2s_step1.py
  • r2s_step2.py skips write_gammas.py as needed (still calls read_alara_phtn.py)
    • alternate passing of parameters used for photon transport may be done within r2s_step2.py

Improvement to Error message when meshtal file has no or 1 energy group

When we pass a meshtally file to r2s-act that has only one energy group (or none), we get the error 'could not find header in the first 100 lines'. I suggest we do some kind of check when reading, to see if we found the energy bin line, and from this deduce the number of groups. If this is less than 175 (or another predetermined limit), we should provide a more useful error message.

Verify and correct uniform raytracing implementation

The uniform sampling approach described in Damien's Master's thesis is incorrect.

Consider sampling a square that is 4x4 (x:[0,4]; y:[0,4]), with 3 points in each direction.
With the incorrect algorithm, you get points at 1, 2, 3 in each dimension.
Each point 'corresponds' to a 1x1 square about it.
The result is that the squares only cover the extents 0.5 to 3.5.

Essentially, the edges of voxels are not being sampled.

The current implementation, via the _linspace() method in mmgrid.py, also uses this erroneous algorithm. (Though by default, mmgrid.py uses random point sampling.)

Todo:

  • Create a unit test to verify whether we get the points desired (verify bug exists)
  • Fix code
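The two placements can be contrasted in a short sketch (a hedged illustration of the bug and one plausible fix, not the actual _linspace() code): the erroneous version spaces n interior points with gaps of (hi-lo)/(n+1), while a cell-centered version puts each point at the center of one of n equal cells so the cells tile the whole extent.

```python
# Hedged sketch: incorrect vs. cell-centered point placement for n
# uniform sample points on [lo, hi].

def points_incorrect(lo, hi, n):
    """Interior-only spacing: n points leave the edges uncovered."""
    step = (hi - lo) / float(n + 1)
    return [lo + step * (i + 1) for i in range(n)]

def points_cell_centered(lo, hi, n):
    """Center each point in one of n equal cells covering [lo, hi]."""
    width = (hi - lo) / float(n)
    return [lo + width * (i + 0.5) for i in range(n)]
```

For the 4x4 example above with n=3, the incorrect version gives 1, 2, 3 (covering only 0.5 to 3.5), while the cell-centered version gives 2/3, 2, 10/3 with cells of width 4/3 covering the full 0 to 4 extent.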

Implement photon sampling void rejection

Currently, the custom source.F90 sampling routine can reject particles sampled in void (use case: partially void voxels; used via the 'm' option in the gammas file's parameter line), though this is untested. In order to support this, the normalization done when creating the gammas file needs to be updated. In particular, the correct normalization requires knowing the fraction of void volume in each voxel. Thus:

  • void volume needs to be passed through the workflow.
    (since multiple gammas are often created, it's presumed more efficient to read volumes from MOAB than to recompute them when needed)
  • calculation of this alternate normalization needs to be added to the write_gammas.py script.

This may be rendered unnecessary by #3. Though, perhaps we also aim to support structured mesh workflows alongside unstructured mesh workflows.
