
serpent-tools's Introduction

serpent-tools



Nuclear Science and Engineering article: DOI 10.1080/00295639.2020.1723992

A suite of parsers designed to make interacting with SERPENT [1] output files simple and flawless.

The SERPENT Monte Carlo code is developed by VTT Technical Research Centre of Finland, Ltd. More information, including distribution and licensing of SERPENT can be found at http://montecarlo.vtt.fi

Installation

serpentTools can be installed with pip using:

$ pip install serpentTools

For more detailed instructions, including operating-system-specific guidance and building from source, see the Installation Guide.

Issues

If you have issues installing the project, find a bug, or want to request a feature, the GitHub issue page is the best place for that.

Support

The development of serpentTools is supported by the following organizations:

References

The Annals of Nuclear Energy article should be cited for all work using SERPENT. If you are using this project, please consider citing the Nuclear Science and Engineering article above.

Also, let us know if you publish work using this package! We try to keep an up-to-date list of works using serpentTools [2], and would be happy to include more.


  1. Leppänen, J. et al. (2015) "The Serpent Monte Carlo code: Status, development and applications in 2013." Ann. Nucl. Energy, 82 (2015) 142-150.

  2. https://serpent-tools.readthedocs.io/en/latest/publications.html

serpent-tools's People

Contributors

dankotlyar, drewejohnson, drewj-usnctech, gridley, nicoloabrate, paulromano, rzehumat, sallustius, travleev


serpent-tools's Issues

[ENH] Burned material reader

Implement a reader that can be imported from serpentTools.parsers that is capable of reading the bumat files.

Format

SERPENT Wiki

Requirements

  1. Create and store multiple SERPENT material objects and related material data, i.e. temperature, density, thermal scattering library, etc.
  2. Write out the material(s) with some templating engine as tables or SERPENT materials*
  • This feature could probably be a method on a Material supporting class. This would allow injection of materials from one file into another file down the road

Prerequisites

#12

Update settings from config file

Currently, the settings loader has to take settings one at a time. For the sake of consistency and ease, it would be nice to have the rc settings loader take a file path argument and load all settings from there.

I think a yaml file would be pretty straightforward, since we already use the pyyaml module. The loader could be set up to read in a strict, full-name manner,

depletion.metadataKeys: ['ZAI', 'NAMES']
depletion.materials: ['fuel*', 'bp1']
verbosity: 'error'
...

or with nested levels

depletion:
    metadataKeys: ['ZAI', 'NAMES']
    materials: ['fuel*', 'bp1']
verbosity: 'error'
...

The two formats should produce the same result; the nested case simply requires some reconstruction behind the scenes, as sketched below.
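As an illustration of that reconstruction step, here is a minimal sketch, assuming pyyaml; flattenSettings is a hypothetical helper, and the import path for rc is an assumption rather than the project's confirmed layout:

import yaml
from serpentTools.settings import rc   # assumption: the rc object lives here

def flattenSettings(tree, prefix=''):
    """Recursively convert nested settings into dotted full names."""
    flat = {}
    for key, value in tree.items():
        name = '{}.{}'.format(prefix, key) if prefix else key
        if isinstance(value, dict):
            flat.update(flattenSettings(value, name))
        else:
            flat[name] = value
    return flat

with open('settings.yaml') as stream:
    nested = yaml.safe_load(stream)

for name, value in flattenSettings(nested).items():
    # e.g. 'depletion.metadataKeys' -> ['ZAI', 'NAMES']
    rc[name] = value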

Issue template

Add a file that details the best way to open issues and what information we would need in order to properly resolve them.

[Doc] Setup documentation at readthedocs

readthedocs is a great platform for hosting all our documentation. We can configure it to automatically build/update documentation at releases, and it supports downloading pdfs, meaning we wouldn't have to host a large pdf on our repo.

There are some potential issues/hurdles to overcome.

  • setting up webhooks for automatic build - should be automatic if @CORE-GATECH [the owner] imports the repo
  • use of non-standard libraries, e.g. numpy, could be problematic. Maybe mock those modules with MagicMock to circumvent this (a sketch follows this list)
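One possible workaround is to mock the heavy imports in docs/conf.py with unittest.mock before autodoc runs; a sketch only, and the module list here is an assumption, not the project's actual configuration:

# docs/conf.py
import sys
from unittest.mock import MagicMock

# pretend these packages exist so autodoc can import serpentTools on readthedocs
MOCK_MODULES = ['numpy', 'matplotlib', 'matplotlib.pyplot']
sys.modules.update((name, MagicMock()) for name in MOCK_MODULES)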


depmtx fails to read files with > 99 isotopes

For problems with 100 or more isotopes, the method for extracting the number densities by splitting the line at spaces fails

N0( 98, 1) =  4.887808621283664728856676388898E-15; % 300711
N0( 99, 1) =  1.741835954801968301462649249680E-12; % 300720
N0(100, 1) =  9.816655193917930272263199204822E-16; % 300730
....
N0(998, 1) =  2.934345789731494740253197564973E-08; % 61149.06c
N0(999, 1) =  3.177579481199193601768265931430E-11; % 611500
N0(1000, 1) =  8.673534004200875581773029997755E-09; % 61151.06c
N0(1001, 1) =  1.618244476764465444652257029493E-11; % 611520
N0(1002, 1) =  9.761232156374946714086809314449E-14; % 611521

The atom density is currently extracted by splitting the line at whitespace and taking a fixed position in the resulting list. Once the isotope index reaches three digits, the space inside N0( disappears, every field shifts by one, and that position holds % instead of the density.

The depmtx reader should be made more robust, for example by matching each line with a regular expression instead of relying on field positions (see the sketch below).
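A minimal sketch of such a regular-expression approach, based only on the line format shown above; the pattern and function name are illustrative, not the project's actual fix:

import re

# matches lines like "N0( 98, 1) = 4.88...E-15; % 300711" regardless of index width
NDENS_REGEX = re.compile(
    r'N\d\(\s*(\d+),\s*(\d+)\)\s*=\s*([0-9.Ee+-]+);\s*%\s*(\S+)')

def parseDensityLine(line):
    """Return (row, column, density, identifier) from a single depmtx line."""
    match = NDENS_REGEX.match(line)
    if match is None:
        raise ValueError('Unexpected depmtx line: {}'.format(line))
    row, col, dens, zai = match.groups()
    return int(row), int(col), float(dens), zai

print(parseDensityLine('N0(1002, 1) =  9.761232156374946714086809314449E-14; % 611521'))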

Feature: History file reader

Implement a reader that can be imported from serpentTools.parsers that is capable of reading the history file.

Format

SERPENT Wiki

Requirements

  1. Store data for some or all variables present in the file, depending on user settings
  2. Plotting of quantities over time, such as Shannon entropy, cycle-wise criticality eigenvalues

Incorporate drewtils parsing tools natively

Some of our readers use parsing engines from the MIT-licensed drewtils package I wrote earlier. These files are not large, and we can include them natively to limit the number of external packages required for installation.

[Feature] Ability to read multiple files and obtain true uncertainties

Create a new set of classes that are responsible for processing multiple output files of the same type and storing the true uncertainties.

Requirements

  • Each class should be given a list of files at construction
  • Support globs, '*'
  • Store metadata when applicable - things that are invariant through repeated runs
  • Store files used
  • Store average value and uncertainty for any quantity of interest, e.g. material number densities, cross sections for branching calculations, detector tallies, etc.

Optional

  • Store seeds used - optional argument at construction seed=True

Things to consider

  • Would it be advantageous to also retain all the data used, in a manner that allows people to identify runs with particularly large uncertainties?
  • Some plot commands will have to be modified, e.g. depletion reader plot -> error bars
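For illustration, a minimal sketch of the averaging described above, with invented k-eff values standing in for quantities read from repeated runs:

import numpy as np

# k-eff reported by four repeated runs (values invented for illustration)
keff = np.array([1.00213, 1.00187, 1.00240, 1.00195])

mean = keff.mean()
# standard error of the mean across independent runs, i.e. the "true" uncertainty
uncertainty = keff.std(ddof=1) / np.sqrt(keff.size)
print('{:.5f} +/- {:.5f}'.format(mean, uncertainty))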

Update reader read methods to include pre/post read checks

Implement reader-specific methods for inspections before and after reading. Pre-checks could include ensuring the file exists, simple validation that the file is of the correct type (e.g. not reading a detector file with a depletion reader), or counting the number of time steps present in the results file #7. Post-read checks could include verifying that data was indeed stored. If nothing is stored, exceptions should be raised, perhaps a new one like EmptyReader that inherits from SerpentToolsException.

In order to accomplish this, I propose refactoring the public read methods for all readers to private _read methods. The base reader will contain the public read method and call precheck and postcheck methods specific to each file type. These do not have to be overridden by every reader.

Implementation Suggestion

class BaseReader(object):

    def read(self):
        self._precheck()
        # maybe some debug/info statements
        self._read()
        self._postcheck()

    def _precheck(self):
        pass

    def _postcheck(self):
        pass

    def _read(self):
        raise NotImplementedError


class SubClass(BaseReader):

    def _read(self):
        pass  # do the things

    def _postcheck(self):
        assert thingsWereDone

Pre/post check functions should log errors/warnings accordingly, such as an error if the word DET does not appear in the first few lines of a supposed detector file. When applicable, exceptions should be raised if things go really poorly.

[ENH] Material object

Supporting object for storing SERPENT materials

Requirements

  1. Be created with a block of data straight from a SERPENT input or burned material file
  2. Store nuclides and their densities on the object
  3. Remove and/or update nuclides
  4. Update material values like temperature, volume, or thermal scattering data

Optional

  1. Output the data with some template engine like Jinja2 or Mako
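For illustration, a minimal sketch of such a supporting object; the class and method names are hypothetical:

class Material(object):
    """Store nuclides and densities for a single SERPENT material."""

    def __init__(self, name, temperature=None, volume=None):
        self.name = name
        self.temperature = temperature
        self.volume = volume
        self.nuclides = {}    # e.g. {'92235.09c': 8.0E-4}

    def addNuclide(self, zaid, density):
        self.nuclides[zaid] = density

    def removeNuclide(self, zaid):
        self.nuclides.pop(zaid, None)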

Holding up

#10

Support: Homogenized Universe Object

Implement a supporting object that will be used by the branching reader and results reader that stores data for a single homogenized universe at a single instance in time.

Requirements

  1. Should store the universe number, and, optionally, day, burnup, and burnup step for this instance. Optional because for a problem with no burnup, those values will not be present in the file.
  2. Simple methods to add and get variable data, that will be sent from the readers.

Optional

  1. Output the data with some template engine like Jinja2 or Mako

Holding up

#6
#7

Feature: Implement a messaging and exception framework

Implement an overarching data logger that controls warnings, errors, and debug statements and allows the user to set the verbosity through the rc system. Maybe piggyback off of the logging module.

Create a SerpentToolsError that is the base type for all critical errors thrown during operation.

Usage

rc['verbosity'] = 'debug'
# print all status updates, errors, and debug statements
rc['definitely not a setting'] = 'still not good'
# raises SerpentToolsError or some subclass thereof
rc['verbosity'] = 'quiet'
# print only critical errors, same as `critical`
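A minimal sketch of how the verbosity setting could piggyback on the logging module; the logger name and the exact mapping are assumptions, not a committed design:

import logging

# hypothetical mapping from rc verbosity strings to logging levels
VERBOSITY_LEVELS = {
    'debug': logging.DEBUG,
    'info': logging.INFO,
    'warning': logging.WARNING,
    'error': logging.ERROR,
    'critical': logging.CRITICAL,
    'quiet': logging.CRITICAL,
}

def setVerbosity(level):
    if level not in VERBOSITY_LEVELS:
        raise KeyError('{} is not a valid verbosity setting'.format(level))
    logging.getLogger('serpentTools').setLevel(VERBOSITY_LEVELS[level])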

[support] How to remove INFO messages?

Summary of issue

INFO messages are overwhelming the output. Is there a way to suppress them? Thanks.

Code for reproducing the issue

import serpentTools
bF = 'feedbacks.coe'
bO = serpentTools.read(bF)
INFO : serpentTools: Inferred reader for feedbacks.coe: BranchingReader
INFO : serpentTools: Preparing to read feedbacks.coe
INFO : serpentTools: Done reading branching file

Expected outcome

import serpentTools
bF = 'feedbacks.coe'
bO = serpentTools.read(bF)
(no output)

Versions

  • Version from serpentTools.__version__
    '0.2.2'

  • Python version - python --version
    Python 3.6.4 (default, Mar 13 2018, 18:16:01)
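A possible workaround, based on the rc verbosity setting used elsewhere in these issues; the import path is an assumption:

import serpentTools
from serpentTools.settings import rc   # assumption: rc is exposed here

rc['verbosity'] = 'error'              # suppress INFO (and WARNING) messages
bO = serpentTools.read('feedbacks.coe')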

Explicit attributes for supporting objects

Currently, supporting objects look for their attributes first in their own metadata, then in the metadata of the parent. This poses some introspection ugliness, as it is difficult to determine what attributes are on the object.

Each supporting object should have its attributes explicitly stated and stored. This should improve performance and usability.

Maybe use __slots__ as well?
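For illustration, a minimal sketch of explicitly declared attributes with __slots__; the attribute list is hypothetical apart from bu and day, which appear elsewhere in these issues:

class HomogUniv(object):
    # attributes declared up front instead of falling back to parent metadata
    __slots__ = ('name', 'bu', 'day', 'step')

    def __init__(self, name, bu, day, step):
        self.name = name
        self.bu = bu
        self.day = day
        self.step = step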

[Doc] Better Detector example

The current detector file is pretty small and doesn't show a lot of the power, and the plots are bad. Let's add a larger detector, with a lot of dimensions, and show off the slicing power!

Requests

  • Finer spatial mesh
  • Fine to super fine [>100] energy groups
  • Multiple reactions tallied

Document released readers

Provide documentation through Sphinx describing the various readers, their limitations and features, as well as some helpful examples.

[ENH] Implement improved universe iteration for branch containers

Implement a few iteration methods for the BranchContainer object that support:

  1. Yielding just the keys of the stored universes
  2. Yielding just the stored universes
  3. Yielding matching key: universe pairs

Possible options for sorting based on universe ID and burnup?

Example

Say we have some BranchContainer with the following universes:

{(0, 0, 0): <HomogUniv 0>, (0, 0.1, 1): <HomogUniv 1>, 
(1, 0, 0): <HomogUniv 2>, (1, 0.1, 1): <HomogUniv 3>}

Option 1 would yield

(0, 0, 0)
(1, 0, 0)
(1, 0.1, 1)
(0, 0.1, 1)

Option 2 would yield

<HomogUniv 0>
<HomogUniv 2>
<HomogUniv 3>
<HomogUniv 1>

Option 3 would yield

(0, 0, 0), <HomogUniv 0>
(0, 0.1, 1), <HomogUniv 1>
(1, 0.1, 1), <HomogUniv 3>
(1, 0, 0), <HomogUniv 2>

Keys intentionally left out of order because dictionary keys are not sorted by default
If the methods had a sorted option, then the objects should be yielded in their sorted order.

But why

This could help with writing cross sections for nodal diffusion outputs by allowing faster and easier access to homogenized universes and thus group constant data

Possible implementation

We could take advantage of the six module's various dictionary iteration methods.
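A minimal sketch of the three options, with hypothetical method names on a simplified container:

class BranchContainer(object):

    def __init__(self):
        self.universes = {}    # {(univID, burnup, step): HomogUniv}

    def iterKeys(self):
        # Option 1: yield just the (univID, burnup, step) keys
        return iter(self.universes.keys())

    def iterUniverses(self):
        # Option 2: yield just the stored universes
        return iter(self.universes.values())

    def iterItems(self, sort=False):
        # Option 3: yield key, universe pairs, optionally sorted by key
        items = self.universes.items()
        return iter(sorted(items) if sort else items)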

[ENH] Results file reader

Implement a reader that can be imported from serpentTools.parsers that reads data from the results file.

Format

SERPENT Wiki

Data is presented in three tiers:

  1. File wide parameters such as version, run time, and parallelization metrics
  2. For each burnup point:
    1. Cycle data, like criticality eigenvalues
    2. For each universe listed with set gcu:
      1. Micro and macro energy group boundaries [intermediate group structure and broad group]
      2. Micro and macro group fluxes
      3. Homogenized parameters, like infinite medium and critical leakage cross sections [if requested], and kinetic parameters

Requirements

  1. Store requested cross sections and other parameters for each universe at each burnup state
  2. Easy extraction of these data

Optional

  1. Rudimentary plotting of quantities over time, like criticality eigenvalues
  2. Output the data with some template engine like Jinja2 or Mako

Prerequisites

Clean up the code

There are some debug blocks, if __name__ == '__main__', in some of the scripts. They should be removed. Test scripts, e.g. unit tests, still need those blocks to run the tests.

  • detector.py
  • TODOs in the root __init__.py

[ENH] Sensitivity file reader

Implement a reader that can be imported from serpentTools.parsers that can read the new sensitivity files [Serpent 2.1.29]

Install fails without setuptools

Running python setup.py install fails if setuptools is not installed.

Traceback (most recent call last):
  File "setup.py", line 3, in <module>
    from setuptools import setup
ImportError: No module named setuptools

This can be fixed by wrapping the import in a try/except block and falling back to distutils, as sketched below. The compatibility is not totally one-to-one, so the setup.py script will have to be modified.
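A sketch of that fallback; note that distutils does not understand every setuptools keyword, so the rest of setup.py may still need adjusting:

# setup.py
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup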

Version: develop branch
Python version - 2.7.12

Feature: Detector file reader

Implement a reader that can be imported from serpentTools.parsers that is capable of reading detector files.

Requirements

  1. Create and store unique detectors that represent the quantities present in the output file

Optional

  1. Create simple subclasses for single purpose detectors, such as mesh detectors that tally single quantities, or energy detectors that tally a single quantity over multiple energy bins

[Discussion] Better unit test framework

Our current unit test system is based on reading in files and then comparing attributes and containers to hard-coded values. While it has been the easiest framework to implement and get running, I have a few issues with it.

  1. Each test file is found by searching from the ROOT attribute found using __file__ - example: main __init__.py.
  2. We still get a lot of WARNING/ERROR statements in our testing, even when those messages are expected - example: build 163

Use of __file__

When __file__ is used, python will print out warnings stating something along the lines of

module references __file__

I don't really like this, and I think this may be cause for concern at other institutions moving forward. I've also found some evidence that using __file__ may cause issues when building python distributions like wheels and eggs.

Proposal

Convert test files into importable objects, maybe in a serpentTools.data directory, that replace the various text files that are stored with the project. This way, people can try to recreate our tests or examples wherever they are with something like from serpentTools.data import bwr_det0, and the full test string (or a readable object) will be returned.

When we read from a file, the readers all call something like

with open(self.filePath, 'r') as out:
    <do stuff>

If the object that is imported from serpentTools.data.bwr_det0 has the necessary __enter__, __exit__, and read methods, then we should be able to read directly from these pseudo-files, something like StringIO.
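A minimal sketch of such an importable pseudo-file using io.StringIO; the module path and contents are illustrative only:

# serpentTools/data/__init__.py (illustrative)
from io import StringIO

_BWR_DET0 = u"DET1 = [ ... ];\n"   # full detector file contents would live here

def bwr_det0():
    """Return a file-like object supporting read() and the with statement."""
    return StringIO(_BWR_DET0)

# a reader could then use it exactly like an open file
with bwr_det0() as out:
    contents = out.read()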

Excessive messages

When someone runs python setup.py test on our project, they will still see many error and warning messages as the code does what it does. These may give the impression that our tests and package are riddled with errors, even though the tests pass.

Proposal

We have the capability, through the settings.rc object, to control the logger. If we simply set this logger level to CRITICAL or higher, then no messages should be raised. There may be even better ways to handle this, but I am not entirely sure at this point.

I welcome any thoughts on this matter; please post them here on this issue.

[Feature] Add a command line interface for launching tasks

In talking with @sallustius, we thought a good extension of this package would be to add a command line interface (CLI). This would be launched with python -m serpentTools .... The interface would support subcommands that launch certain operations, like the one mentioned below.

Example

Our first example would be to reproduce a file multiple times over with unique seed values each time. Options would include adding the seed directly (to be fed into random.seed), using the more random secrets module, and an output directory to write the new files to [default: current directory]. The required arguments would be the number of files to create and the original file.

Seeds could be generated from the random.getstate function, or by randomly choosing 10 digits and creating an integer that way. The new files would be created and added to the output directory, identified by their order of creation; a sketch of the interface follows the requirements below.

> python -m serpentTools seed --seed=123456 5 demo.inp --output=here
> ls here/
demo_0.inp
demo_1.inp
demo_2.inp
demo_3.inp
demo_4.inp

Requirements

  • Verbosity control for each subcommand
  • Load configuration file for each subcommand
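A minimal sketch of such an interface using argparse; subcommand and option names follow the example above but are not a committed design:

import argparse
import random

def buildParser():
    parser = argparse.ArgumentParser(prog='serpentTools')
    parser.add_argument('-v', '--verbose', action='count', default=0,
                        help='increase output verbosity')
    parser.add_argument('--config', help='configuration file to load')
    subparsers = parser.add_subparsers(dest='command')

    seed = subparsers.add_parser('seed',
                                 help='copy an input file with new random seeds')
    seed.add_argument('--seed', type=int, default=None,
                      help='seed for the random number generator')
    seed.add_argument('--output', default='.',
                      help='directory for the new files')
    seed.add_argument('copies', type=int, help='number of files to create')
    seed.add_argument('file', help='original input file')
    return parser

args = buildParser().parse_args(['seed', '--seed=123456', '5', 'demo.inp'])
random.seed(args.seed)
# ten-digit seeds, one per copy, written into demo_0.inp .. demo_4.inp
newSeeds = [random.randrange(10**9, 10**10) for _ in range(args.copies)]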

Universes from branching file can be stored with negative burnup

Summary of issue

The burnup values in coefficient files can be negative, indicating units of days, not MWd/kgU (see the SERPENT coef card). This leads to HomogUniv objects being created with negative values of burnup, which is slightly nonsensical.

Code for reproducing the issue

Read in a coefficient file that has negative values of burnup.

Actual outcome including console output and error traceback if applicable

For some branchContainer b in the branching reader:

>>> b.universes.keys()
dict_keys([(3102, 0.0, 1), (3102, -200.0, 6), (3102, -328.5, 9), (3102, -246.375, 7), (3102, -280.0, 8), (3102, -164.25, 5), (3102, -82.125, 3), (3102, -120.0, 4), (3102, -40.0, 2)])
>>> univ = b.universes[(3102, -328.5, 9)]
>>> univ.bu
-328.5
>>> univ.day
0

Expected outcome

Universes with non-negative burnup.
Maybe implement a better method for retrieving universes given values of days or burnup.

Versions

  • serpentTools.__version__ 0.2.1+12.g68709f6

[ENH] Cross section plotting for homogenized universes

We should implement some cross section plotting for homogenized universes. Currently, we only generate these universes from the branching reader, which does not have knowledge of the group structure. The results reader will, so we will have to either accept an incoming group structure for the plot, or plot against group index if the group structure is missing.

I think we should also include the automated label formatting that the depleted material supports. This would be great for comparing plots against universes and/or against burnup, i.e. same cross section at BOL and 5 MWD/kgU burnup. Being able to use the universe name, burnup, day, and burnup step, as well as the cross section, in this formatting would be superb.

If we wanted to get real fancy, we could use the matplotlib math rendering to do something like $\sigma_f^\infty$ for infinite medium fission cross sections, for example.
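For instance, a small sketch of such a label with invented two-group values:

import matplotlib.pyplot as plt

groups = [1, 2]
infFiss = [0.0032, 0.0751]    # invented two-group values
plt.plot(groups, infFiss, drawstyle='steps-mid',
         label=r'$\sigma_f^\infty$, 5 MWd/kgU')
plt.xlabel('Energy group')
plt.legend()
plt.show()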

Update branching file example to show off more features

The example that I'm working on for the branching reader doesn't really show off a ton of the functionality. It only has one homogenized universe and a few cross sections. I'm waiting for a more intricate run to complete with a few universes and more varied group constant data. Once that completes, the example notebook and section in the documentation will need to be updated.

Depletion reader failing at single line TOTAL

Example error, using version 1.0b0+25.g7e95c30 off the detector branch

In [29]: rc['depletion.processTotal'] = True

In [30]: dep = DepletionReader('pCEs1_dep.m')

In [31]: dep.read()
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-31-3870047e2ceb> in <module>()
----> 1 dep.read()

c:\users\ajohnson400\appdata\local\continuum\anaconda3\lib\site-packages\serpenttools-1.0b0+25.g7e95c30-py3.5.egg\serpentTools\parsers\depletion.py in read(self)
     68                 elif (('TOT' in chunk[0] and self.settings['processTotal'])
     69                       or 'MAT' in chunk[0]):
---> 70                     self._addMaterial(chunk)
     71         messages.debug('Done reading depletion file')
     72         messages.debug('  found {} materials'.format(len(self.materials)))

c:\users\ajohnson400\appdata\local\continuum\anaconda3\lib\site-packages\serpenttools-1.0b0+25.g7e95c30-py3.5.egg\serpentTools\parsers\depletion.py in _addMaterial(self, chunk)
     94     def _addMaterial(self, chunk):
     95         """Add data from a MAT chunk."""
---> 96         name, variable = self._getGroupsFromChunk(self._matchMatNVar, chunk)
     97         if any([re.match(pat, name) for pat in self._matPatterns]):
     98             self._processChunk(chunk, name, variable)

c:\users\ajohnson400\appdata\local\continuum\anaconda3\lib\site-packages\serpenttools-1.0b0+25.g7e95c30-py3.5.egg\serpentTools\parsers\depletion.py in _getGroupsFromChunk(self, regex, chunk)
    108             return match.groups()
    109         raise Exception('{} not determine match from the following chunk:\n'
--> 110                         '{}'.format(self, ''.join(chunk)))
    111
    112     def _processChunk(self, chunk, name, variable):

Exception: <DepletionReader reading pCEs1_dep.m> not determine match from the following chunk:
TOT_VOLUME = [ 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 5.97040E+04 ];

[Discussion] Include support for additional SERPENT versions

TL;DR: We should include support for additional versions of SERPENT. Other tasks related to making that happen are listed here and should be linked here as they come around.

As of 09d06ee, the project explicitly states that it only supports the SERPENT 2.1.29 settings file. 2.1.30 is also out, but some institutions may not use the most recent version of SERPENT, or even SERPENT 2. This issue serves as a collection of all the tasks/issues/pull requests related to extending our functionality towards other versions of SERPENT without losing the functionality of this project.

I will add some tasks here as they are realized, and possibly add those issues to a new projects board.

The main thing to look for is how variables are named across versions. Previous versions of SERPENT may have called a property multiple names. Before B1 homogenization was implemented, infinite medium cross sections were stored as TOT, rather than INF_TOT for example. If we want a script that works for outputs from serpent 1.0.0 to work for other versions as well without the user having to rename stuff, then we should be careful and open about how we treat these variables. I think this would be a great benefit for this project and for the users of this project.

If we come across a version that has a totally different output format, then we will have to implement a new version-specific reader for that file and version. This could be done with more subclassing and abstraction, but could get really tricky. Fingers crossed they don't change outputs too much.

Subtasks

These are the big tasks, from my perspective, that will need to be accomplished in order to support additional versions.

  • Develop a systematic way to compare output files across SERPENT versions and identify what variables have been dropped, included, or modified.
  • Implement a method for renaming variables automatically, e.g. TOT stored under infTot, to match the overall function of the project (a sketch follows this list)
  • Identify and document what functionality, if any, we are not able to support from previous versions.
  • Document what variables are remapped. The rstVariableSets script does a decent job at this by building off the convertVariableNames function
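As a sketch of the renaming idea in the second subtask; the mapping contents and the function name are illustrative and are not the existing convertVariableNames function:

# map version-specific SERPENT names onto the project-wide names
PRE_B1_RENAMES = {
    'TOT': 'infTot',    # infinite-medium total XS before B1 homogenization existed
}

def renameVariable(serpentName, renames=PRE_B1_RENAMES):
    """Return the project-wide name for a version-specific variable."""
    return renames.get(serpentName, serpentName)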

Some thoughts

We could create an additional repository that contains a basic reference model and directories for each serpent version. Each serpent version would be tasked with running the identical input file and producing the same output files*. An automated script could produce summary files of the data contained in each output file that could be compared across versions, e.g. a sorted list of all variables present in the results file. This knowledge can be used to inform the decisions in subtask 2.

Furthermore, this repository could be used to run more in-depth integration tests on a variety of serpent versions. There's some stuff we can do with submodules where we could link the two repositories together, maybe do some weekly/monthly Travis testing to try to catch mistakes and breaking changes. It would be ideal if this level of testing could be integrated directly into this repository, so we can judge pull requests off their ability to not break support for previous versions as well.

DepletedMaterial cannot plot against burnup

Summary of issue

When using the DepletedMaterial.plot method, passing 'burnup' as the x-value raises an error as if invalid time-points were passed into the method.

Code for reproducing the issue

Using the DepletionReader Notebook and changing In [18]: to fuel.plot('burnup', 'ingTox', names=['Xe135'], ylabel='Ingenstion Toxicity');

Error and traceback

KeyError                                  Traceback (most recent call last)
<ipython-input-24-87248272612a> in <module>()
      1 fuel.plot('burnup', 'ingTox', dayPoints, iso, 
----> 2           ylabel='Ingenstion Toxicity');

~/.local/lib/python3.5/site-packages/serpentTools-0.2.1+22.gc316460-py3.5.egg/serpentTools/plot.py in decorated(*args, **kwargs)
     47     @wraps(f)
     48     def decorated(*args, **kwargs):
---> 49         return f(*args, **kwargs)
     50     doc = dedent(f.__doc__)
     51     for magic, replace in PLOT_MAGIC_STRINGS.items():

~/.local/lib/python3.5/site-packages/serpentTools-0.2.1+22.gc316460-py3.5.egg/serpentTools/objects/materials.py in plot(self, xUnits, yUnits, timePoints, names, ax, legend, xlabel, ylabel, **kwargs)
    246         """
    247         xVals = timePoints or self.days
--> 248         yVals = self.getValues(xUnits, yUnits, xVals, names)
    249         ax = ax or pyplot.axes()
    250         labels = names or self.names

~/.local/lib/python3.5/site-packages/serpentTools-0.2.1+22.gc316460-py3.5.egg/serpentTools/objects/materials.py in getValues(self, xUnits, yUnits, timePoints, names)
    139                 'Isotope names not stored on DepletedMaterial '
    140                 '{}.'.format(self.name))
--> 141         colIndices = self._getColIndices(xUnits, timePoints)
    142         rowIndices = self._getRowIndices(names)
    143         return self._slice(self.data[yUnits], rowIndices, colIndices)

~/.local/lib/python3.5/site-packages/serpentTools-0.2.1+22.gc316460-py3.5.egg/serpentTools/objects/materials.py in _getColIndices(self, xUnits, timePoints)
    163         if timePoints is None:
    164             return numpy.arange(len(allX), dtype=int)
--> 165         self._checkTimePoints(allX, timePoints)
    166         colIndices = [indx for indx, xx in enumerate(allX) if xx in timePoints]
    167         return colIndices

~/.local/lib/python3.5/site-packages/serpentTools-0.2.1+22.gc316460-py3.5.egg/serpentTools/objects/materials.py in _checkTimePoints(self, actual, requested)
    156             raise KeyError(
    157                 'The following times were not present for material {}'
--> 158                 '\n{}'.format(self.name, ', '.join(badPoints)))
    159 
    160     def _getColIndices(self, xUnits, timePoints):

KeyError: 'The following times were not present for material fuel0\n5, 10, 30'

Expected outcome

A plot with burnup as the x axis

Versions

  • serpentTools.__version__ 0.2.1+22.gc316460

  • python --version 3.5

  • Jupyter notebook - 5.2.2

Edit

Updated code to replicate since the values dayPoints were specifically defined as points in the days vector and would definitely not be present in the burnup vector

Overly-complicated to access some branches

Summary of issue

With SERPENT, it is possible to create a 1D branching vector by directly specifying the branches, rather than their permutations. Two coefficient matrices from the wiki are

coef 11 0 0.1 1 3 5 10 15 20 30 40 50
3 nom Blo Bhi
3 nom Clo Chi
3 nom Flo Fhi
2 nom CR

and

coef 11 0 0.1 1 3 5 10 15 20 30 40 50
10 BR01 BR02 BR03 BR04 BR05 BR03 BR07 BR08 BR09 BR10

For the latter, the branches dictionary would be a series of one-valued tuples
{('BR01', ): <>, ('BR02',): <>, ... ('BR10', ): <>}.
Accessing the branches would require entering the full tuple, branches[('BR01', )]

Code for reproducing the issue

Read in a branching file with a coefficient matrix structured like the second block above

Actual outcome including console output and error traceback if applicable

Attempting to access branch 'BR01' with branches['BR01'] raises a KeyError

Expected outcome

Return the branch rather than raising an error

Versions

  • Version from serpentTools.__version__ 0.3.0+1.g09d06ee

Days are still returned if time points are given for a vector quantity on materials, e.g. burnup

Fix in #2 did not take into account quantities like burnup and volume that do not return arrays for isotope quantities.
If the user specifies one of these quantities from a depleted material, the time points are still returned.

if allY.shape[0] == 1 or len(allY.shape) == 1:  # vector
    return xVals, allY[colIndices] if colIndices else allY

change to

if allY.shape[0] == 1 or len(allY.shape) == 1:  # vector
    yVals = allY[colIndices] if colIndices else allY
else:
    yVals = numpy.empty((len(rowIndices), len(xVals)), dtype=float)
    for isoID, rowId in enumerate(rowIndices):
        yVals[isoID, :] = (allY[rowId][colIndices] if colIndices
                           else allY[rowId][:])

and fix unit tests

Feature: Python 2 support

Currently the project, and CI testing, works for python 3.5/3.6.
A lot of people still use python 2.7, and supporting it should be pretty easy with the help of the six package

Requirements

  • Add tests to .travis.yml for testing python 2.7 (2.6?)
  • Update the project to work with those versions

[Feature] Chart of nuclides visualization?

Hey folks (probably just Drew),

Do you think adding a visualization tool for DepletedMaterials, with patches in each spot on the chart of the nuclides colored logarithmically or linearly by concentration, would be interesting? May make for some lovely plots comparing spent fuels from various reactors, etc. This shouldn't be incredibly hard to implement if matplotlib patches are used and colored according to a colormap. In particular, getting some time series animations made of the chart of the nuclides would be quite interesting IMO.
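A rough sketch of the patch-and-colormap idea with invented (Z, N, density) data; none of this reflects actual DepletedMaterial attributes:

import matplotlib.pyplot as plt
from matplotlib import cm, colors, patches

# invented (Z, N, atom density) triples standing in for DepletedMaterial data
nuclides = [(54, 81, 1.2E-9), (55, 82, 3.4E-7), (62, 87, 5.6E-6)]

fig, ax = plt.subplots()
norm = colors.LogNorm(vmin=1E-10, vmax=1E-5)
cmap = cm.viridis
for z, n, dens in nuclides:
    ax.add_patch(patches.Rectangle((n, z), 1, 1, color=cmap(norm(dens))))
ax.set_xlim(75, 95)
ax.set_ylim(50, 70)
ax.set_xlabel('Neutron number N')
ax.set_ylabel('Proton number Z')
fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax, label='Atom density')
plt.show()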

`getXY` for `DepletedMaterial` raises unhelpful value error if days missing from object

If a depleted material is asked to get some quantity for days that are not present, the following error is raised:

  File "C:\Users\ajohnson400\AppData\Local\Continuum\Anaconda3\lib\site-packages\serpenttools-0.1.1rc0-py3.5.egg\serpentTools\objects\__init__.py", line 162, in getXY
    else allY[rowId][:])
ValueError: could not broadcast input array from shape (19) into shape (0)

The value of colIndices is an empty list for this case.

[ENH] Fission Matrix Reader

Implement a reader that can be imported from serpentTools.parsers dedicated to reading fission matrix data.

[ENH] Convergence check with History Reader

It would be nice to implement a method/attribute on the HistoryReader that does some convergence checks. Look at the slope of various parameters, primarily Shannon entropy for fission source convergence, and return/store a value indicating if that attribute is well converged, i.e. slope below some threshold.
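A minimal sketch of one such check: fit a line to the cycle-wise Shannon entropy and call it converged when the slope is small. The threshold and function name are placeholders:

import numpy as np

def isConverged(entropy, threshold=1E-3):
    """Return True if the entropy trend is flat, i.e. |slope| below threshold."""
    cycles = np.arange(len(entropy))
    slope = np.polyfit(cycles, entropy, 1)[0]
    return abs(slope) < threshold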

Energy groups in detectors are stored from low-high

Summary of confusion

The detector grids, as presented in the serpent outputs, are given from lowest energy group to highest. This means, in our examples, when we plot the "fast" group data by fix={'energy': 0}, we are actually grabbing the lowest energy group.

When we go through the slicing routine, we are grabbing the "0th" position from the energy grid and first group presented in the output file. Since this goes against the standard group notation, with lower numbers indicating higher energy, we should either

  1. make this explicitly clear that energy groups are reversed, or
  2. transpose the tally, error, and score data along the axis corresponding to energy, likewise for the energy grids just to be consistent.

Code for reproducing the confusion

Work through the example

Versions

  • Version from serpentTools.__version__ 0.2.2

This isn't really an error, but a potential point of confusion

Feature: Branching file reader

Implement a reader that can be imported from serpentTools.parsers dedicated to reading a branching file.

Format

SERPENT Wiki

Requirements

  1. Store the multidimensional matrix of coefficients for various perturbed states
  2. Systematic method to extract data for a specific universe at a specific branch point

Optional

  1. Output the data with some template engine like Jinja2 or Mako

Prerequisites

Failing Jupyter build

Summary of issue

Travis build with python 2.7 failed to run the jupyter notebooks.

All future builds will appear to fail because of this issue. A fix would potentially use some virtual environment to ensure that the expected notebook kernel is available.

Code for reproducing the issue

See Travis build #93.3

Actual outcome including console output and error traceback if applicable

[NbConvertApp] Converting notebook examples/Branching.ipynb to html
[NbConvertApp] Executing notebook with kernel: python3
Traceback (most recent call last):
  File "/home/travis/virtualenv/python2.7.14/bin/jupyter-nbconvert", line 11, in <module>
    sys.exit(main())
...
# Some 100 lines of traceback
jupyter_client.kernelspec.NoSuchKernel: No such kernel named python3

Expected outcome

Jupyter notebooks don't fail

Versions

  • Version from serpentTools.__version__ 0.1.0

  • Python version - python --version 2.7.14

  • Jupyter version looks like 1.0.0, using notebook 5.2.2
