
enzo-dev's Introduction

ENZO

ENZO IS AN OPEN SOURCE CODE. We encourage you to take it, inspect it, use it, and contribute back any changes you have made. We strive to make the Enzo community a community of developers.

RESOURCES

Enzo's main webpage is:

Enzo is developed in the open on github.com:

Documentation, including instructions for compilation, can be found at:

Please subscribe to the Enzo Users' mailing list at:

If you are interested in Enzo development, you may want to sign up for the Enzo Developer's mailing list as well:

If you have received this source code through an archive, rather than the git version control system, we highly encourage you to upgrade to the version controlled source, as no support can be provided for archived ("tarball") sources.

REQUIREMENTS

Mandatory:

  • C/C++ and Fortran 90 compilers
  • MPI (such as OpenMPI, MPICH, or IntelMPI) for multi-processor parallel jobs
  • HDF5 (serial version) for data outputs

Optional:

  • yt for data analysis and visualization (highly recommended)
  • Grackle, a chemistry and radiative cooling library with support for Enzo
  • KROME, a chemistry and microphysics library with support for Enzo

DEVELOPERS

Many people have contributed to the development of Enzo -- here's just a short list of the people who have recently contributed, in alphabetical order:

aemerick, brittonsmith, bwoshea, cbrummelsmith, chummels, clairekope, cms21, dcollins4096, drenniks, drreynolds, galtay, gregbryan, gsiisg, ibutsky, jobordner, jsoishi, jwise77, kohdaegene, matthewturk, peeples, pgrete, pwang234, rpwagner, samskillman, stephenskory, suniverse, unitarymatrix, yipihey, yl2501, yusuke-fujimoto


enzo-dev's Issues

updating docs

Original report by chummels (Bitbucket: chummels, GitHub: chummels).


I was looking through the new 2.2 documentation, and it seems like there still remain some holdovers from past eras of enzo which no longer apply. Examples include:

  • references on the front page to James Bordner's continuous regression testing, which has now been replaced by the internal test suite in 2.2
  • updates to the enzo public license to 2012 (or 2013)
  • references to enzo.googlecode.com?
  • add references pointing readers to the most up-to-date version of the docs at enzo.readthedocs.org

Fortunately, with our new use of readthedocs.org, any time a modification occurs in the docs, the site will immediately be rebuilt and posted at enzo.readthedocs.org, ensuring people have access to the most current version of the documentation.

Examine the max(..., 0.5*geslice) behavior in euler.F

Original report by dcollins4096 (Bitbucket: dcollins4096, GitHub: dcollins4096).


At line 191 in euler.F, there's a somewhat disconcerting "max" statement. It is possible that this statement is causing other poor behaviors in the code. Possible things to test:

-- remove the line, see what the code does

-- Install write statements at the point to see if it ever gets actually triggered (my suspicion is that it won't, between the CFL and cooling time criteria on dt)

-- try reformulating this as a timestep criterion

All of @samskillman @gbryan @jwise77 have expressed interest in this one.

d.

Pressureless Collapse Does Not Finish

Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).


#!txt
Currently this hits the Cycle limit of 100,000.  dt goes to 0.0.

Early on I get this warning:
MPI_Init: NumberOfProcessors = 1
warning: the following parameter line was not interpreted:
SubcycleSafetyFactor   = 2       // 

Starts showing signs of failure on cycle 64:

TopGrid dt = 7.908641e-04     time = 0.10322590090186    cycle = 64
Level[0]: dt = 0.000790864  0.000790864 (0.000790864/0.000790864)
 eu1                    4                    1 -1.29180098360730866E-005  3.03920109296982320E-011  2.96298713606934521E-011 -8.19497254999048720E-003   3.0787524511602937       2.46618969282207936E-003 -1.31676979731730542E-005
 eu1                  103                    1 -1.29180098360658241E-005  3.03920109296985551E-011  2.96298713606937752E-011  8.19497254998794410E-003  -3.0787524511602271      -2.46618969282202602E-003 -1.31676979731664863E-005
EvolveLevel[0]: NumberOfSubCycles = 1 (65 total)
RebuildHierarchy: level = 0
CPUTime-output: Frac = 1.000000, Current = 0.0364301 (0.0364144), Stop = 2592000.000000, Last = 0.000526905
dt, Initialdt: 0.000784819 0 
TopGrid dt = 7.848190e-04     time = 0.10401676499578    cycle = 65

By cycle 10000, we are taking tiny timesteps:
TopGrid dt = 3.855508e-56     time = 0.34083068217033    cycle = 10000


Changes to the Enzo test suite

Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).


Please comment on this issue to suggest ways that we can improve the Enzo testing infrastructure (documented here). Some ideas are:

  • What test problems should we add to, or remove from, the quick, push, and full suites? (These are defined at the top of this page, but 'quick' is meant to run in a few minutes, 'push' is meant to do fairly comprehensive tests for pull requests, and 'full' is basically all test problems.)
  • What should be tested for pull requests but currently is not? (For example, testing with multiple compilation options.)
  • Are there any enhancements to the testing framework itself that would make it more useful or user-friendly?
  • Are there any enhancements to the test suite documentation that would make it more helpful?

Test suite failing with single precision

Original report by Forrest Glines (Bitbucket: forrestglines, GitHub: forrestglines).


When compiling with single precision, several tests from the test suite either fail their checks or fail to complete the simulations without errors; however, different tests fail on different machines. This is using the make configurations outlined for compiling for CUDA, but without actually running with CUDA,

i.e. with these for the make config

#!bash
make integers-32
make precision-32
make particles-32
make particle-id-32
make inits-32
make io-32

and with this in the makefile

#!bash
MACH_FFLAGS_INTEGER_32 =
MACH_FFLAGS_INTEGER_64 = -i8
MACH_FFLAGS_REAL_32 =
MACH_FFLAGS_REAL_64 = -r8

Dark Particle Splitting

Original report by John Regan (Bitbucket: john_regan, ).


Invoking dark-matter-only particle splitting causes a seg fault at line 355 of particle_splitter.F. This is likely because attributes are not correctly allocated before being passed to the Fortran routine. The bug is easily reproduced by setting ParticleSplitterIterations = 1 and restarting. This was discovered in a non-star-particle run.

CoolingTest_Grackle error

Original report by Daniel Reynolds (Bitbucket: drreynolds, GitHub: drreynolds).


When generating a local gold-standard of the "push" suite of test problems, the CoolingTest_Grackle test problem immediately fails due to a missing input file, metal_cool.dat. It looks like this file is missing from the enzo-dev repository, so if someone could add it in, I imagine that this test would pass.

That said, the configuration uses "grackle-no" by default, so I wonder whether this test should run in the first place?

As this was the only test that failed when generating the local standard, once this is fixed then I think everything should be fine.

Anyways, here's the estd.out file from running CoolingTest_Grackle:

$ cat enzo-gold/5d6653715fb6/Cooling/CoolingTest_Grackle/estd.out
MPI_Init: NumberOfProcessors = 1
warning: the following parameter line was not interpreted:
use_grackle = 1
warning: the following parameter line was not interpreted:
UVbackground = 0
InitializeRateData: NumberOfTemperatureBins = 600
InitializeRateData: RadiationFieldType = 0
****** ReadUnits: 4.906565e+31 1.670000e-24 3.085700e+18 3.155700e+11 *******
Caught fatal exception:

'Error opening metal cooling table metal_cool.dat
'
at ReadMetalCoolingRates.C:40

Backtrace:

BT symbol: ./enzo.exe() [0x40aaa3]
BT symbol: ./enzo.exe() [0x88510c]
BT symbol: ./enzo.exe() [0x7ed79e]
BT symbol: ./enzo.exe() [0x891fb9]
BT symbol: ./enzo.exe() [0x7e8599]
BT symbol: ./enzo.exe() [0x40a372]
BT symbol: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fc145438ec5]
BT symbol: ./enzo.exe() [0x409229]
*** Error in `./enzo.exe': free(): invalid pointer: 0x00000000068565c8 ***
[0]0:Return code = 0, signaled with Aborted

ProtostellarCollapse_Std Fails

Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).


#!txt
calc_dt returns NaNs. Probably has something to do with initialization:
warning: the following parameter line was not interpreted:
GravityBoundaryFaces      = 1 1 1    // isolating in all directions
warning: the following parameter line was not interpreted:
GravityBoundaryRestart     = 0       // read boundary restart if possible
warning: the following parameter line was not interpreted:
GravityBoundaryName       = potbdry  // default boundary restart file
****** ReadUnits:  1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 *******

DATA dump: ./DD0001/pc_amr_
WriteAllData: writing group file ./DD0001/pc_amr_0001.cpu0000
DATA dump: dumpdirname=(./DD0001) == unixresult=0
Continuation Flag = 1
 calc_dt                       NaN                       NaN                    4                    4                    4

Handling Sensitive Answer Tests

Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).


Many tests are currently very sensitive to compilers/optimizations. An incomplete list of these failing tests includes: AdiabaticExpansion, CollideTest, ProtostellarCollapse_Std, PhotonTestAMR.

We should figure out a way to handle different sensitivities. Perhaps a flag during testing such as: --rtol=1.0e-7, much like what is used in the nose testing framework.
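A per-test relative tolerance could be implemented along these lines (a minimal Python sketch; the function name and interface are hypothetical, not part of the Enzo test runner):

```python
import math

def fields_match(test_values, gold_values, rtol=1.0e-7):
    """Compare a test output field against the gold standard.

    Passes only if every pair of values agrees to within the
    relative tolerance, mimicking a --rtol command-line flag.
    """
    return all(math.isclose(t, g, rel_tol=rtol)
               for t, g in zip(test_values, gold_values))
```

Sensitive tests could then declare a looser rtol in their test definitions instead of failing outright on every compiler/optimization combination.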

compilation failure using 32 precision

Original report by Daegene Koh (Bitbucket: dkoh, GitHub: dkoh).


In particular, the error comes from Grid_RotatingDiskInitializeGrid.C.

The prototype for RotatingDiskInitializeGrid() declares some parameters as FLOAT, while in Grid.h the same parameters are float.

I'm not sure what the intended types are.

Enzo documentation updates/additions

Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).


This issue is a place to identify documentation for enzo-dev (Enzo 2.x) that could be added or improved, or that is inaccurate. This includes docs for parameters, physics, setting up and using the code, and the test suite. Please make suggestions below, and if you have a particular page in mind please include a link to the appropriate web page.

The Enzo documentation can be found online at https://enzo.readthedocs.io/en/latest/ .

RadiativeTransferLoadBalance Crash without Photons/Sources

Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).


RT simulations can crash when RadiativeTransferLoadBalance is ON and there are either no sources or no photons (not entirely sure which one is the cause). This is likely an easy(ish) fix: just add a check that any sources are present before doing any load balancing.

Plans are to address this at the Enzo workshop in 2017.

GPL violation in use of mercurial

Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).


Currently Enzo's build system directly imports mercurial:

https://bitbucket.org/enzo/enzo-dev/src/240af05dd312d4d34a13cd6544f0ef63efbfdf77/src/enzo/create_config_info.py?at=week-of-code&fileviewer=file-view-default#create_config_info.py-19

Since Enzo is BSD-licensed, this is not allowed because directly importing mercurial implies that the python code that imports it must be GPL licensed. See https://www.mercurial-scm.org/wiki/MercurialApi for more details.

Instead, we should be talking to mercurial over the python-hglib command server. This will also allow us to support python installations based on python3, since python-hglib is available under python3.

Removing unused arguments in SF routines

Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).


I just noticed that many of the star formation routines are passed the cooling time but never do anything with it. This is a pretty minor thing, but in light of all the PR activity and the upcoming workshop, I felt motivated to suggest cleaning this up a bit if it is worthwhile.

I'm happy to do this and submit the PR, just wanted to know if the PR would be accepted before starting. If I do this, I'll likely clean other unused parameters in the SF routines if I spot them.

1D AMR problems seg fault on some platforms

Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).


Currently 1D AMR problems crash on some platforms (OS X seems to be particularly affected):

The InteractingBlastWaves problem crashes very quickly with the following traceback:

#0  0x00007fff94c72866 in __pthread_kill ()
#1  0x00007fff9408535c in pthread_kill ()
#2  0x00007fff92f3ab1a in abort ()
#3  0x00007fff95336690 in szone_error ()
#4  0x00007fff9533819c in tiny_free_list_remove_ptr ()
#5  0x00007fff95334127 in szone_free_definite_size ()
#6  0x000000010038c657 in ProtoSubgrid::ShrinkToMinimumSize (this=0x105a0cd30) at ProtoSubgrid_ShrinkToMinimumSize.C:101
#7  0x000000010031ca3d in IdentifyNewSubgridsBySignature (SubgridList=0x10128ab10, NumberOfSubgrids=@0x7fff5f5e2d58) at IdentifyNewSubgridsBySignature.C:52
#8  0x00000001000e2884 in FindSubgrids (Grid=0x104f6dc50, level=1, TotalFlaggedCells=@0x7fff5fbfdd38, FlaggedGrids=@0x7fff5fbfdd30) at FindSubgrids.C:126
#9  0x00000001003b316c in RebuildHierarchy (MetaData=0x7fff5fbff388, LevelArray=0x7fff5fbfe410, level=0) at RebuildHierarchy.C:397
#10 0x00000001000b49f5 in EvolveHierarchy (TopGrid=@0x7fff5fbff368, MetaData=@0x7fff5fbff388, Exterior=0x7fff5fbfe5a0, ImplicitSolver=0x522f6412, LevelArray=0x7fff5fbfe410, Initialdt=0) at EvolveHierarchy.C:282
#11 0x0000000100002257 in main (argc=3, argv=0x7fff5fbff7c8) at enzo.C:753

The specific line it crashes on is where the GridFlaggingField is deleted during subgrid construction.

I also see crashes for ShockInABox (traceback) and all of the AMR Toro tests except for Toro3 and Toro5 (traceback for Toro1 AMR).

Initialdt Documentation & Usage

Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).


The current documentation claims that this value is the Initialdt of the current timestep. In fact, this parameter is the Initialdt the simulation should use when starting/restarting; it is immediately reset to 0 (which is used for logic). This should be made clearer.

Should the initial top grid timestep for the current/last timestep be saved during output?

Missing Parameters in the Enzo Parameter List

Original report by Danielle Skinner (Bitbucket: drenniks, GitHub: drenniks).


I've noticed there are many parameters that are either missing or have incomplete descriptions. I've compiled a list of parameters from a simulation that I have been working with that are not in the parameter list. This may not include all parameters on the webpage that don't have descriptions.

I think it would be useful to get these updated. What I am asking is for people to take a look at the Google Doc file at the end of this description, and add a description to whatever parameters they can. Once all the parameters are finished, I will submit a pull request to update the documentation. That way all the parameter updates will be contained in a single pull request.

After each parameter, I put a small description about the parameters status in the parameter list.
Here is a link to the google doc: https://docs.google.com/document/d/1sbv_67BV_koOsldsjpx1oycpGvZTFB2ZjdpE9_LcKvo/edit?usp=sharing

Consistent definition of physical constants

Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).


Hi everyone,

One thing that has always bothered me is the separate definitions of the solar mass throughout the code. It is 1.989e33 in many places, but is defined in physical_constants.h as SolarMass = 1.9891e33.

I would assume there may be similar inconsistencies with other physical constants. Should we push to have everything uniformly defined as in physical_constants.h? If so, I can go through the code and replace all locally defined physical constants with the constants defined in physical_constants.h.

I wouldn't be surprised if this leads to large enough changes to answers to fail the test-suite.
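For scale, a quick check of the two values quoted above shows the discrepancy is small but not negligible at answer-test precision:

```python
# Two solar-mass values (in grams) found in the Enzo source:
local_value = 1.989e33      # hard-coded in many places
header_value = 1.9891e33    # SolarMass in physical_constants.h

# Relative difference is about 5e-5, which can exceed the
# tolerances typically used by the answer tests.
rel_diff = abs(local_value - header_value) / header_value
```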

Improvement of MustRefineParticle Methods

Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).


The goal is to improve the must-refine particle methods to allow users to more easily flag particles as must-refine for a variety of non-trivial conditionals. Currently, must-refine particles are determined by membership in a list of particle types, or by particle mass, with the conditionals as to whether a given particle is a must-refine particle handled in a Fortran routine.

This improvement would be to move the conditionals entirely to the C function that calls the Fortran routine. In the C function, we would generate a flagging array that on-the-fly flags particles as must refine, passing this flagging array only to the Fortran routine (rather than both particle mass and type arrays). This will allow for more complex conditionals. For example, making a star particle a must refine particle, but only at the end of its life when it injects feedback.

Moving the public face of enzo entirely to enzo-project.org (and bb)

Original report by chummels (Bitbucket: chummels, GitHub: chummels).


I think it can be confusing for new (and old) users to find various locations for sometimes disparate information about a code like enzo. We've done a pretty good job of removing references to the LCA page and all of its old versions of enzo. But now we have http://enzo-project.org and enzo.googlecode.com, which are two separate places that we need to keep up to date (and are currently not in sync with each other). What further complicates issues is that there are virtually no references that I can find about us using bitbucket (only 1 in the dev section), even though it is the main avenue by which we all interact with the code.

It seems to me the only reason we keep up the enzo.googlecode.com website is to provide a location for the "stable" versions of the code to be downloaded (e.g. 2.0, 2.1, 2.2, etc.). It also seems like enzo.googlecode.com was preferred before we had built up the enzo-project.org website (which IMO is much nicer), but now there is so much crosstalk between the two that it seems very confusing (and difficult to keep everything up to date if we must update the docs, and then two websites with relevant information every time we modify something).

So what I'm asking is, can we migrate everything to just sit on the enzo-project.org website (boot camp, tarballs, content); delete the enzo.googlecode.com website; remove all references to enzo.googlecode.com; and continue to do everything through enzo-project.org with a short rope to bitbucket for those who want to get the code?

I may have missed some significant reasons for keeping the googlecode website, so please correct me if I'm wrong, but I think my proposition would streamline our public face a lot for new users.

Recent versions of NumPY break performance_tools.py

Original report by Duncan Christie (Bitbucket: dachrist, GitHub: dachrist).


Updating NumPy to recent versions -- I tried 1.15.3, but not 1.16.0, which was released a few days ago -- seems to break performance_tools.py. It works without errors with 1.11.3, which I had previously been using.

The specific error returned is:

#!text
Traceback (most recent call last):
  File "/home/dachristie/ENZO-Adding-AD/enzo-dev-adding-ad/src/performance_tools/performance_tools.py", line 1014, in
    p = perform(filename)
  File "/home/dachristie/ENZO-Adding-AD/enzo-dev-adding-ad/src/performance_tools/performance_tools.py", line 187, in init
    self.data = self.build_struct(filename)
  File "/home/dachristie/ENZO-Adding-AD/enzo-dev-adding-ad/src/performance_tools/performance_tools.py", line 276, in build_struct
    data[line_key][i] = line_value
ValueError: setting an array element with a sequence.
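That error typically means a multi-element sequence was assigned into a single array slot. A minimal reproduction, unrelated to the actual performance log being parsed (the values here are made up), is:

```python
import numpy as np

data = np.zeros(5)
try:
    # line_value stands in for whatever build_struct parsed out of the log;
    # assigning a sequence where a scalar is expected raises ValueError.
    data[0] = (1.0, 2.0)
except ValueError as err:
    print(err)  # "setting an array element with a sequence..."
```

Newer NumPy versions are stricter about such assignments, which would explain why 1.11.3 tolerated the parsed input while 1.15.3 does not.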

Unused Parameters Should be Removed

Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).


I've seen a few parameters that are only read in and written out. These should be removed.

Here is a running list (Please edit as found):

#!text
GreensFunctionMaxNumber
GreensFunctionMaxSize

And here is a list of all global_data.h parameter variables appearing in fewer than 4 files (lots of false positives for unused, but maybe still a useful list),
found using http://paste.yt-project.org/show/3338/:

#!text

PreviousMaxTask
debug2   
CurrentProblemType
TimestepSafetyVelocity
DimUnits 
DimLabels
BaryonSelfGravityApproximation
S2ParticleSize
GreensFunctionMaxNumber
GreensFunctionMaxSize
GloverRadiationBackground
GloverOpticalDepth
EvolveRefineRegionNtimes
EvolveRefineRegionTime
EvolveRefineRegionLeftEdge
EvolveRefineRegionRightEdge
StaticPartitionNestedGrids
First_Pass
DepositPositionsParticleSmoothRadius
ExternalBoundaryField
NodeMem  
NodeMap  
PrevParameterFileName
WaitComm 
filePtr  
tracename
Start_Wall_Time
End_Wall_Time
flagging_count
in_count 
out_count
moving_count
flagging_pct
moving_pct
memtracePtr
traceMEM 
memtracename
StarParticlesOnProcOnLvl_Position
StarParticlesOnProcOnLvl_Velocity
StarParticlesOnProcOnLvl_Mass
StarParticlesOnProcOnLvl_Attr
StarParticlesOnProcOnLvl_Type
StarParticlesOnProcOnLvl_Number
RKOrder
SmallEint
CoolingCutOffDensity1
CoolingCutOffDensity2
CoolingPowerCutOffDensity1
CoolingPowerCutOffDensity2
CoolingCutOffTemperature
HaloMass
HaloConcentration
HaloRedshift
HaloCentralDensity
HaloVirialRadius
ExternalGravityConstant
ExternalGravityPosition
ExternalGravityOrientation
ShiningParticleID
TotalSinkMass
NBodyDirectSummation
StageInput
LocalPath
GlobalPath
yt_parameter_file
conversion_factors
my_processor
pix2x
pix2y
x2pix
y2pix
PhotonMemoryPool
TotalEscapedPhotonCount
PhotonEscapeFilename
IsothermalSoundSpeed
RefineByJeansLengthUnits
MBHParticleIOTemp
OutputWhenJetsHaveNotEjected
current_error
ClusterSMBHAccretionEpsilon
ExtraOutputs

I think it would be good to compile a list and do it all in one go.

Mismatched arguments in FORTRAN call to star_feedback_ssn?

Original report by Greg Bryan (Bitbucket: gbryan, GitHub: gbryan).


It looks like the last three arguments in the call to star_feedback_ssn are missing in Grid_StarParticleHandler:

Grid_StarParticleHandler.C has:

#!c++
extern "C" void FORTRAN_NAME(star_feedback_ssn)(
    int *nx, int *ny, int *nz,
    float *d, float *dm, float *te, float *ge, float *u, float *v,
    float *w, float *metal,
    int *idual, int *imetal, hydro_method *imethod, float *dt,
    float *r, float *dx, FLOAT *t, float *z,
    float *d1, float *x1, float *v1, float *t1,
    float *sn_param, float *m_eject, float *yield,
    int *nmax, FLOAT *xstart, FLOAT *ystart, FLOAT *zstart,
    int *ibuff, int *level,
    FLOAT *xp, FLOAT *yp, FLOAT *zp, float *up, float *vp, float *wp,
    float *mp, float *tdp, float *tcp, float *metalf, int *type,
    int *explosionFlag,
    float *smthresh, int *willExplode, float *soonestExplosion,
    float *gamma, float *mu,
    float *te1, float *metalIIfield, float *metalIIfrac, int *imetalII,
    float *s49_tot, int *maxlevel);

while star_maker_ssn.F has:

#!FORTRAN

      subroutine star_feedback_ssn(nx, ny, nz,
     &                      d, dm, te, ge, u, v, w, metal,
     &                      idual, imetal, imethod, dt, r, dx, t, z,
     &                      d1, x1, v1, t1, sn_param, retfr, yield,
     &                      npart, xstart, ystart, zstart, ibuff, level,
     &                      xp, yp, zp, up, vp, wp,
     &                      mp, tdp, tcp, metalf, type,
     &                      explosionFlag,smthresh,
     &                      willExplode, soonestExplosion, gam, mu,
     &                      te1, metalSNII,
     &                      metalfSNII, imetalSNII,
     &                      s49_tot, maxlevel,
     &                      distrad, diststep, distcells)


I think the last three arguments (distrad, diststep, distcells) are just missing (but I'm not sure if other things are missing too). I think Nathan used and tested this, so I'm guessing the error snuck in during the merge...

Cleanup/Streamline CUDA build config

Original report by Philipp Grete (Bitbucket: pgrete, GitHub: pgrete).


The CUDA build variables are slightly off. For example, in Make.config.assemble:836, ASSEMBLE_CUDA_INCLUDES = $(MACH_LIBS_INCLUDES) references MACH_LIBS_INCLUDES, which is never used; it should probably read MACH_INCLUDES_CUDA, as in other parts of the build machinery.

More descriptive test_results.txt

Original report by chummels (Bitbucket: chummels, GitHub: chummels).


Right now there is only a small amount of useful information dropped into the test_results.txt file after the completion of a test suite run. There is far more useful information that drops to STDOUT during runtime. It would be beneficial to clean up test_results.txt and provide more information for debugging possible failures / errors.

Compile-time option documentation improvement

Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).


The documentation on compile-time options needs improvement. The existing descriptions could be more precise. In particular, parameters that sound important but may or may not do anything (like max-tasks-per-node-N) should be defined better.

I think three general things could be improved:

  1. Clearer descriptions on all (or mostly all) parameters

  2. Denoting which parameters really should never be moved from the default, and why (and in what situation you may want to do this)

  3. Some may not be used, or changing them from the default may break everything. The long-term fix would be to remove these compile-time options and associated code; the short-term fix is to mark them as "do not touch" or "does nothing".

I'm happy to make the changes myself and issue a PR as long as people include updated descriptions on parameters here. I can collate and update.

Consider Omega_radiation in cosmology

Original report by John Wise (Bitbucket: jwise77, GitHub: jwise77).


At very high redshifts, the radiation energy density will have some cumulative effects. There have been some off-line requests for this feature, so I'm making an issue. I don't believe it should be too hard to add, because only a few files (CosmologyCompute*) would have to be modified.
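For reference, radiation enters the Friedmann equation as an extra a^-4 term alongside the contributions Enzo already tracks (this is standard cosmology; the variable names in the CosmologyCompute* routines may differ):

#!text
H(a)^2 / H0^2 = Omega_m a^-3  +  Omega_r a^-4  +  Omega_k a^-2  +  Omega_Lambda

Because of the steeper a^-4 scaling, the term only matters at very high redshift, consistent with the motivation above.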

MaximumGravityRefinement and FastSiblingLocator causes inconsistency

Original report by dcollins4096 (Bitbucket: dcollins4096, GitHub: dcollins4096).


The MaximumGravityRefinementLevel will cause incorrect results due to the fact that the SiblingList is not repopulated. Please stop using MaximumGravityRefinement until this is resolved.

In PrepareDensityField, line 116:

level = min(level, MaximumGravityRefinementLevel);

and then the grid array is set from that level (which in my case was level=1, while I had MaximumRefinementLevel = 2).

It then calls

PrepareGravitatingMassField2a(Grids[grid1], grid1, SiblingList,
MetaData, level, When);

where everything except SiblingList references level=1, but SiblingList was generated from level=2. That list is then passed into PrepareGravitatingMassField2a, which does some particle overlap work, namely calling CheckForOverlap on the GridList (on level 1) and things in the SiblingList (from level=2).

Possible solutions:
-- Storing the SiblingList of MaximumGravityRefinementLevel (as a global? Passing it through the recursive call to Evolve Level)

-- Recomputing the SiblingList as it is needed

This will need testing. In the meantime, please do not use MaximumGravityRefinement; it will lead to incorrect results.

SubgridSizeAutoAdjust creates inefficient hierarchies when initializing an AMR problem

Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).


This issue is triggered when DetermineSubgridSizeExtrema is called during the very first RebuildHierarchy. Since the hierarchy doesn't exist yet, the NumberOfCells array is zero for all levels but level 0. This causes MaximumSubgridSize and MinimumSubgridEdge to be floored to the smallest allowed values.

For most problems, this will create inefficient AMR hierarchies dominated by small grids with large surface-area-to-volume ratios. Since SubgridSizeAdjust is turned on by default, this means new users will tend to be bitten by this issue, as they are more likely to be running test problems rather than cosmology simulations, which have static initial hierarchies and do not have this issue.

My workaround is simply to turn off SubgridSizeAdjust during initialization.

I could see two ways to fix this, one would be to alter DetermineSubgridSizeExtrema to respect the MinimumSubgridEdge and MaximumSubgridSize parameters supplied by the user in their parameter file rather than overwriting them.

Another would be to patch RebuildHierarchy so that DetermineSubgridSizeExtrema is never called during initialization. This would still create tiny grids on a new AMR level the first time the code reaches it, so one would also have to patch the call to DetermineSubgridSizeExtrema to pass in (for example) NumberOfCells[i] when NumberOfCells[i+1] is zero.
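The fallback in the second fix amounts to the following (a hypothetical sketch; the real logic lives in C++ around the DetermineSubgridSizeExtrema call):

```python
def effective_cell_count(number_of_cells, level):
    """Cell count to feed DetermineSubgridSizeExtrema for level + 1.

    Falls back to the parent level's count when the finer level
    has no grids yet (e.g. during the first RebuildHierarchy).
    """
    n = number_of_cells[level + 1]
    return n if n > 0 else number_of_cells[level]
```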

Build/answer test needs

Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).


When we update our testing infrastructure, we need to include:

  1. A test to see if the docs are built correctly ("make html" in the manual directory)
  2. Tests covering various combinations of compilation options (i.e., 32-bit and 64-bit ints and floats)
