
nsuite's People

Contributors

akuesters, bcumming, halfflat, noraabiakar, thorstenhater


nsuite's Issues

Better arbor/nsuite integration.

The goal is to replace the deprecated validation tests that are currently in the Arbor tree with NSuite validation tests applied to an Arbor build.

In order for this to work we need to be able to:

  • Pass information about an existing Arbor build (as opposed to install) to NSuite's install script.
  • Adapt the build-arbor script so that it can use an explicit path to Arbor libraries and include directories, possibly by mocking an arbor-config.cmake module or providing a FindArbor.cmake (a sketch follows this list).
  • Make it easy to rebuild/relink code in nsuite against an externally built library.
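
A minimal sketch of what the configure step could look like, assuming a FindArbor.cmake that consumes explicit hint variables (all paths, variable names, and flags below are hypothetical):

# Point the benchmark/validation CMake configure step at an existing Arbor
# build tree rather than an installed package.
arb_src="$HOME/src/arbor"            # assumed Arbor source checkout
arb_build="$arb_src/build"           # assumed Arbor build tree

cmake "$source_dir" \
    -DCMAKE_MODULE_PATH="$ns_base_path/cmake" \
    -DARB_INCLUDE_DIR="$arb_src/include" \
    -DARB_LIBRARY_DIR="$arb_build/lib"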

Reduce duplication of functionality in validation test run scripts.

Currently each script is responsible for determining the correct output directory, reporting missing or erroneous tests, capturing or redirecting stdout and stderr from implementations, writing a pass/fail status file, and reporting/pretty-printing the status to stdout.

Much of this functionality could be lifted to the run-validation.sh script, reducing redundancy and lowering the burden of test implementation:

  • Determining (and creating as required) the output directory.
  • Saving script stdout/stderr to a standard location.
  • Saving test status to file.
  • Reporting test status to stdout.

Status from the run script can be determined by its exit value, e.g. (see the sketch after the list):

  • 0 => success
  • 1 => failure
  • 2 => execution error
  • 3 => missing implementation
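
A minimal sketch of the wrapper logic in run-validation.sh under this convention (the status strings, file names, and variable names are assumptions):

# Run the per-model script, then translate its exit value into a status file
# and a one-line report on stdout.
"$model_dir/run" "$param_set" > "$out_dir/run.out" 2> "$out_dir/run.err"
case $? in
    0) status=pass ;;
    1) status=fail ;;
    2) status=error ;;
    3) status=missing ;;
    *) status=unknown ;;
esac
echo "$status" > "$out_dir/status"
printf '[%s] %s %s/%s\n' "$(echo "$status" | tr a-z A-Z)" "$sim" "$model" "$param_set"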

Arbor fails to compile with Intel compiler

Current Arbor master compiles fine with GCC 8.3.0, but using ICC 19.0.3.199, I get the following error:

[ 52%] Building CXX object arbor/CMakeFiles/arbor.dir/tree.cpp.o
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite-dam-gpu/build/arbor/arbor/mechcat.cpp(458): error: call of an object of a class type without appropriate operator() or conversion functions to pointer-to-function type
                  self(self, p->parent, over);
                  ^
          detected during instantiation of function "lambda [](auto &, const std::__cxx11::string &, arb::mechanism_overrides &)->void [with <auto-1>=lambda [](auto &, const std::__cxx11::string &, arb::mechanism_overrides &)->void]" at line 473

I am building via the nsuite install-local.sh script.

Unfortunately, on the system I am working on, netCDF is only available with the Intel compiler and ParastationMPI, so I need to use the Intel compiler.

Fix systems/daint-gpu.sh for validation

The validation tests don't configure correctly on Daint. Looks like a Python module issue.

> ./install-local.sh arbor -e systems/daint-gpu.sh
...
==  ARBOR: saving environment
Configuring arbor-rc-expsyn.
Error: refer to log file '/users/bcumming/nsuite/build/validation/arbor-rc-expsyn/config.log'

==  Installation finished

bcumming@daint101:nsuite > cat /users/bcumming/nsuite/build/validation/arbor-rc-expsyn/config.log
-- The CXX compiler identification is GNU 6.2.0
-- Cray Programming Environment 2.5.15 CXX
-- Check for working CXX compiler: /opt/cray/pe/craype/2.5.15/bin/CC
-- Check for working CXX compiler: /opt/cray/pe/craype/2.5.15/bin/CC -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.28") 
-- Checking for modules 'GLOBAL;netcdf'
--   No package 'GLOBAL' found
--   No package 'netcdf' found
CMake Error at /apps/daint/UES/jenkins/6.0.UP07/gpu/easybuild/software/CMake/3.12.0/share/cmake-3.12/Modules/FindPkgConfig.cmake:436 (message):
  A required package was not found
Call Stack (most recent call first):
  /apps/daint/UES/jenkins/6.0.UP07/gpu/easybuild/software/CMake/3.12.0/share/cmake-3.12/Modules/FindPkgConfig.cmake:602 (_pkg_check_modules_internal)
  CMakeLists.txt:5 (pkg_check_modules)


-- Configuring incomplete, errors occurred!

Test

For new Slack integration.

Remove large file from git history.

I accidentally put a binary file in the repository, which was updated every time we recompiled Arbor. This has bloated the repository over time.

To remove it, I need to rewrite Git history, so I would like permission from @halfflat before I proceed. Everyone will have to update their local branches, making sure they rebase onto the rewritten history.

Steps to fix the problem

git clone https://github.com/arbor-sim/nsuite.git nsfix
cd nsfix

# remove all mentions of the file from git history
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch benchmarks/engines/busyring/arbor/run' --prune-empty --tag-name-filter cat -- --all

# remove the backup refs left by filter-branch and expire the reflogs so the objects can be pruned
git for-each-ref --format='delete %(refname)' refs/original | git update-ref --stdin
git reflog expire --expire=now --all
git gc --prune=now

# uncomment the following to force-push the rewritten history to GitHub
# git push origin --force --all

No `date -I` on Mac OS X

We can't use date -Isec to generate ISO 8601 timestamps on Mac OS X.

Proposed work-around:

  • Use -u to produce UTC.
  • Use a format string that matches ISO 8601 output (with a Z suffix for UTC); see the sketch below.
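
A minimal sketch of the portable form, which works with both BSD/macOS date and GNU date:

# ISO 8601 timestamp in UTC without relying on the -I option.
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")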

Add CoreNEURON-capable validation test.

As we can only get spike times out of CoreNEURON, make a validation test that compares spike times.

Simplest would probably be to look at time to spike of a soma with exp2syn synapse activated by an artificially inserted spike. To extend it slightly, we could have two neurons and a connection; one is triggered artificially, which then triggers the next via the connection.

Separate simulator outputs in benchmarks

Put the output of benchmarks (meters, stdout, etc.) from each simulator into separate paths, instead of the single shared path they currently use.

For example, the output for a single model and configuration would then go into per-simulator paths like:

output/ring/small/arbor/
output/ring/small/neuron/
output/ring/small/coreneuron/

Incorporate validation test building into `install-local.sh`

We shouldn't need a separate build step for the validation test code, so the build_validation_models.sh script should be invoked from within install-local.sh.

Should it always be run, independent of the simulator list passed to the script? While currently the only validation test code that needs to be built is an Arbor test implementation, in principle generic analysis or reference data generation code could also be built.
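
A minimal sketch of the hook, run unconditionally near the end of install-local.sh (the script location and sourcing mechanism are assumptions):

# Always build the validation models, independent of the simulator list.
source "$ns_base_path/scripts/build_validation_models.sh"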

Configured environments may need LD_LIBRARY_PATH set.

I encountered a runtime error when using the config/env_neuron.sh environment to run a NEURON python script within nsuite: the neuron python module could not find a symbol in a libnrnpython.so.0 that came from a local install of an earlier version of NEURON.

The environment scripts don't set LD_LIBRARY_PATH, but I think they may have to add .../install/lib and .../install/lib64 to the path so that e.g. NEURON finds the right versions of its libraries.
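
A minimal sketch of what the environment scripts could prepend (ns_install_path is an assumed variable, named after the other ns_-prefixed variables):

# Make sure the libraries installed by nsuite shadow any system or user
# installs of the same name.
export LD_LIBRARY_PATH="$ns_install_path/lib:$ns_install_path/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"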

CoreNEURON cell grouping

In the current implementation of CoreNEURON support, benchmarks are run with a suboptimal grouping of cells for GPU execution. This is partly due to the sharing of model set-up between NEURON and CoreNEURON.

The performance issue can be addressed by making one large group of cells for CoreNEURON. This may require more fundamental changes to the way the benchmark input data is generated for CoreNEURON.

Splash page for docs

A short splash page that:

  • states the aims and uses of nsuite
  • acknowledges HBP funding
  • links through to the installation, benchmark, and validation sections

Consistent benchmark output

All benchmark results should be stored in consistent JSON file format for all simulation engines, for post processing by users of the framework.

Arbor and NEURON do this already.
However, CoreNEURON... well, CoreNEURON.

The simplest approach is to extract the relevant numbers with grep and awk (see the sketch below). We might even use something simpler than JSON.
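
A minimal sketch of the grep/awk approach (the patterns, field positions, and output keys are assumptions and would need to match whatever CoreNEURON actually prints):

# Pull model-init/model-run times out of the captured stdout and emit a
# minimal JSON record.
init=$(awk '/model-init/ {print $2}' "$out_dir/stdout.txt")
run=$(awk '/model-run/ {print $2}' "$out_dir/stdout.txt")
printf '{ "model-init": %s, "model-run": %s }\n' "$init" "$run" > "$out_dir/meters.json"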

Add documentation for validation component

Need docs for:

  • How to build and run the tests.
  • Directory layout and file name conventions.
  • How to add a new validation model.
  • Utility shell functions.
  • NetCDF analysis tools.

Make output path user configurable.

The <top-directory>/output directory is currently hard-coded into the scripts.

Proposal:

  • Allow an environment variable or command line script option to override this path.
  • Allow the components of the subdirectory paths to be customized via some sort of format string.

Example set up:

  • NS_OUTPUT_DIR points to the top level output directory (default <top-directory>/output)
  • NS_VALIDATION_PATH_FORMAT and NS_BENCHMARK_OUTPUT_PATH_FORMAT override where the respective output data is written, using a template syntax. For example, %T/%S-%M-%P might indicate that the validation output for the Arbor rc-expsyn model with the default parameter set be written to the subdirectory <iso-8601-timestamp>/arbor-rc-expsyn-default (see the sketch below).
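
A minimal sketch of the template expansion in the run scripts (the substitution codes follow the example above; everything else is an assumption):

# Expand %T (timestamp), %S (simulator), %M (model), %P (parameter set).
fmt="${NS_VALIDATION_PATH_FORMAT:-%S/%M/%P}"
path="$fmt"
path="${path//%T/$timestamp}"
path="${path//%S/$sim}"
path="${path//%M/$model}"
path="${path//%P/$param_set}"
out_dir="${NS_OUTPUT_DIR:-$ns_base_path/output}/$path"
mkdir -p "$out_dir"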

Fix various typos

Found various typos during usage.

---
 docs/benchmarks.rst    | 2 +-
 docs/validation.rst    | 2 +-
 scripts/build_arbor.sh | 2 +-
 scripts/environment.sh | 6 +++---
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/benchmarks.rst b/docs/benchmarks.rst
index e6ddbcb..9f73fa7 100644
--- a/docs/benchmarks.rst
+++ b/docs/benchmarks.rst
@@ -50,5 +50,5 @@ Arbor has a standardised way of measuring and reporting metrics using what it ca
 NSuite provides a Python module in ``common/python/metering.py`` that offers the
 same functionality in Python, which can be used for the NEURON benchmarks.
 
-With this standard output format, the ``scrpts/csv_bench.sh`` script can be used to automatically generate the CSV output.
+With this standard output format, the ``scripts/csv_bench.sh`` script can be used to automatically generate the CSV output.
 
diff --git a/docs/validation.rst b/docs/validation.rst
index 74eb06e..ec0e748 100644
--- a/docs/validation.rst
+++ b/docs/validation.rst
@@ -18,7 +18,7 @@ Validation models are set up in the NSuite source tree according to a specific
 layout.
 
 Data and scripts required to run a particular validation model *MODEL* will all
-be found under in the ``validation/MODEL`` directory. At minimum, there must be
+be found under the ``validation/MODEL`` directory. At minimum, there must be
 an executable run script called ``run`` (see below) and a default parameter
 set ``default.param``. Any additional parameter sets must have a ``.param``
 suffix.
diff --git a/scripts/build_arbor.sh b/scripts/build_arbor.sh
index 1ebec2b..3009f86 100644
--- a/scripts/build_arbor.sh
+++ b/scripts/build_arbor.sh
@@ -7,7 +7,7 @@ arb_checked_flag="${arb_repo_path}/checked_out"
 out="$ns_build_path/log_arbor"
 rm -f "$out"
 
-# aquire the code if it has not already been downloaded
+# acquire the code if it has not already been downloaded
 if [ ! -f "$arb_checked_flag" ]; then
     rm -rf "$arb_repo_path"
 
diff --git a/scripts/environment.sh b/scripts/environment.sh
index a48548c..a7525ee 100644
--- a/scripts/environment.sh
+++ b/scripts/environment.sh
@@ -19,7 +19,7 @@ set_working_paths() {
     export ns_pyvenv_path="$ns_build_path/pyvenv"
 }
 
-# Sets up the default enviroment.
+# Sets up the default environment.
 # Variables defined here use the prefix ns_
 default_environment() {
     set_working_paths
@@ -28,7 +28,7 @@ default_environment() {
     case "$OSTYPE" in
       linux*)   ns_system=linux ;;
       darwin*)  ns_system=apple ;;
-      *)        err "unsuported OS: $OSTYPE"; exit 1 ;;
+      *)        err "unsupported OS: $OSTYPE"; exit 1 ;;
     esac
 
     # Choose compiler based on OS
@@ -104,7 +104,7 @@ default_environment() {
     ns_cnrn_compiler_flags=-O2
 }
 
-# Attempts to detect harware resouces available on node
+# Attempts to detect hardware resouces available on node
 # These default values are probably acceptable for laptop and desktop systems.
 # For detailed benchmarking, these defaults can be overridden.
 default_hardware() {
-- 
2.17.1

Building nsuite fails

I just tried to build nsuite; it failed with the following message in log_arbor. Building Arbor on its own, including examples and unit tests, works fine.

-- Build files have been written to: /p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite-cn-gcc/build/busyring_arbor
Scanning dependencies of target ring
[ 50%] Building CXX object CMakeFiles/ring.dir/ring.cpp.o
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/benchmarks/engines/busyring/arbor/ring.cpp: In member function 'virtual arb::probe_info ring_recipe::get_probe(arb::cell_member_type) const':
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/benchmarks/engines/busyring/arbor/ring.cpp:133:14: error: 'segment_location' is not a member of 'arb'
         arb::segment_location loc(0, 0.0);
              ^~~~~~~~~~~~~~~~
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/benchmarks/engines/busyring/arbor/ring.cpp:133:14: note: suggested alternative: 'segment_ptr'
         arb::segment_location loc(0, 0.0);
              ^~~~~~~~~~~~~~~~
              segment_ptr
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/benchmarks/engines/busyring/arbor/ring.cpp:135:61: error: 'loc' was not declared in this scope
         return arb::probe_info{id, kind, cell_probe_address{loc, kind}};
                                                             ^~~
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/benchmarks/engines/busyring/arbor/ring.cpp:135:61: note: suggested alternative: 'log'
         return arb::probe_info{id, kind, cell_probe_address{loc, kind}};
                                                             ^~~
                                                             log

Consistency tweaks for validation component

For consistency with benchmark component:

  • Pretty-print build and configure notifications in build_validation_models.sh to match benchmark build output.
  • Move validation artefacts to output/validation/....

Python virtualenv xarray build may only build netcdf3 support

On Artix (an Arch Linux derivative), the virtual environment set-up currently builds xarray with only netCDF3 support; attempting to read a netCDF4 file gives a helpful error message:

If this is a NetCDF4 file, you may need to install the
netcdf4 library, e.g.,

        $ pip install netcdf4

Standardise validation messages in `install-local.sh`

The installation of Arbor using install-local.sh outputs:

==  ARBOR: building benchmark models

==  ARBOR: busyring benchmark
==  ARBOR: cmake
==  ARBOR: make
==  ARBOR: install
==  ARBOR: saving environment
Configuring arbor-rc-exp2syn-spike.
Building arbor-rc-exp2syn-spike.
Configuring arbor-rc-expsyn.
Building arbor-rc-expsyn.

==  Installation finished

Proposal:

  • Make the validation configure/build output consistent with the benchmark build output.
  • Move it before the "saving environment" step.

nsuite benchmark script problem

Running the benchmarks in nsuite leads to syntax errors in the script that parses the results:

[plesser1@dp-cn26 nsuite_src]$ ./run-bench.sh arbor  --prefix=$HOME/work/Work/Arbor/nsuite-cn-gcc --model=ring --config=small
==  NSuite benchmark runner

==  models:    ring
==  configs:   small

==  benchmark: arbor ring-small
  cells compartments    wall(s)  throughput  mem-tot(MB) mem-percell(MB)
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/scripts/bench_output.sh: line 15: printf: 0.022: invalid number
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/scripts/bench_output.sh: line 15: printf: 0.022: invalid number
/p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite_src/scripts/bench_output.sh: line 15: printf: .022: invalid number
      2           2      90.000        90.0      0           0       2.000        90.9      0           0       0.000         0.0(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
       0.000       0.000

I paste the relevant output file below.

srun -n2 -N1 -c1 arbor-busyring /p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite-cn-gcc/input/benchmarks/ring/small/run_2_2.json /p/home/jusers/plesser1/deep/work/Work/Arbor/nsuite-cn-gcc/output/benchmark/ring/small/arbor
gpu:      no
threads:  1
mpi:      no
ranks:    1

gpu:      no
threads:  1
mpi:      no
ranks:    1

cell stats: 2 cells; 10 segments; 90 compartments.
running simulation
cell stats: 2 cells; 10 segments; 90 compartments.
running simulation

32 spikes generated at rate of 6.25 ms between spikes

---- meters -------------------------------------------------------------------------------
meter                         time(s)      memory(MB)
-------------------------------------------------------------------------------------------
model-init                      0.002           0.093
model-run                       0.022           0.001
meter-total                     0.024           0.094

32 spikes generated at rate of 6.25 ms between spikes

---- meters -------------------------------------------------------------------------------
meter                         time(s)      memory(MB)
-------------------------------------------------------------------------------------------
model-init                      0.002           0.093
model-run                       0.022           0.001
meter-total                     0.023           0.094

Virtualenv for python dependencies

The validation tests currently require that the user pre-install the required python modules (xarray and scipy). A better solution might be to list the python dependencies in a requirements file, set up a virtual environment with (python3) venv, install the requirements into it with pip, and use that environment as the base for executing the tests (see the sketch below).
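
A minimal sketch, assuming a requirements.txt at the top of the repository and the ns_pyvenv_path variable already set by scripts/environment.sh (the ns_base_path variable is an assumption):

# Create the virtual environment and install the python dependencies into it.
python3 -m venv "$ns_pyvenv_path"
source "$ns_pyvenv_path/bin/activate"
pip install --upgrade pip
pip install -r "$ns_base_path/requirements.txt"   # e.g. xarray, scipy, netcdf4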

Determine framework and data formats for validation component

I'll expand on this in a proposal on the wiki, but I expect all validation tests to be similar in structure:

  1. There are a number of models, corresponding to some sort of physical simulation to be performed over a set of parameters (the models themselves might also be parameterized).
  2. There will be some implementation of each model for each applicable simulator.
  3. Optional reference data will be retrieved or constructed [possibly depending on the output of the simulator].
  4. An analysis will be performed on the output, optionally against the reference data, producing summary data specific to the model.
  5. The summary data will be assessed as pass/fail in the validation.

For visibility, testing, and general interoperability between the simulator output, the reference data, the analysis output and the summary assessment, there should be a limited number of data schemata (maybe just one) and offline representations of those schemata. I'm currently leaning towards NetCDF4 for the latter, as it's portable, can be self-documenting, can round-trip via ncdump/ncgen to a textual description (CDL), and is well supported (via third-party packages) in Python, R, and Julia.

For ease of deployment, I'd like to minimize the amount of code that has to be compiled — allowing us to run the suite against e.g. NEURON without having to compile any extra code. This would mean writing the model implementations and analysis code in something like Python (with e.g. an xarray and netcdf4 dependency if we use NetCDF4 as our data representation).

A single validation test would then comprise a shell script that invokes the phases above (see the sketch below).
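
A minimal sketch of such a per-model run script, with the phases above as explicit steps and the exit-value convention proposed elsewhere in this tracker (all paths, helper scripts, and file names are assumptions):

# Hypothetical validation run script for one model and one simulator.
# 1. run the simulator implementation, writing NetCDF output
"$model_dir/run-$sim.py" "$param_file" "$out_dir/result.nc" || exit 2
# 2. retrieve or generate reference data
"$model_dir/generate-reference.py" "$param_file" "$out_dir/ref.nc" || exit 2
# 3. analyse the output against the reference, producing summary data
"$model_dir/analyse.py" "$out_dir/result.nc" "$out_dir/ref.nc" > "$out_dir/summary.txt" || exit 2
# 4. assess the summary as pass/fail
grep -q fail "$out_dir/summary.txt" && exit 1
exit 0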

Specifying simulator configuration options for validation tests

There is utility in comparing the results of validation tests based on simulator configuration, e.g.

  • NEURON: second order or first order solver;
  • Arbor: event binning, synapse coalescing.

I'm proposing that simulator names can be suffixed with one or more strings of the form :feature. These suffixes constitute part of the simulator name as far as output paths are concerned, but they are stripped off and passed on to the test implementations to be handled as they see fit (presumably throwing a tanty if they don't recognize the feature). A sketch of the parsing is below.
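
A minimal sketch of the parsing (the variable names and the path mangling are assumptions):

# Split e.g. "neuron:secondorder:fixedstep" into a simulator name and a list
# of feature flags to pass to the implementation; keep the full spec (with
# ':' replaced) for output path components.
sim_spec="neuron:secondorder:fixedstep"
sim="${sim_spec%%:*}"                 # -> neuron
features="${sim_spec#"$sim"}"         # -> :secondorder:fixedstep
features="${features#:}"              # -> secondorder:fixedstep
IFS=: read -r -a feature_list <<< "$features"
out_name="${sim_spec//:/-}"           # -> neuron-secondorder-fixedstep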

Installation guide

The installation guide should cover:

  • obtaining nsuite (git clone)
  • the run-build.sh script
  • customization of builds using the --env flag to run-build.sh

Benchmark executables are installed in source directory.

Having the executables installed there, instead of in a binary directory such as install/bin, breaks the configurable-prefix installation. (It also makes it unnecessarily hard to keep the repo tree clean.)

Proposed fix:

  • Benchmark executables get their own unique name and are installed in the binary directory.
  • The benchmark runners are amended to use that name.

Richer validation output

The pass/fail output from a validation run is too simple, really, to be very useful. The validation runs should:

  • Catch definitely wrong behaviour.
  • Allow comparisons of fidelity between builds and simulators.

Proposal:

  • Make the pass criterion for the tests quite lax, so that FAIL really means that something went wrong.
  • Produce a 'standard' report output from the validation tests. This would follow a well-defined schema and contain a list of derived values of interest, their pass thresholds (and whether less than, greater than, etc.), and metadata for the simulator and nsuite versions.
  • Manually running the validation tests in a new 'verbose' mode would output a human- and script-readable version of this data.
  • Reports from multiple tests in the same validation run should be combinable and presentable in a form that is easy to consume programmatically, e.g. for presentation in Jenkins or other tools.

Decide on NetCDF representation for metadata in validation run outputs.

Currently, metadata (such as parameter set values) is written as scalar variables in the NetCDF output. An alternative is to use attributes, which would allow a distinction between data and metadata; that in turn would let tools such as comparex focus on comparing just the data, not the metadata.

A standard set of attributes/metadata should be determined. These would not be mandatory for validation tests, but would form a common convention. As an example, the convention might state that the name of the simulator used to produce the output be stored in a global string attribute, etc. (see the sketch below).
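
A minimal sketch of what the convention could look like, assuming NCO's ncatted is available for tagging existing files (the attribute names and values are assumptions):

# Record provenance metadata as global attributes after the simulator has
# written its output file.
ncatted -O \
    -a simulator,global,c,c,"arbor" \
    -a simulator_version,global,c,c,"0.2" \
    -a nsuite_version,global,c,c,"$ns_version" \
    "$out_dir/result.nc"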

Multi-compartment validation models

Current validation models are single compartment, and so do not exercise any of the cable-modelling parts of the simulators.

Propose adding as next priorities:

  1. Rallpack-1 test.
  2. 'Transverse' view of Rallpack-1 model (fix t, examine voltage along cable).
  3. Tapered cable versions of 1&2.

The tapered cable will require either the use of an independent numerical solver for the PDE, or the implementation of the (nearly-) analytic model.

nsuite validation fails: how to proceed?

On one system I tested, nsuite validation results in:

==  Running validation for arbor:
[FAIL] arbor rc-exp2syn-spike/default
[PASS] arbor rc-expsyn/default
[PASS] arbor rc-expsyn/highcurrent

The run.out file contains:

spike.abserr<0.03: fail

I am unsure how to proceed.
