
multi-area-model's Introduction

Multi-scale spiking network model of macaque visual cortex


Model overview

This code implements the spiking network model of macaque visual cortex developed at the Institute of Neuroscience and Medicine (INM-6), Research Center Jülich. The model has been documented in the following publications:

  1. Schmidt M, Bakker R, Hilgetag CC, Diesmann M & van Albada SJ (2018) Multi-scale account of the network structure of macaque visual cortex. Brain Structure and Function, 223: 1409. https://doi.org/10.1007/s00429-017-1554-4

  2. Schuecker J, Schmidt M, van Albada SJ, Diesmann M & Helias M (2017) Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome. PLOS Computational Biology, 13(2): e1005179. https://doi.org/10.1371/journal.pcbi.1005179

  3. Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M & van Albada SJ (2018) A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque cortex. PLOS Computational Biology, 14(9): e1006359. https://doi.org/10.1371/journal.pcbi.1006359

The code in this repository is self-contained and allows one to reproduce the results of all three papers.

A video providing a brief introduction to the model and the code in this repository can be found here.

Try it on EBRAINS

Want to start using or simply run the model? Click the button below.
Please note: make sure you check and follow our User instructions, especially if you plan to make and save changes, or if you need step-by-step instructions.
Try it on EBRAINS


User instructions

The Jupyter Notebook multi-area-model.ipynb illustrates the simulation workflow with a down-scaled version of the multi-area model. This notebook can be explored and executed online in the Jupyter Lab provided by EBRAINS without the need to install any software yourself.

  • Prerequisites: an EBRAINS account. If you don’t have one yet, sign up on the EBRAINS registration page. Please note: registering an EBRAINS account requires an institutional email address.
  • If you only plan to run the model, go to Try it on EBRAINS. If you want to adjust parameters and save your changes, go to Fork the repository and save your changes.

Try it on EBRAINS

  1. Click Try it on EBRAINS. If any error or unexpected behavior occurs during the following process, close the browser tab and restart from the beginning of these User instructions.
  2. On the Lab Execution Site page, select a computing center from the given list.
  3. If you’re using EBRAINS for the first time, click Sign in with GenericOAuth2 to sign in on EBRAINS. To do this, you need an EBRAINS account.
  4. Once signed in, on the Server Options page, choose Official EBRAINS Docker image 23.06 for Collaboratory.Lab (recommended) and click Start.
  5. Once the server has started, you will see a Jupyter Notebook named multi-area-model.ipynb. Click the field that displays Python 3 (ipykernel) in the upper right corner and switch the kernel to EBRAINS-23.09.
  6. Congratulations! Now you can run the model. Enjoy!
    To run the model, click Run in the menu bar and choose Run All Cells. It takes several minutes to produce all results.
    Please note: every time you click the Try it on EBRAINS button, the repository is loaded into your home directory on the EBRAINS Lab, overwriting any existing copy with the same name. Therefore, if you make changes you want to keep, follow Fork the repository and save your changes.

Fork the repository and save your changes

Because resources are limited, the EBRAINS Lab regularly deletes data loaded on the server, which means the repository on the EBRAINS Lab will be periodically removed. To save the changes you make, fork the repository to your own GitHub account, clone the fork to the EBRAINS Lab, and commit and push your changes via Git.

  1. Go to our multi-area model under INM-6 and create a fork by clicking Fork. In the Owner field, choose your username and click Create fork. Copy the address of your fork by clicking Code, then HTTPS, and then the copy icon.
  2. Go to EBRAINS Lab, log in, and select a computing center from the given list.
  3. In the Jupyter Lab, click on the Git icon on the left toolbar, click Clone a Repository and paste the address of your fork.
  4. Now your forked repository of multi-area model is loaded on the server. Enter the folder multi-area-model and open the notebook multi-area-model.ipynb.
  5. Click the field that displays Python 3 (ipykernel) in the upper right corner and switch the kernel to EBRAINS-23.09.
  6. Run the notebook! To run the model, click Run in the menu bar and choose Run All Cells. It takes several minutes to produce all results.
  7. You can modify the exposed parameters before running the model. If you want to save the changes you made, press Control+S, click the Git icon on the leftmost toolbar, and commit and push your changes.
    To commit, in the Changed section, click the + icon, fill in a comment in the Summary (Control+Enter to commit) field at the lower left corner, and click COMMIT.
    To push, click the cloud-shaped Push committed changes icon at the upper left. You may be asked for your username and password: the username is your GitHub username, and the password must be a personal access token generated in your GitHub account (make sure you select the repo scope when generating the token). Enter them and click Ok.
  8. If you would like to contribute to our model or bring your ideas to us, you’re most welcome to contact us. It’s currently not possible to directly make changes to the original repository, since it is connected to our publications.

Python framework for the multi-area model

The entire framework is summarized in the figure below ("Sketch of the framework").

We separate the structure of the network (defined by population sizes, synapse numbers/indegrees etc.) from its dynamics (neuron model, neuron parameters, strength of external input, etc.). The complete set of default parameters for all components of the framework is defined in multiarea_model/default_params.py.

A description of the requirements for the code can be found at the end of this README.


Preparations

To start using the framework, the user has to define a few configuration variables in a new file called config.py. The file config_template.py lists the required variables that need to be specified by the user.
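
For illustration, a minimal config.py might look like the sketch below. The variable names base_path, data_path, jobscript_template, and submit_cmd appear elsewhere in this README and in the issue reports below; the values here are placeholders, and config_template.py remains the authoritative reference.

    # config.py -- illustrative sketch only; see config_template.py
    # Absolute path of the repository
    base_path = '/path/to/multi-area-model'
    # Directory in which simulation output is stored
    data_path = '/path/to/simulation/output'
    # Template for job scripts on a compute cluster (used by start_jobs.py);
    # the exact template fields are listed in config_template.py
    jobscript_template = '...'
    # Command used to submit job scripts to the queue (assumption: Slurm)
    submit_cmd = 'sbatch'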

Furthermore, please add the path to the repository to your PYTHONPATH:

export PYTHONPATH=/path/to/repository/:$PYTHONPATH


MultiAreaModel

This central class initializes the network and contains all information about population sizes and network connectivity, enabling the reproduction of all figures in [1]. Network parameters refer only to the structure of the network and ignore any information on its dynamical simulation or description via analytical theory.

Simulation

This class can be initialized by MultiAreaModel or stand-alone and takes simulation parameters as input. These parameters include, e.g., neuron and synapse parameters, the simulated biological time, and technical parameters such as the number of parallel MPI processes and threads. The simulation uses the network simulator NEST (https://www.nest-simulator.org). For the simulations in [2, 3], we used NEST version 2.8.0. The code in this repository runs with a later release of NEST, version 2.14.0, as well as with NEST 3.0.
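
For example, such parameters are plain entries of the simulation parameter dictionary. A minimal sketch (the parameter names are assumptions based on multiarea_model/default_params.py and should be verified there):

    custom_simulation_params = {
        't_sim': 1000.0,        # simulated biological time in ms (assumed name)
        'num_processes': 4,     # number of MPI processes (assumed name)
        'local_num_threads': 2  # threads per MPI process (assumed name)
    }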

Theory

This class can be initialized by MultiAreaModel or stand-alone and takes simulation parameters as input. It provides two main features (see the sketch after this list):

  • predicting the stable fixed points of the system using mean-field theory and characterizing them (for instance, by computing the gain matrix);
  • executing the stabilization method described in [2] on a network instance via the script stabilize.py. Please see figures/SchueckerSchmidt2017/stabilization.py for an example of running the stabilization.
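
A minimal sketch of the first feature, adapted from the usage shown in the issue reports further below (network_params and theory_params are illustrative parameter dictionaries):

    from multiarea_model import MultiAreaModel

    M = MultiAreaModel(network_params, theory=True, theory_spec=theory_params)
    # Integrate the mean-field (Siegert) equations of the model
    p, r = M.theory.integrate_siegert()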

Analysis

This class allows the user to load simulation data and perform some basic analysis and plotting.

Analysis and figure scripts for [1-3]

The figures folder contains subfolders with all scripts necessary to produce the figures from [1-3]. If Snakemake (Köster J & Rahmann S, Bioinformatics (2012) 28(19): 2520-2522) is installed, the figures can be produced by executing snakemake in the respective folder, e.g.:

cd figures/Schmidt2018/
snakemake

Note that it can sometimes be necessary to execute snakemake --touch to avoid unnecessary rule executions. See https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#flag-files for more details.

Running a simulation

The files run_example_downscaled.py and run_example_fullscale.py provide examples. A simple simulation can be run in the following way (a self-contained sketch follows the list):

  1. Define custom parameters. See multiarea_model/default_params.py for a full list of parameters. All parameters can be customized.

  2. Instantiate the model class together with a simulation class instance.

    M = MultiAreaModel(custom_params, simulation=True, sim_spec=custom_simulation_params)
    
  3. Start the simulation.

    M.simulation.simulate()
    
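Putting these steps together, a complete down-scaled run might look like the following sketch. Parameter names are taken from multiarea_model/default_params.py; the values are illustrative only:

    from multiarea_model import MultiAreaModel

    # Scale neuron numbers and synaptic indegrees down to 1% of full scale
    # (see "Simulation modes" below)
    custom_params = {
        'N_scaling': 0.01,
        'K_scaling': 0.01
    }
    # Simulate 1 s of biological time ('t_sim' assumed from default_params.py)
    custom_simulation_params = {'t_sim': 1000.0}

    M = MultiAreaModel(custom_params, simulation=True,
                       sim_spec=custom_simulation_params)
    M.simulation.simulate()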

Typically, a simulation of the model will be run in parallel on a compute cluster. The files start_jobs.py and run_simulation.py provide the necessary framework for doing this in an automated fashion. The procedure is similar to a simple simulation:

  1. Define custom parameters.

  2. Instantiate the model class together with a simulation class instance.

    M = MultiAreaModel(custom_params, simulation=True, sim_spec=custom_simulation_params)
    
  3. Start the simulation. Call start_job to create a job file using the jobscript_template from the configuration file and submit it to the queue with the user-defined submit_cmd.

Be aware that, depending on the chosen parameters and initial conditions, the network can enter a high-activity state, which slows down the simulation drastically and can cost a significant amount of computing resources.

Extracting connectivity & neuron numbers

First, the model class has to be instantiated:

  1. Define custom parameters. See multiarea_model/default_params.py for a full list of parameters. All parameters can be customized.

  2. Instantiate the model class.

    from multiarea_model import MultiAreaModel
    M = MultiAreaModel(custom_params)
    

The connectivity and neuron numbers are stored in attributes of the model class: neuron numbers in M.N as a dictionary (and in M.N_vec as an array), indegrees in M.K as a dictionary (and in M.K_matrix as an array). To extract, e.g., the neuron numbers into a YAML file, execute:

   import yaml
   with open('neuron_numbers.yaml', 'w') as f:
       yaml.dump(M.N, f, default_flow_style=False)

Alternatively, you can have a look at the data with print(M.N).
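
Similarly, the indegree matrix can be saved as a binary numpy file (a minimal sketch using the M.K_matrix attribute described above):

    import numpy as np

    # Save the indegree matrix to a binary numpy file
    np.save('indegrees.npy', M.K_matrix)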

Simulation modes

The multi-area model can be run in different modes.

  1. Full model

    Simulates the entire network of all 32 areas with the default connectivity defined in default_params.py.

  2. Down-scaled model

    Since simulating the entire network of approx. 4.13 million neurons and 24.2 billion synapses requires a large amount of resources, the user has the option to scale down the network in terms of neuron numbers and synaptic indegrees (the number of synapses per receiving neuron). This can be achieved by setting the parameters N_scaling and K_scaling in network_params to values smaller than 1. In general, this affects the dynamics of the network. To approximately preserve the population-averaged spike rates, one can specify a set of target rates that is used to scale the synaptic weights and to apply an additional external DC input.

  3. Subset of the network

    You can choose to simulate a subset of the 32 areas, specified by the areas_simulated parameter in the sim_params (see the sketch after this list). If a subset of areas is simulated, there are different options for how to replace the rest of the network, set by the replace_non_simulated_areas parameter:

    • hom_poisson_stat: all non-simulated areas are replaced by Poissonian spike trains with the same rate as the stationary background input (rate_ext in input_params).
    • het_poisson_stat: all non-simulated areas are replaced by Poissonian spike trains with population-specific stationary rate stored in an external file.
    • current_nonstat: all non-simulated areas are replaced by stepwise constant currents with population-specific, time-varying amplitudes defined in an external file.
  4. Cortico-cortical connections replaced

    In addition, it is possible to replace the cortico-cortical connections between simulated areas with the options het_poisson_stat or current_nonstat. This mode can be used with the full network of 32 areas or with a subset of them (thereby combining this mode with the previous mode, 'Subset of the network').

  5. Dynamical regimes

    As described in Schmidt et al. (2018) (https://doi.org/10.1371/journal.pcbi.1006359), the model can be run in two dynamical regimes: the Ground state and the Metastable state. The state is controlled by the values of the cc_weights_factor and cc_weights_I_factor parameters.
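
As an illustration of modes 2 and 3 combined, the sketch below runs a down-scaled subset of areas. The parameter names follow default_params.py, but the exact nesting (in particular, whether replace_non_simulated_areas lives under connection_params) is an assumption to be checked there:

    from multiarea_model import MultiAreaModel

    # Simulate only V1 and V2 at 1% scale, replacing all non-simulated
    # areas by stationary Poisson input (parameter nesting assumed)
    network_params = {
        'N_scaling': 0.01,
        'K_scaling': 0.01,
        'connection_params': {'replace_non_simulated_areas': 'hom_poisson_stat'}
    }
    sim_params = {
        't_sim': 1000.0,
        'areas_simulated': ['V1', 'V2']
    }

    M = MultiAreaModel(network_params, simulation=True, sim_spec=sim_params)
    M.simulation.simulate()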

Test suite

The tests/ folder holds a test suite that tests different aspects of network model initialization and mean-field calculations. It can be conveniently run by executing pytest in the tests/ folder:

cd tests/
pytest

Requirements

Python 3, python_dicthash (https://github.com/INM-6/python-dicthash), correlation_toolbox (https://github.com/INM-6/correlation-toolbox), pandas, numpy, nested_dict, matplotlib (2.1.2), scipy, pytest, NEST 2.14.0 or NEST 3.0

Optional: seaborn, Sumatra

To install the required packages with pip, execute:

pip install -r requirements.txt

Note that NEST needs to be installed separately, see http://www.nest-simulator.org/installation/.

In addition, reproducing the figures of [1] requires networkx, python-igraph, pycairo and pyx. To install these additional packages, execute:

pip install -r figures/Schmidt2018/additional_requirements.txt

In addition, Figure 7 of [1] requires installing the infomap package to perform the map equation clustering. See http://www.mapequation.org/code.html for all necessary information.

Similarly, reproducing the figures of [3] requires statsmodels, networkx, pyx, python-louvain, which can be installed by executing:

pip install -r figures/Schmidt2018_dyn/additional_requirements.txt

The SLN fit in multiarea_model/data_multiarea/VisualCortex_Data.py and figures/Schmidt2018/Fig5_cc_laminar_pattern.py requires an installation of R and the R library aod (http://cran.r-project.org/package=aod), which can usually be installed from within R via install.packages("aod"). Without an R installation, both scripts directly use the stored resulting values of the fit (see Fig. 5 of [1]).

The calculation of BOLD signals from the simulated firing rates for Fig. 8 of [3] requires an installation of R and the R library neuRosim (https://cran.r-project.org/web/packages/neuRosim/index.html).

Contributors

All authors of the publications [1-3] made contributions to the scientific content. The code base was written by Maximilian Schmidt, Jannis Schuecker, and Sacha van Albada with small contributions from Moritz Helias. Testing and review was supported by Alexander van Meegen.

Citation

If you use this code, we ask you to cite the appropriate papers in your publication. For the multi-area model itself, please cite [1] and [3]. If you use the mean-field theory or the stabilization method, please cite [2] in addition. We provide bibtex entries in the file called CITATION.

If you have questions regarding the code or scientific content, please create an issue on GitHub.


Acknowledgements

We thank Sarah Beul for discussions on cortical architecture; Kenneth Knoblauch for sharing his R code for the SLN fit (multiarea_model/data_multiarea/bbalt.R); and Susanne Kunkel for help with creating Fig. 3a of [1] (figures/Schmidt2018/Fig3_syntypes.eps).

This work was supported by the Helmholtz Portfolio Supercomputing and Modeling for the Human Brain (SMHB), the European Union 7th Framework Programme (Grant 269921, BrainScaleS, and Grant 604102, Human Brain Project, Ramp up phase), the European Union's Horizon 2020 research and innovation programme (Grants 720270 and 737691, Human Brain Project, SGA1 and SGA2), the Jülich Aachen Research Alliance (JARA), the Helmholtz young investigator group VH-NG-1028, the German Research Foundation (DFG Grants SFB936/A1, Z1 and TRR169/A2, and Priority Program 2041 (SPP 2041) "Computational Connectomics"), and computing time granted by the JARA-HPC Vergabegremium and provided on the JARA-HPC partition of the supercomputer JUQUEEN (Jülich Supercomputing Centre, 2015) at Forschungszentrum Jülich (VSR Computation Time Grant JINB33).


multi-area-model's Issues

Two sets of parameters for initializing membrane potential

The default parameters file defines parameters for initializing the membrane potential of neurons in two places. First we have

sim_params.update(
    {
        'initial_state': {
            # mean of initial membrane potential (in mV)
            'V_m_mean': -58.0,
            # std of initial membrane potential (in mV)
            'V_m_std': 10.0
        }
    })

and then

neuron_params = {
    # neuron model
    'neuron_model': 'iaf_psc_exp',
    # neuron parameters
    'single_neuron_dict': single_neuron_dict,
    # Mean and standard deviation for the
    # distribution of initial membrane potentials
    'V0_mean': -100.,
    'V0_sd': 50.}

The comments in both places are very similar, and it is not clear to me which of the two sets of parameters is actually used. From the line

if self.network.params['USING_NEST_3']:

it seems to be the second one; the first seems entirely unused.

I find the second set of values rather unphysiological, with a mean well below the typical GABA reversal potential (-90 mV if I am not wrong) and with one standard deviation stretching to -150 mV.

With a spike threshold of -50 mV, about 21% of all neurons will be initialized to superthreshold membrane potentials for the first set of parameters (-58±10 mV), and about 16% for the second set (-100±50 mV). These neurons will all spike simultaneously in the first time step. How sensible is this?

At least for benchmarking, a massive barrage of spikes in a single time step can lead to some problematic (because atypical) resizing of data structures. Since the membrane potential distribution must be zero at the threshold, would it not make more sense to initialize the membrane potential with a lognormal distribution stretching from V_th down towards -inf?
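
For reference, the quoted percentages follow directly from the normal survival function; a quick check (not part of the original report):

    from scipy.stats import norm

    # Fraction of neurons initialized above threshold V_th = -50 mV
    for mean, std in [(-58.0, 10.0), (-100.0, 50.0)]:
        frac = norm.sf(-50.0, loc=mean, scale=std)  # P(V > V_th)
        print('V0 ~ N(%g, %g^2): %.1f%% superthreshold' % (mean, std, 100 * frac))
    # -> about 21% and 16%, respectively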

Some confusion in plotting figures

Hi, I was trying to reproduce Fig. 5, so I downloaded the data from here and defined the variable original_data_path in config.py.
However, when I ran the code, I got an error:

TypeError: Not supported. Please store the matrix in a binary numpy file and define the path to the file as the parameter value

The error comes from line 107, M = MultiAreaModel({}).
The OS is Ubuntu, running on a cluster.

Test suite issues

Hi,
I installed the model and all required packages (including NEST 2.14.0) on Ubuntu 19.10. Running the test suite fails on most test scripts. The first error is:
multiarea_model.py:139
TypeError: Not supported. Please store the matrix in a binary numpy file and define the path to the file as the parameter value.
Apparently, the error is solved for all test scripts except test_analysis.py if I replace the line
if self.params['connection_params']['K_stable'] is None:
with
if self.params['connection_params']['K_stable'] == False:
The script test_analysis.py still fails with the error
FAILED test_analysis.py::test_analysis - ZeroDivisionError: division by zero
in the line:
synchrony = variance / mean

Is "if 'I' in target_pop" an error in this code in Model.py?

for target_area, source_area in product(area_list, area_list):
    if source_area != target_area:
        for target_pop, source_pop in product(population_list, population_list):
            if 'I' in target_pop:
                synapse_weights_mean[target_area][target_pop][
                    source_area][source_pop] *= cc_weights_I_factor
                synapse_weights_sd[target_area][target_pop][
                    source_area][source_pop] *= cc_weights_I_factor

Is "if 'I' in target_pop" an error in this code in Model.py? Inhibitory weights are enhanced by the factor cc_weights_I_factor, but you select the inhibitory target_pop as the manipulated object.

simulation environment for multiple nodes

Do you have a recommended configuration tutorial for a multi-node NEST simulation environment? Brief steps for configuring the environment and a list of the required installation packages would also help.

Suggestions for improved README

Just two small suggestions for improving the README:

  • The section on the config.py file is easy to overlook. Maybe a header such as "Preparations" would make it more visible?
  • The README should also mention early on that there is a section on Requirements at the end.

Missing package "aod" on EBRAINS

I just tried to run the downscaled model on EBRAINS (Jülich Site) following the "Try it on EBRAINS" button. After a few cells, this fails with the message

Error in library("aod") : there is no package called ‘aod’
Execution halted

This seems closely related to #31, but now with the package missing on EBRAINS.


Informative error message for disagreeing hashes

As pointed out by @golosio, the file not found error caused by a mismatch in hashes between simulation and analysis is not helpful for the user. This has to be handled properly.

Maybe related: check the codebase before runtime for changes by the user. Doing this properly would need more thought.

Missing R package `aod`

When running the down-scaled version, things break with a non-explanatory error message:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-5-4ee3a90d4926> in <module>
----> 1 M = MultiAreaModel(network_params, simulation=True,
      2                    sim_spec=sim_params,
      3                    theory=True,
      4                    theory_spec=theory_params)
      5 p, r = M.theory.integrate_siegert()

~/multi-area-model/multiarea_model/multiarea_model.py in __init__(self, network_spec, theory, simulation, analysis, *args, **keywords)
    104                 json.dump(self.custom_params, f)
    105             # Execute Data script
--> 106             compute_Model_params(out_label=str(rand_data_label),
    107                                  mode='custom')
    108         else:

~/multi-area-model/multiarea_model/data_multiarea/Model.py in compute_Model_params(out_label, mode)
     59         print("Replacing base path with: %s" % basepath)
     60     # Load and process raw data
---> 61     process_raw_data()
     62     raw_fn = os.path.join(basepath, 'viscortex_raw_data.json')
     63     proc_fn = os.path.join(basepath, 'viscortex_processed_data.json')

~/multi-area-model/multiarea_model/data_multiarea/VisualCortex_Data.py in process_raw_data()
   1324                                 stdout=subprocess.PIPE)
   1325         out = proc.communicate()[0].decode('utf-8')
-> 1326         R_fit = [float(out.split('\n')[1].split(' ')[1]),
   1327                      float(out.split('\n')[1].split(' ')[3])]
   1328     except OSError:

IndexError: list index out of range

I've tracked this down to the call to Rscript SLN_logdensities.R returning an error which is not caught. Running the command manually on the command line reveals

$ Rscript SLN_logdensities.R .
Error in library("aod") : there is no package called ‘aod’
Execution halted

Problem found, but it seems not easy to solve for non-R people:

$ R
> install.packages("oad")
[...]
Warning message:
package ‘oad’ is not available (for R version 3.6.0)

Unfortunately changing the version of R itself is not easily possible for me.
How can I continue the workflow?

ImportError: No module named config

I am having trouble running the model because of a missing config module.

Traceback (most recent call last):
  File "../../multiarea_model/data_multiarea/VisualCortex_Data.py", line 83, in <module>
    from config import base_path
ImportError: No module named config
lambda-quad:~/multi-area-model$ python run_example_downscaled.py 
Traceback (most recent call last):
  File "run_example_downscaled.py", line 4, in <module>
    from multiarea_model import MultiAreaModel
  File "/home/russell/multi-area-model/multiarea_model/__init__.py", line 1, in <module>
    from . import default_params
  File "/home/russell/multi-area-model/multiarea_model/default_params.py", line 14, in <module>
    from config import base_path
ImportError: cannot import name base_path

Issue in reproducing figures

In the directory figures/Schmidt2018_dyn, after setting LOAD_ORIGINAL_DATA = False in all Python scripts and in the Snakefile, and setting the variable chu2014_path in helpers.py appropriately, running
$ snakemake
I got the error:
TypeError in line 94 of /home/golosio/multi-area-model/multi-area-model/figures/Schmidt2018_dyn/Snakefile:
join() argument must be str or bytes, not 'list'
File "/home/golosio/multi-area-model/multi-area-model/figures/Schmidt2018_dyn/Snakefile", line 94, in
File "/usr/lib/python3.7/posixpath.py", line 94, in join
File "/usr/lib/python3.7/genericpath.py", line 153, in _check_arg_types

Apparently, if in the Snakefile I replace
os.path.join(DATA_DIR, SIM_LABELS['Fig3'], ...
with
os.path.join(DATA_DIR, SIM_LABELS['Fig3'][0], ...
in all places where it occurs, the previous error does not appear; however, I receive another error:
Building DAG of jobs...
MissingInputException in line 37 of /home/golosio/multi-area-model/multi-area-model/figures/Schmidt2018_dyn/Snakefile_preprocessing:
Missing input files for rule pop_rates:
/home/golosio/multi-area-model/data/157be7f2609cd1843099e7a6b4e8b218/recordings/spikes_VIP_5E.npy
/home/golosio/multi-area-model/data/157be7f2609cd1843099e7a6b4e8b218/recordings/spikes_V1_6E.npy
......

NEST compilation issues in connection with conda environment

This issue deals with our compilation problems with NEST and the conda environment. This is our current status of knowledge:

We found two different solutions, one for the simulations and one for the mean-field theory calculation:

Simulations

Deactivate the conda environment, then execute cmake with explicit Python paths pointing into the conda environment.

MFT

Activate the conda environment and compile NEST inside this conda environment, but without MPI, OpenMP, readline, and ncurses, using a cmake line such as:

cmake -Dwith-gsl=$HOME/miniconda3/envs/multiarea_model -Dwith-readline=OFF -Dwith-ltdl=OFF -Dwith-openmp=OFF -DCMAKE_INSTALL_PREFIX:PATH=$HOME/nest-2.14.0-conda-mam

For your interest: @AlexVanMeegen , @albada , @heplesser, @jougs, @terhorstd

Issue running the analysis using a separate script from that used for simulation

When running the analysis with a separate script from the one used for the simulation (which I think is a common approach when the simulation is done on a cluster), I encounter a problem.
The __init__() method of the class Simulation can take as input argument sim_spec the label of a previous simulation.
As I understand it, these lines:

        if isinstance(sim_spec, dict):
            check_custom_params(sim_spec, self.params)
            self.custom_params = sim_spec
        else:
            fn = os.path.join(data_path,
                              sim_spec,
                              '_'.join(('custom_params',
                                        sim_spec)))
            with open(fn, 'r') as f:
                self.custom_params = json.load(f)['sim_params']

        nested_update(self.params, self.custom_params)

        self.network = network
        self.label = dicthash.generate_hash_from_dict({'params': self.params,
                                                       'network_label': self.network.label})

check whether sim_spec is a dict; otherwise it is assumed to be a string, which is used to build the path where the simulation data are stored.
However, the last line, self.label = dicthash.generate_hash_from_dict(...), generates a new label, no matter whether one was already specified by sim_spec.
The following code seems to work for me:

        if isinstance(sim_spec, dict):
            check_custom_params(sim_spec, self.params)
            self.custom_params = sim_spec
            self.label = None
        else:
            fn = os.path.join(data_path,
                              sim_spec,
                              '_'.join(('custom_params',
                                        sim_spec)))
            with open(fn, 'r') as f:
                self.custom_params = json.load(f)['sim_params']
            self.label = sim_spec

        nested_update(self.params, self.custom_params)

        self.network = network
        if self.label is None:
            self.label = dicthash.generate_hash_from_dict({'params': self.params,
                                                           'network_label': self.network.label})

Calculation of LvR and correlation coefficients of ground state

Thanks to your awesome help with #12, I've been comparing some of our simulation runs with the 33fb5955558ba8bb15a3fdce49dfd914682ef3ea dataset and am having some issues, especially with the populations with low firing rates. To try to narrow down where this is coming from, I tried re-using the analysis code from the repository on the FEF spike data from the 33fb5955558ba8bb15a3fdce49dfd914682ef3ea dataset as follows:

from correlation_toolbox import helper as ch
import numpy as np

# **NOTE** this is heavily based off the analysis code from the paper
def load(filename, duration_s):
    tmin = 500.
    subsample = 2000
    resolution = 1.

    spikes = np.load(filename)
    ids = np.unique(spikes[:, 0])
    dat = ch.sort_gdf_by_id(spikes, idmin=ids[0], idmax=ids[0]+subsample+1000)
    bins, hist = ch.instantaneous_spike_count(dat[1], resolution, tmin=tmin, tmax=duration_s * 1000.0)
    rates = ch.strip_binned_spiketrains(hist)[:subsample]
    cc = np.corrcoef(rates)
    cc = np.extract(1-np.eye(cc[0].size), cc)
    cc[np.where(np.isnan(cc))] = 0.
    return np.mean(cc)


# Values from the json file on gnode
dataset_fef_6e_corr = 0.0004061593782540619
dataset_fef_5i_corr = 0.0020706629333541817

duration_s = 10.5

# Population sizes
num_fef_5i = 3721
num_fef_6e = 16128

# Load data
nest_fef_5i_corr = load("33fb5955558ba8bb15a3fdce49dfd914682ef3ea-spikes-FEF-5I.npy", duration_s)
nest_fef_6e_corr = load("33fb5955558ba8bb15a3fdce49dfd914682ef3ea-spikes-FEF-6E.npy", duration_s)

print("FEF 5I corr coeff - NEST:%f, Dataset:%f" % (nest_fef_5i_corr, dataset_fef_5i_corr))
print("FEF 6E corr coeff - NEST:%f, Dataset:%f" % (nest_fef_6e_corr, dataset_fef_6e_corr))

The output shows fairly significant differences from the values in the JSON file in the same directory:

FEF 5I corr coeff - NEST:0.001910, Dataset:0.002071
FEF 6E corr coeff - NEST:0.000156, Dataset:0.000406

Am I doing something dumb or missing something?

Thanks in advance for your help!

About "config_template.py"

Hi, I am trying out the code. I found that it needs config_template.py to create my own config file.
Could you let me know where I can find the config_template.py file?

Correlation-toolbox package missing

The external package correlation-toolbox, used for computing correlation coefficients and in other modules, is missing from the repo. It seems the toolbox is from correlation-toolbox. However, this toolbox is not released as a package on PyPI.

I am not sure whether the functions in the correlation-toolbox have already been incorporated into scientific packages like SciPy or Elephant, since it has been very commonly used in many other models.

About 'K_stable.npy' and 'fullscale_rates.json'

Hi, I don't know how the 'K_stable.npy' and 'fullscale_rates.json' files were generated. I guess the 'K_stable.npy' file was generated by running the 'stabilization.py' script? But how did you create the other one?

Brain areas of macaque atlas

Hi, I was trying to plot the areas that the paper simulates. The model consists of 32 areas, and the atlas used is from here, as described in the paper. However, only 14 areas are included in the atlas's labels in F99 space, so how can this be resolved?
