nest / nest-simulator
The NEST simulator
Home Page: http://www.nest-simulator.org
License: GNU General Public License v2.0
Right now, the content written to stdout (and maybe stderr?) during the whole build and test process on Travis is just put into a single log file. Access to separate build artifacts is only possible by asking our GitHub team for a special build with output to S3.
Maybe life could be made a little bit easier by structuring the output to stdout during Travis builds in a more explicit way. This would require changes in ".travis.yml" and in "build.sh".
Variation 1: Introduce markers in stdout which contain file paths and names and provide in addition a post-processing script which can parse such a log and convert it into a directory structure with separate files.
Variation 2: Redirect all output to stdout and stderr (if possible?) during the build process to temporary files; from these files a structured tar file is created at the end of the build process and then written to stdout to get its contents into the Travis build log.
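A post-processing script for Variation 1 could look roughly like this. The marker strings and the function name are made up for illustration; build.sh would have to emit matching markers around each artifact it wants preserved in the Travis log:

```python
import os

# Hypothetical marker format -- build.sh would need to print these
# around every artifact (e.g. config.log, test output) it emits.
BEGIN = '===BEGIN FILE=== '
END = '===END FILE==='

def split_log(lines, outdir):
    """Split a marker-annotated Travis log into separate files under outdir."""
    current, buf = None, []
    for line in lines:
        if line.startswith(BEGIN):
            # Marker carries the relative path of the artifact.
            current, buf = line[len(BEGIN):].strip(), []
        elif line.rstrip() == END and current is not None:
            path = os.path.join(outdir, current)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, 'w') as fh:
                fh.write('\n'.join(buf) + '\n')
            current = None
        elif current is not None:
            buf.append(line.rstrip('\n'))
```

Lines outside any marker pair are left untouched in the log, so the plain build output stays readable.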
Additional suggestions are welcome... :-)
The file configure.ac was not imported into GitHub with the rest of examples/MyModule. Thus, MyModule cannot be built.
As first reported by Mario Mulansky on NEST User (17 July 2015), several testsuite tests fail when compiling NEST under OSX using the clang compiler.
To reproduce:
../src/configure --prefix=`pwd`/install --without-openmp
The following tests fail:
unittests/test_aeif_cond_alpha_multisynapse.sli ... Failed: segmentation fault
unittests/test_mip_corrdet.sli ... Failed: segmentation fault
unittests/test_recorder_close_flush.sli ... Failed: missed C++ assertion
regressiontests/ticket-80-175-179.sli ... Failed: segmentation fault
nest.tests.test_connect_distributions tests
In http://www.nest-simulator.org/connection_management/ > inspecting connections
The first line
GetConnections(source=None, target=None, model=None)
should be changed to
GetConnections(source=None, target=None, synapse_model=None)
[see https://github.com/nest/nest-simulator/blob/master/pynest/nest/hl_api.py#L777]
Both iaf_neuron and iaf_psc_alpha are implementations of a leaky integrate-and-fire model with alpha-function shaped synaptic currents, but the documentation differs between these and does not explain why both models are needed.
https://trac.nest-initiative.org/trac/wiki/CodeReorganisationJune2006 states: "iaf_neuron has been retired. Use iaf_psc_alpha instead."
I'm having trouble getting a compiled pynestkernel library on OSX. Standard installation doesn't even attempt to compile it and put it in the build directory, so I get:
File "/Users/rgerkin/Desktop/nest-2.6.0/pynest/nest/__init__.py", line 52, in <module>
from . import pynestkernel as _kernel
ImportError: cannot import name 'pynestkernel'
due to the file being missing. The installation script isn't even trying to compile it into a shared library (.so).
If I try to cythonize it myself, I can get a .cpp file, but I can never get it to compile, although perhaps there is a magic set of flags that will work.
Does anyone have a reproducible recipe for installation in OSX 10.9?
parrot_neuron currently emits spikes using a loop:
for ( ulong_t i_spike = 0; i_spike < current_spikes_n; i_spike++ )
  network()->send( *this, se, lag );
This is inefficient --- why don't we use multiplicity and make just a single network->send() call?
I just wanted to let the Community know that one of the PyNEST examples executed successfully
on an ARMv8 platform (Fedora 22 aarch64).
$ python ~/projects/nest-simulator/pynest/examples/CampbellSiegert.py
-- N E S T --
Copyright (C) 2004 The NEST Initiative
Version 2.8.0-git Oct 25 2015 01:32:57
This program is provided AS IS and comes with
NO WARRANTY. See the file LICENSE for details.
Problems or suggestions?
Visit http://www.nest-simulator.org
Type 'nest.help()' to find out more about NEST.
Oct 25 01:38:23 Network::clear_models [Info]:
Models will be cleared and parameters reset.
mean membrane potential (actual / calculated): -57.8512478094 / -57.8189416312
variance (actual / calculated): 0.687992608406 / 0.689739852871
firing rate (actual / calculated): 0.2 / 0.289849301849
iaf_psc_exp_multisynapse::update() contains a number of "not sure about this" comments. It needs review and proper testing urgently.
iaf_psc_exp handles current input supplied via CurrentEvents differently than other neuron models. Input via rport 0 is handled the normal way, while input via rport 1 is filtered as excitatory synaptic input.
This behavior is documented in one sentence, but that sentence is not visible enough for an additional feature such as this. It should at least be on a line of its own, probably as a Remark.
I am also struggling to understand the physical basis of this filtering. An arriving spike releases transmitter vesicles which dock at receptors and thus evoke an input current. The exponential decay of the current captures the process in which "all" channels open instantly and then close over time as transmitter molecules detach again. Thus, there is a physical basis for a current that persists beyond the arrival time of the spike.
But when we inject currents via CurrentEvents, we model injection of currents via an electrode. Consider now a case where the electrode injects current only during a single time step. Then, immediately after that time step, the physical input current to the neuron is zero. But in the "filtered current" model, this current would persist as an exponentially decaying current for several milliseconds. What is the model behind this? It should be explained in the documentation. If one just wants the possibility of injecting a low-pass filtered current into a neuron, one could use step_current_generator, setting the current amplitudes to filtered values.
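For illustration, the low-pass filtered amplitudes one would hand to a step_current_generator can be precomputed outside NEST. This is a plain-Python sketch; tau_filter, dt, and the pulse values are made-up illustration numbers, not NEST defaults:

```python
import math

def filtered_amplitudes(pulse, tau_filter=2.0, dt=0.1, n_steps=50):
    """Exponentially low-pass filter a current trace, step by step.

    pulse: injected current per time step (pA); returns n_steps filtered
    amplitudes, which could be fed to a step_current_generator instead of
    relying on the rport-1 filtering built into iaf_psc_exp.
    """
    prop = math.exp(-dt / tau_filter)  # per-step decay factor
    amps, i_filt = [], 0.0
    for k in range(n_steps):
        i_in = pulse[k] if k < len(pulse) else 0.0
        i_filt = i_filt * prop + i_in * (1.0 - prop)
        amps.append(i_filt)
    return amps
```

A single-step pulse then produces an exponentially decaying amplitude train, which is exactly the behavior in question.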
@jschuecker, could you take a look at this?
While executing a test program on the K computer from testsuite/manualtests/ticket-458, I get:
jwe1050i-w The hardware barrier couldn't be used and continues processing using the software barrier.
taken to (standard) corrective action, execution continuing.
jwe1603i-w The invalid memory is freed.
(Address:0 Free(function:std::basic_ifstream<char, std::char_traits<char>>::~basic_ifstream() line:0))
error occurs at _ZNSt14basic_ifstreamIcSt11char_traitsIcEED1Ev loc 0000000000ae1610 offset 0000000000000090
_ZNSt14basic_ifstreamIcSt11char_traitsIcEED1Ev at loc 0000000000ae1580 called from loc 0000000000d6c944 in _ZNK10SLIStartup9checkpathERKSsRSs
_ZNK10SLIStartup9checkpathERKSsRSs at loc 0000000000d6c340 called from loc 0000000000d718fc in _ZN10SLIStartup4initEP14SLIInterpreter
_ZN10SLIStartup4initEP14SLIInterpreter at loc 0000000000d70a00 called from loc 0000000000d58df4 in _ZN9SLIModule7installERSoP14SLIInterpreter
_ZN9SLIModule7installERSoP14SLIInterpreter at loc 0000000000d58d80 called from loc 0000000000c1b40c in _ZN14SLIInterpreter9addmoduleEP9SLIModule
_ZN14SLIInterpreter9addmoduleEP9SLIModule at loc 0000000000c1b3c0 called from loc 000000000011da38 in _Z11neststartupiPPcR14SLIInterpreterRPN4nest7NetworkE
_Z11neststartupiPPcR14SLIInterpreterRPN4nest7NetworkE at loc 000000000011d880 called from loc 0000000000111ea0 in main
main at loc 0000000000111e80 called from o.s.
taken to (standard) corrective action, execution continuing.
--------------------------------------------------------------------------
[mpi::mpi-api::mpi-abort]
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 126.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(orte_errmgr_base_error_abort+0x84) [0xffffffff008df684]
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(ompi_mpi_abort+0x51c) [0xffffffff0068389c]
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(MPI_Abort+0x6c) [0xffffffff0069b3ac]
[i42-036:18488] /opt/FJSVtclang/GM-1.2.0-18/lib64/libtrtmet_c.so.1(MPI_Abort+0x2c) [0xffffffff00159bf0]
[i42-036:18488] ./nest [0x992cac]
[i42-036:18488] ./nest [0x11dd04]
[i42-036:18488] ./nest(main+0x38) [0x111eb8]
[i42-036:18488] /lib64/libc.so.6(__libc_start_main+0x194) [0xffffffff0323381c]
[i42-036:18488] ./nest [0x111d2c]
[ERR.] PLE 0019 plexec One of MPI processes was aborted.(rank=0)(nid=0x210a0034)(CODE=1938,793745140674134016,32256)
Below is my submission script
#!/bin/sh
#PJM -S
#PJM --rsc-list "elapse=10:00"
#PJM --rsc-list "rscgrp=micro"
#PJM --rsc-list "node=12"
#PJM --mpi "assign-online-node"
. /home/system/Env_base
export PARALLEL=1
export OMP_RUN_THREADS=1
export FLIB_FASTOMP=false
mpiexec -np 1 ./nest conf.cli run_benchmark_458.sli
I wonder if other people have seen a similar error on other supercomputers?
I have prepared Debian packaging for NEST v2.8 targeting Debian proper (PR will come shortly). I have sorted out most things already. However, I am facing an issue when running the test suite for a NEST installation in /usr. The culprit is this:
Running test 'unittests/test_round_validate.sli'...
> Running mpirun -np 1 /usr/bin/nest /usr/share/doc/nest/unittests/test_round_validate.sli
> NEST v2.8.0 (C) 2004 The NEST Initiative
>
> Sep 30 17:34:23 file [Error]: FileOpenError
> Could not open the following file for writing:
> "/usr/share/doc/nest/help/sli/array.hlp".
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 126.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 3952 on
> node meiner exiting improperly. There are two reasons this could occur:
>
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
>
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
-> 126 (Failed: error in test script)
Is there a way to make this work -- maybe by writing into a temp dir? If not, I would exclude this test -- it is the only one that fails.
It would be nice to be able to run (a subset of) the tests without having to install NEST somewhere, but I could not figure out how -- it seems to heavily rely on its installation prefix.
Other open issues where I would appreciate your input are:
Thanks!
Sidenote: It would be nice if the HEAD of master could be advanced to include the v2.8.0 tag.
The current travis settings are noisy. By deleting the notification section, the default scheme is used, which is that messages only go to the submitter. Given the pull system, there's no need for everyone to get every build error, as in the SVN/Jenkins setup.
Once #28 is fixed, user-defined modules based on MyModule should work on all architectures, including BlueGene. At the same time, it is clear that our way of building such modules is not the most convenient one and breaks with some good practices for such modules. We should therefore revise the build mechanism. This is a follow-up to trac.526.
Note that the philosophy of trac.526 was to move entirely to dynamically loaded modules to be loaded by Install. At least for BlueGene, this is not feasible/advisable; there we need statically linked libraries. But the build process in that case might still be improved, e.g., by making the main NEST build process also build user-defined modules when they are linked as static libraries.
For script compatibility, we should make Install a no-op in case the pertaining module has been linked in.
The C++ code style guidelines suggest running three tools (clang-format, vera++, cppcheck) to verify code formatting. The build.sh script for Travis can run these automatically (although there seems to be some trouble).
I would very much appreciate a small script that I could use myself locally to run all code style checks in one go, something like check_code_style.sh.
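A possible skeleton for such a script, sketched here in Python rather than shell: it only assembles the per-file tool invocations. The extension-to-tool mapping and the bare invocations are placeholders; the real command-line flags would have to be copied from build.sh:

```python
import os

# Placeholder mapping: which style tools to run per file extension.
STYLE_TOOLS = {
    '.h': ('clang-format', 'vera++', 'cppcheck'),
    '.cpp': ('clang-format', 'vera++', 'cppcheck'),
}

def style_commands(filenames):
    """Return the list of style-check invocations for the given files."""
    cmds = []
    for name in filenames:
        ext = os.path.splitext(name)[1]
        for tool in STYLE_TOOLS.get(ext, ()):
            cmds.append([tool, name])
    return cmds
```

Each entry could then be passed to subprocess.check_call; files with other extensions are simply skipped.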
@tammoippen you are maybe the ideal candidate to implement this?
User modules (MyModule) do not work on BlueGene at present. Loading them as dynamically linked modules fails, as does building NEST with MyModule linked in. This is most likely because the MyModule build setup does not take BlueGene peculiarities into account.
The panels that should contain dot displays appear, but they seem to be empty, at least on my screen.
Several neuron models, especially pp_* models, use set_multiplicity() to convey that several spikes have been fired within a single time step. Since these models have proxies, their spikes are conveyed via Network::send_remote(), which converts a SpikeEvent with multiplicity n into n SpikeEvents of multiplicity 1. Thus, it does not matter that STDPConnection::send() and other plastic synapses do not heed event multiplicity.
Now, it is not inconceivable that this behavior will change in the future, i.e., that event multiplicity is transmitted. Then, plasticity would not be handled correctly. We should therefore do the following:
Add assert(e.get_multiplicity() == 1) to the send() methods of all plastic synapse models.
NEST currently comes with two "binary neuron" models, ginzburg_neuron and mcculloch_pitts_neuron, both derived from class binary_neuron. These neurons communicate by SpikeEvents that do not really represent spikes, but state transitions, and that abuse the event multiplicity entry to communicate which states they are transiting between. But since they use SpikeEvents, they can still be connected to, and receive input from, arbitrary neuron models. To me, this seems to make little sense. Should this be prohibited, to avoid users building networks that do not make sense?
One way to achieve this would be to define a new event type, e.g., BinarySignalEvent, and let binary neurons only support that event. A problem with that would be that remote sending only supports SpikeEvent (the binary neurons currently exploit the implementation of multiplicity transmission in a too(?) clever way to send the necessary information via spikes; this needs better documentation and a solid MPI test!). Alternatively, one could modify the connection-handshaking methods (send_test_event, handles_test_event) to check the type of the source/target model and throw an exception if it is not derived from binary_neuron.
When running make distclean, distclean-recursive throws an error when trying to run distclean in the pynest directory:
Making distclean in pynest
make[1]: Entering directory '/home/jordan/opt/nest-simulator.build/pynest'
Makefile:630: ../nest/.deps/pynestkernel_la-neststartup.Plo: No such file or directory
make[1]: *** No rule to make target '../nest/.deps/pynestkernel_la-neststartup.Plo'. Stop.
make[1]: Leaving directory '/home/jordan/opt/nest-simulator.build/pynest'
make: *** [distclean-recursive] Error 1
I am assuming this is not the desired behaviour?
Function test_connect_helpers.test_synapse is a helper function; it does not implement a test. We therefore need to change the name to something not containing test, so that nosetests won't run it as a test.
This has led to failing tests for some time (see e.g. Travis build 141.8), which were not reported due to #89.
We use Events in NEST both during connection creation and during transmission of signals via connections. Events are used differently in both cases, and different subsets of Event public functions cater to the different uses. This is not properly documented at present.
As an example, e.get_sender().get_gid() and e.get_sender_gid() will yield inconsistent results, even though one would expect them to yield the same result. In particular, e.get_sender_gid() will trigger an assertion (sender_gid_ > 0 fails) when called from handle_test_event().
The reason for this inconsistency is that Event objects are used differently during network construction and network simulation.
During network construction, we send a pointer to a node of the sender node type. This is not, generally, a pointer to the actual sender: for senders on remote MPI processes, this pointer is not available. Instead, we send a pointer to a proxy node. No GID is set on that proxy node (proxy nodes always have GID 0).
send_test_event() now sets only the Node* sender_ field on the Event it sends (to this), but not the sender_gid_, since that is not consistently available (i.e., not for proxy neurons representing remote sources).
When delivering spikes, on the other hand, only the sender_gid_ field is set in an Event, but not the sender_. This is because a pointer to the sender is not available for remote neurons, and passing pointers to proxy neurons would not make sense at this point.
As a further twist, if the target is a recording device, connections will only be created from local sources (the connect mechanism takes care of this), whence the pointer returned by e.get_sender() will actually be a pointer to the real sender object (not a proxy), and e.get_sender().get_gid() will return the proper GID. e.get_sender_gid(), on the other hand, should be used only during spike delivery, not during connection creation.
@tammoippen: in build.sh, we do
file_names=`git diff --name-only $TRAVIS_COMMIT_RANGE`
Inside my repo, on some pushes this seems to return incorrect/unreachable references. I haven't yet identified the cause or found a Travis bug report on this, but it may be that history rewrites on pushes make it reference old, non-existent commits.
nest.help('voltmeter')
yields
NESTError: NoHelpInformationError in help
When a file is removed in a commit, build.sh still tries to run cppcheck, vera++ and clang-format on it, hence all builds will fail. The solution is to first check whether the file in the changeset under testing is still present; so before build.sh:115, have something like:
if test -e "$f" ...
Observed when running the test of the current master HEAD.
I was trying to build nest on my Fedora 23 system - package it as an rpm in fact. It kept failing and I realised this was because the configure script clears the CFLAGS and CXXFLAGS variables. Is there a reason why this is done?
https://github.com/nest/nest-simulator/blob/master/configure.ac#L461
When Delta_T = 0, the membrane voltage should go to infinity as soon as "V_th" is reached (Scholarpedia). This is handled correctly in NEST 2.6.0, but in NEST 2.8.0 there is no spike until the voltage reaches "V_peak".
Example:
import matplotlib.pyplot as plt
import nest
neuron = nest.Create('aeif_cond_exp')
nest.SetStatus(neuron,
dict(Delta_T=0.0, I_e=1000.0, V_th=-50.0, V_peak=-45.0))
recorder = nest.Create('multimeter')
nest.SetStatus(recorder, {'record_from': ['V_m'],
'interval': nest.GetKernelStatus('resolution')})
nest.Connect(recorder, neuron)
nest.Simulate(100.0)
data = nest.GetStatus(recorder, 'events')[0]
t = data['times']
vm = data['V_m']
plt.plot(t, vm)
plt.xlabel("Time (ms)")
plt.ylabel("V_m (mV)")
plt.title(nest.version())
plt.ylim(-70, -45)
plt.savefig("aeif_conf_exp_{}.png".format(nest.version().replace(" ", "_")))
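The expected Delta_T = 0 behavior can also be illustrated without NEST: with the exponential term removed, the AdEx model reduces to a leaky integrate-and-fire neuron whose spike condition is V_m >= V_th, independent of V_peak. A minimal Euler sketch (parameter values are illustrative, not NEST defaults):

```python
def first_spike_time(E_L=-70.0, V_th=-50.0, tau_m=10.0,
                     R=0.1, I_e=1000.0, dt=0.1, t_max=100.0):
    """Euler integration of a leaky IF neuron with Delta_T = 0.

    With the exponential term gone, the spike must be emitted as soon
    as V_m crosses V_th -- not when the voltage reaches V_peak.
    Returns the first spike time in ms, or None if no spike occurs.
    """
    V, t = E_L, 0.0
    while t < t_max:
        V += dt / tau_m * (-(V - E_L) + R * I_e)  # no Delta_T term
        t += dt
        if V >= V_th:  # Delta_T == 0: threshold is V_th
            return t
    return None
```

With these numbers the membrane potential crosses V_th after roughly two milliseconds, long before it would reach a higher V_peak; NEST 2.8.0 instead keeps integrating until V_peak.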
stdp_synapses onto pp_psc_delta neurons are apparently not plastic at the moment. I have made a simple example simulation script which is available here:
https://github.com/mdeger/nest-simulator/blob/master/testsuite/manualtests/test_pp_psc_delta_stdp.py
The result of the script is shown in the attached pictures. Both neurons spike shortly after each spike of a sequence of spikes from the same presynaptic neuron. However, only the iaf_psc_delta neuron changes its synaptic weight, pp_psc_delta's weight is unchanged.
The only difference between the two models that I found, with respect to STDP behavior, is that the archiver_length of the neurons differs. Both are instances of ArchivingNode, and this is important for STDP. However, I do not know how this difference in archiver_length may occur. I speculate that the problem occurs when the STDP connection is instantiated: somehow the pp_psc_delta neuron does not increment its connection counter or so, while the iaf_psc_delta does it correctly.
Any help on the issue is appreciated.
The deprecation warning about changes in the connection management contains a link to the old site:
http://nest-initiative.org/Connection_Management
When clicking the link the following message shows up:
"This is somewhat embarrassing, isn’t it? It seems we can’t find what you’re looking for. Perhaps searching can help."
This code will reproduce the warning:
import nest
nodes = nest.Create('iaf_psc_delta', 2)
nest.Connect(nodes[0:1], nodes[1:], 'all_to_all', model='static_synapse')
Output:
lib/python2.7/site-packages/nest/hl_api.py:84: UserWarning:
The argument 'model' is there for backward compatibility with the old Connect function
and will be removed in NEST 2.6. Please change the name of the keyword argument
from 'model' to 'syn_spec'. For details, see the documentation at:
http://nest-initiative.org/Connection_Management
The wrong url is in pynest/nest/hl_api.py (SHA1: ae79942), lines 80 and 1098.
I assume the URL should point to this page instead:
http://www.nest-simulator.org/connection_management/
Additionally the wording of the message in line 1096 should be changed to
"will be removed in a future version of NEST" like in line 78.
Test test_rdv_param_setting.sli fails when NEST is compiled with g++ 5.1.0 or Apple clang 6.1.0 with -O2.
The reason for this is unsafe integer-overflow detection in librandom::UniformIntRandomDev::set_status(), which leads to some overflow cases not being detected.
@apeyser provided a pointer to a safe implementation at https://www.securecoding.cert.org/confluence/display/c/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow
Since there is a failing test (provided one uses the compilers given above, currently not in the NEST CI system), I am not adding an additional regression test.
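The CERT INT32-C recipe checks the operands before the addition rather than inspecting the (in C++ undefined) overflowing result afterwards. Translated into Python with explicit 32-bit limits, since Python integers do not overflow, the precondition looks like:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def add_would_overflow(a, b, lo=INT32_MIN, hi=INT32_MAX):
    """CERT INT32-C style precondition: detect signed overflow of a + b
    without ever performing the overflowing addition itself."""
    if b > 0 and a > hi - b:
        return True   # would overflow towards +infinity
    if b < 0 and a < lo - b:
        return True   # would overflow towards -infinity
    return False
```

The same two comparisons, written with the C++ integer types in question, are what set_status() would need instead of its current post-hoc check.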
This issue is related to the pyNN issue 377 "PyNN exhausts NEST 2.6/2.8 synapse model storage" (NeuralEnsemble/PyNN#377).
Examples are the brunel network scripts.
[This is a summary of things discussed in trac.766]
Neurons have the property tau_minus_triplet, but this does not seem to be used, which may be confusing for users. In the ticket triage session on August 27, 2014, we came to the conclusion that triplet STDP from the developer module should be made available in the public NEST version.
All PyNEST tests using the compatibility versions of AssertGreater fail with older versions of Python and SciPy (here Python 2.6.6, SciPy 0.7.2). The error messages look like this:
======================================================================
ERROR: testRPortDistribution (nest.tests.test_connect_all_patterns.TestAllToAll)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_all_to_all.py", line 88, in testRPortDistribution
self.assertGreater(p, self.pval, 'Chi2 test failed.')
TypeError: <lambda>() takes exactly 3 arguments (4 given)
======================================================================
ERROR: testRPortDistribution (nest.tests.test_connect_all_to_all.TestAllToAll)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_all_to_all.py", line 88, in testRPortDistribution
self.assertGreater(p, self.pval, 'Chi2 test failed.')
TypeError: <lambda>() takes exactly 3 arguments (4 given)
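The traceback suggests the compatibility stand-in is a lambda without the optional msg parameter: it accepts three arguments (self, a, b) but the tests pass four. A fallback matching the Python 2.7 assertGreater signature would avoid the TypeError. This is a sketch of such a shim, not the actual NEST compat code:

```python
def assert_greater(a, b, msg=None):
    """Minimal stand-in for TestCase.assertGreater (added in Python 2.7).

    Unlike a fixed-arity lambda, it accepts the optional msg argument
    that the failing tests pass as an extra positional argument.
    """
    if not a > b:
        raise AssertionError(msg if msg is not None
                             else '%r not greater than %r' % (a, b))
```

Bound onto the TestCase class (with a leading self parameter) it would make the calls in test_connect_all_to_all.py work on Python 2.6.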
Currently, the log files of Travis contain different types of information in one large block of text:
Once the log files are available on S3 (#98), the logs shown by TravisCI could be shortened to contain only summaries of the information detailed above and a link to the full logs. They could also point out more prominently what the exact cause of the failure was.
@lekshmideepu @tammoippen: Could you please look into this?
Hi,
I'm going through this tutorial in an attempt to get a custom module running (I've already installed PyNEST successfully):
https://nest.github.io/nest-simulator/synapse_models
Everything works up to the point when I run the command
../MyModule/configure --with-nest=/usr/local/bin/nest-config
Then it stops due to an error, see here (I replaced some long file structure with /path/to):
=== configuring in libltdl (/Users/haffi/Documents/path/to/nest/mmb/libltdl)
configure: running /bin/sh ../../MyModule/libltdl/configure --disable-option-checking '--prefix=/usr/local/Cellar/nest/2.6.0' '--with-nest=/usr/local/bin/nest-config' '--enable-ltdl-convenience' --cache-file=/dev/null --srcdir=../../MyModule/libltdl
configure: error: cannot find install-sh, install.sh, or shtool in ../../../../../path/to/MyModule "../../MyModule/libltdl"/../../../../../path/to/MyModule
configure: error: ../../MyModule/libltdl/configure failed for libltdl
I'm running this on Mac OS X and I know of at least one other user who got the same error message. I've been trying to understand what the issue is, but I can't solve it. If anyone here could help me, that would be very helpful (and could lead to an update of that tutorial page if this is a common problem).
Edit: I also tried downloading the ubuntu image and running it in virtual box. There I managed to run the configure script above but the make command crashes.
As reported on NEST_USER, the "simple simulation" example on http://www.nest-simulator.org/neural_simulations/ does not work, because the interval property of the voltmeter is set after the voltmeter has been connected to a neuron.
We need to reverse the order of operations and explain the need to set the interval first. We also need to check the remainder of the example.
Travis will report success on builds even if the testsuite (make installcheck) fails. An example is Travis build 141.8 with one failing PyNEST test; see also #88.
The underlying problem is that make installcheck returns exit code 0 even when tests fail.
In the index.md there is a dead link to Overview of scheduling and update strategies.
I assume this will include an overview of the internal simulation loop, i.e., when which node/connection functions are called by the simulator. Comparable slides were shown at the Nest User Workshop, and I guess these would be a great help for people starting in development.
One idea would be to make the PDF of the slides available under this link in the meantime, while the page is not yet finished or published.
Events, in particular SpikeEvents, can include a multiplicity: one event object represents several spikes emitted by a single sender in a single time step. The handle() method of the receiving neuron must read out this information and apply it, typically by multiplying the weight with the multiplicity. I think this only makes sense for SpikeEvents; for all other event types this should lead to an error.
We currently have no test checking that all models heed multiplicity information. Add a test for this.
If the noise generator is connected to a neuron, the relation between its standard deviation 'std' and the actual fluctuations seen in the membrane potential of the neuron is not clear from the documentation.
Working on your master leads to really nasty merge histories. You'll work, then merge into our master, and then continue working without pulling up to upstream, then remerging. You should keep your master pristine, work on a branch, and then do a pull request on the branch. After that, you should delete the branch and make a new branch from master.
This is something to check for in a code review --- that the history isn't crazy. If so, the author should be requested to rebase; otherwise it will eventually become very hard to identify where bugs occurred, and merges will become increasingly difficult to do correctly.
All PyNEST tests using the function scipy.stats.kstest fail with older versions of Python and SciPy (here Python 2.6.6, SciPy 0.7.2). The error messages look like this:
======================================================================
ERROR: testExponentialClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 130, in testExponentialClippedDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 444, in check_ks
D, p = scipy.stats.kstest(M, get_clipped_cdf(params), alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testExponentialDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 119, in testExponentialDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testGammaClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 152, in testGammaClippedDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 444, in check_ks
D, p = scipy.stats.kstest(M, get_clipped_cdf(params), alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testGammaDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 140, in testGammaDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testLognormalClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 174, in testLognormalClippedDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 444, in check_ks
D, p = scipy.stats.kstest(M, get_clipped_cdf(params), alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testLognormalDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 162, in testLognormalDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testNormalClippedDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 76, in testNormalClippedDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testNormalDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 64, in testNormalDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
======================================================================
ERROR: testUniformDist (nest.tests.test_connect_distributions.TestDists)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_distributions.py", line 206, in testUniformDist
is_dist = hf.check_ks(self.pop1, self.pop2, self.label, self.pval, syn_params[self.label])
File "/users/eppler/10kcollaps.build/install/lib64/python2.6/site-packages/nest/tests/test_connect_helpers.py", line 468, in check_ks
D, p = scipy.stats.kstest(M, distrib.cdf, args=args, alternative='two-sided')
TypeError: 'NoneType' object is not iterable
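All of the tracebacks above end in the same TypeError raised inside scipy.stats.kstest. A plausible cause (an assumption, not verified against test_connect_helpers.py) is that `args` is None where kstest expects a tuple of distribution parameters to unpack into the CDF. A minimal self-contained stand-in (no scipy needed) reproduces the failure mode:

```python
# Hedged sketch: kstest forwards `args` to the CDF via *-unpacking.
# Passing None instead of a tuple triggers a TypeError of the same
# family as in the tracebacks above.

def kstest_like(data, cdf, args=()):
    """Mimics how kstest applies `args` when evaluating the CDF."""
    return [cdf(x, *args) for x in data]

uniform_cdf = lambda x, lo=0.0, hi=1.0: (x - lo) / (hi - lo)

print(kstest_like([0.25, 0.5], uniform_cdf, args=(0.0, 1.0)))  # fine

try:
    kstest_like([0.25, 0.5], uniform_cdf, args=None)  # args never set
except TypeError as err:
    print("TypeError:", err)
```

If this diagnosis is right, the fix would be for check_ks to pass an empty tuple (or the proper parameter tuple) instead of None.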
When using NEST v2.6.0 - together with Music- to run one of our simulations, on some occasion we get this error, and the simulation crashes:
python: ../nestkernel/scheduler.h:789: static nest::delay nest::Scheduler::get_modulo(nest::delay): Assertion `static_cast< std::vector< delay >::size_type >(d) < moduli_.size()' failed.
With version 2.4.2 there are no problems.
After digging into the problem and testing different cases, it seems to happen only in simulations with a large number of connections and different parameters for each of them (e.g. delay and weight changing from one connection to the next). Even in that case, it is not consistent.
It could be worth checking whether this is related to the new rounding strategy introduced in 2.6: in one of my tests, filtering out synapses with a delay below a certain threshold (i.e. 0.1) seemed to fix the issue.
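The suspected mechanism can be sketched as follows. This is an assumption about the failure mode, not code taken from the NEST sources: delays are converted to integer simulation steps, and the step count is later used to index the moduli_ table, so a delay that rounds differently under a new rounding rule can produce an index outside the table and trip the assertion.

```python
import math

# Hedged sketch (assumed values, not from NEST): delays in ms are
# converted to integer steps at the simulation resolution; the step
# count indexes an internal table sized for delays >= min_delay.
RESOLUTION_MS = 0.1          # typical default resolution
MIN_DELAY_STEPS = 1          # table assumes at least one step

def delay_to_steps(delay_ms, round_up=True):
    """Convert a delay in ms to steps under two rounding rules."""
    ratio = delay_ms / RESOLUTION_MS
    return math.ceil(ratio) if round_up else int(round(ratio))

# A delay below the resolution: rounding up keeps it at 1 step, while
# nearest/truncating rounding drops it to 0, below the valid range.
steps_up = delay_to_steps(0.05, round_up=True)     # -> 1
steps_near = delay_to_steps(0.04, round_up=False)  # -> 0, invalid index
print(steps_up, steps_near)
```

This would be consistent with the observation that filtering out synapses with very small delays makes the crash disappear.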
The current NEST code produces compiler errors on the K computer. There are simple workarounds for all of them; nevertheless, it would be nice to add a configure option similar to "--enable-bluegene", or another fix.
../nest-simulator/configure --prefix=[...]/nest-simulator.install --with-openmp=-Kopenmp --with-mpi --with-gsl=[...]/gsl-1.15.install --without-python --without-readline --without-pthread CC=mpifccpx CXX=mpiFCCpx --host=sparc64-unknown-linux-gnu --build=x86_64-unknown-linux-gnu CFLAGS="-Nnoline -DUSE_PMA -DIS_K" CXXFLAGS="--alternative_tokens -O3 -Kfast,openmp, -Nnoline, -Nquickdbg -NRtrap -DUSE_PMA -DIS_K"
in librandom/clipped_randomdev.h:
"../../nest-simulator/librandom/clipped_randomdev.h", line 337: error: class member designated by a using-declaration must be visible in a direct base class
using RandomDev::operator();
in lines 97, 204, 205, 337, 446 and 447
simple workaround on K: comment out ("//") these lines
in nestkernel/nest.h
"../../nest-simulator/nestkernel/nest.h", line 84: error: identifier "LONG_LONG_MAX" is undefined
const tic_t tic_t_max = LONG_LONG_MAX;
in lines 84 and 85; reason: LONG_LONG_MAX is undefined
simple workaround on K: replace LONG_LONG_MAX with LONG_MAX
in nestkernel/connector_model_impl.h
"../../nest-simulator/nestkernel/connector_model_impl.h", line 277: error: expected an identifier
if ( !std::isnan( delay ) )
in several lines; reason: std::isnan is unknown
simple workaround on K: #include <cmath> and remove the "std::" prefix from these lines
Currently, communicators are internal structures. It would be helpful to expose the communicator, particularly for MUSIC, which uses a sub-communicator, as part of keeping as much of the machinery outside of NEST as possible rather than buried in internal structures.
So either by integrating pymusic:
s = music.Setup()
s.getcomm()
or by adding a nest call:
nest.getcomm()
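To illustrate the intended call pattern: neither nest.getcomm() nor the MiniComm class below exists today; MiniComm is a stub standing in for an MPI communicator (such as an mpi4py.MPI.Comm), so the sketch runs without MPI installed.

```python
# Hedged sketch of the proposal. MiniComm and getcomm() are hypothetical
# stand-ins; a real implementation would return the communicator NEST
# (or MUSIC's sub-communicator) actually runs on.

class MiniComm:
    """Stub mimicking the minimal MPI communicator interface."""
    def __init__(self, rank, size):
        self.rank, self.size = rank, size
    def Get_rank(self):
        return self.rank
    def Get_size(self):
        return self.size

def getcomm():
    """Hypothetical nest.getcomm(): expose NEST's communicator."""
    return MiniComm(rank=0, size=4)

# External tooling could then partition work on NEST's communicator
# instead of reaching into internal structures:
comm = getcomm()
my_share = list(range(comm.Get_rank(), 100, comm.Get_size()))
print(len(my_share))  # this rank's slice of 100 work items
```

Either variant (pymusic's Setup.getcomm() or a native nest.getcomm()) would give external code this handle.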
Ok, I have this file in the tree, and it gets updated to:
statusdict /rcsinfo (no_rcsinfo_available) put
without which the tests fail. Is this file still necessary?
The following script:
import nest
cell1 = nest.Create('iaf_neuron')
cell2 = nest.Create('iaf_psc_exp_ps')
conn_dict = {"rule": "all_to_all"}
syn_dict = {"model": "stdp_synapse"}
nest.Connect(cell1, cell2, conn_dict, syn_dict)
gives
Traceback (most recent call last):
File "tmp.py", line 8, in <module>
nest.Connect(cell1, cell2, conn_dict, syn_dict)
File "....lib/python2.7/site-packages/nest/hl_api.py", line 153, in stack_checker_func
return f(*args, **kwargs)
File "..../lib/python2.7/site-packages/nest/hl_api.py", line 1123, in Connect
sr('Connect')
File "....lib/python2.7/site-packages/nest/__init__.py", line 81, in catching_sli_run
raise hl_api.NESTError("{0} in {1}{2}".format(errorname, commandname, message))
pynestkernel.NESTError: IllegalConnection in Connect_g_g_D_D: Creation of connection is not possible.
If I replace 'iaf_psc_exp_ps' with 'iaf_psc_exp', it works fine.
nest.GetDefaults('iaf_psc_exp_ps') shows no tau_minus parameter, and the same is true for some other precise models I looked at.
Is there a reason why iaf_psc_exp_ps cannot work with stdp_synapse? If so, could this please be documented (unless I missed the documentation)? If not, could it please be implemented?
For all I can see, it is currently not possible to detect whether a neuron model supports precise spike times, short of reading the documentation. This is unfortunate for the user, and it makes automated testing problematic when tests must be adapted for models supporting precise times.
Suggestion: add a flag to the status dictionary, returning the value provided by Node::is_off_grid().
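If the status dictionary carried such a flag, automated tests could select precise models programmatically. The sketch below assumes a flag named "off_grid" (the name and the defaults dictionary are illustrative, not the real NEST API); GetDefaults is mimicked with a plain dict so it runs without NEST.

```python
# Hedged sketch of the proposal: an "off_grid" flag (name assumed,
# mirroring Node::is_off_grid()) in each model's status dictionary.

MODEL_DEFAULTS = {                      # illustrative entries only
    "iaf_psc_exp":    {"off_grid": False},
    "iaf_psc_exp_ps": {"off_grid": True},
}

def get_defaults(model):
    """Stand-in for nest.GetDefaults(model)."""
    return MODEL_DEFAULTS[model]

def supports_precise_times(model):
    """What a test suite could call instead of reading the docs."""
    return get_defaults(model).get("off_grid", False)

precise = [m for m in MODEL_DEFAULTS if supports_precise_times(m)]
print(precise)  # -> ['iaf_psc_exp_ps']
```

A test suite could then skip or adapt tests for precise models automatically instead of hard-coding model lists.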
100 is way, way too long. 80 should be the absolute maximum line length, and I'd argue for 70.
I know that some editors (Xcode, for instance) tend to drive you towards long lines with soft wrapping, but objectively, lines should contain single concepts and single tests, and wrapping should be fixed by the programmer. A line like:
if ( ( new_nmin < 0 && new_nmax > max + new_nmin ) || ( new_nmax - new_nmin == max ) )
is way, way too long.
Look at the default line lengths for TeX: thin columns, and that's for literature.
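The fix being argued for can be sketched as follows (in Python, for illustration): break one long compound condition into named sub-conditions so each line carries a single concept. The function and variable names mirror the C++ example above; the inputs are arbitrary.

```python
# Hedged sketch: name each half of the compound condition instead of
# cramming both onto one long line. range_wraps and its inputs are
# illustrative, not taken from the NEST sources.

def range_wraps(new_nmin, new_nmax, max_val):
    shifted_below_zero = new_nmin < 0 and new_nmax > max_val + new_nmin
    spans_full_range = new_nmax - new_nmin == max_val
    return shifted_below_zero or spans_full_range

print(range_wraps(-1, 10, 10))  # first sub-condition: 10 > 10 + (-1)
print(range_wraps(0, 10, 10))   # second sub-condition: 10 - 0 == 10
```

Each line now fits comfortably under 80 characters, and the names document what each test means.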