Comments (9)
I have changed the TODO list according to what we just discussed.
from brian2cuda.
The TODO list has changed slightly:
1. Create all spikespaces in global memory (circular indexing).
2. In case of NO occurrence of multiple synapses for the same pre/post pair, use thread <-> synapse correspondence.
3. In case of occurrence of multiple synapses for the same pre/post pair, use thread <-> postNeuron correspondence (which needs a precomputed connectivity matrix, sorted by postID per block, and for each thread the synapse number at which to start applying synaptic effects).
4. Create a precomputed boolean array indicating which mode (2./3.) to use per preNeuron.
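Point 1 (all spikespaces in global memory with circular indexing) can be sketched host-side roughly like this. This is a plain C++ stand-in, not brian2cuda code; the type and method names (`EventspaceRing`, `push`, `current`, `advance`) are made up for illustration:

```cpp
#include <vector>

// Sketch: one spikespace slot per possible delay step (0 .. max_delay),
// indexed circularly over the simulation timestep counter.
struct EventspaceRing {
    std::vector<std::vector<int>> spaces; // one spikespace per delay step
    long t = 0;                           // current timestep

    // max_delay + 1 slots are needed so that delay == max_delay still
    // lands in a slot that is not read in the current timestep.
    explicit EventspaceRing(int max_delay) : spaces(max_delay + 1) {}

    // The thresholder writes a spike whose effect is due `delay` steps
    // from now into the corresponding future slot.
    void push(int neuron, int delay) {
        spaces[(t + delay) % spaces.size()].push_back(neuron);
    }

    // Effect application reads the slot belonging to the current timestep.
    std::vector<int>& current() { return spaces[t % spaces.size()]; }

    // After applying effects, the slot is cleared and reused for a
    // future timestep (circular indexing).
    void advance() {
        current().clear();
        ++t;
    }
};
```

With a homogeneous delay, `push` is always called with the same `delay`, i.e. the thresholder always fills the slot furthest away in time, exactly as described in the comments below.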
from brian2cuda.
After discussion with @mstimberg, we think that points 3 and 4 can be considered low priority, since only very few models will make use of more than one synapse per pre/post neuron pair, and such a scenario could also be modelled using two Synapses objects.
3 and 4 are still nice to have (and this issue should stay open until they are resolved too), but they should definitely be done after other, more crucial things...
from brian2cuda.
When implementing this, check that brian2.tests.test_synapses.test_transmission_scalar_delay passes. Currently it doesn't, since there is no no_or_const_delay_mode implementation yet. When hard-coding no_or_const_delay_mode = False, the delay set with Synapses(..., delay=...) is only a scalar value, which is not handled correctly by the no_or_const_delay_mode = False implementation, which expects an array of size len(NeuronGroup). This should be fixed automatically once this issue is solved; I am just putting it here so we don't forget to check it.
from brian2cuda.
In Konrad's "leftover code", the number of blocks in no_or_const_delay_mode is set to 1. I don't see why that is the case. So, just to make sure I got this right, could you please confirm my understanding here, @moritzaugustin?
Assuming we do not have occurrences of multiple synapses per pre/post pair (case 2 in my TODO list): we create multiple eventspaces (# = delay) and each timestep the thresholder fills the eventspace that is furthest away (in time / circular indexing). Then, when applying the effects, we loop through the current spikespace (the closest in time / circular indexing) and apply the effects to all postsynaptic neurons the same way we do in the "normal" propagation mode: each block takes care of a certain range of post neurons and (since we have only one connection per pre/post pair) each thread applies the effects to one post neuron of the current pre neuron. So we still use the same block / post-neuron structure as before, right?

But this block structure only makes sense if we have a lot of synapses per pre neuron. In fact, when using one block per multiprocessor, we need at least num_blocks * max_num_threads = 15 * 1024 = 15360 synapses per pre neuron (on our GPU) to even utilize all threads we can have per block. That is the order of connectivity in the human cortex, so I presume many cases will have much less. Wouldn't it make sense to instead parallelize over neurons per block, meaning each block takes one neuron from the spikespace and the threads then apply the effects? This of course introduces the problem that multiple pre neurons may have synaptic connections to the same post neuron. But since we only apply effects for a few (= num_blocks) neurons at a time and block execution is not synchronized at all, we could just use atomicAdd instructions when updating the neuron variables. I would imagine that the occurrence of two threads in two different blocks writing to the same post neuron variable will be insignificantly low. We could probably even start all pre neurons at the same time in different blocks and, due to unsynchronized block execution, have not too many interfering atomics. What do you think, did I miss something?
from brian2cuda.
Multiple synapses for the same pre/post pair: we had discussed labelling this as optional / maybe postponing it until all the other propagation stuff (incl. bundles) works. Let's discuss this afterwards.
But of course, having only one synapse per pre/post pair has to be checked at synapse creation => otherwise fall back to serialization and emit a warning.
from brian2cuda.
Yeah, we can discuss this afterwards; I'll just add some more thoughts here so I don't forget :)
No serializing needed for no_or_const_delay_mode when presynaptic variables are modified

In no_or_const_delay_mode we don't need to serialize when the on_pre pathway modifies presynaptic neuron variables, as long as we loop through the spikespace (as we currently do), since we only apply synaptic effects for a single spiking neuron at a time. Currently this would put us in our 'pre' serializing mode (only a single thread does everything). But whenever we get into 'pre' mode here, we only need to modify a single presynaptic neuron's state variables, so all we need to do is make sure that only a single thread does that, e.g. by adding something like:

```
if (tid == 0 && bid == 0)
    // presynaptic code
// postsynaptic code (one thread per postsynaptic neuron)
```
This would mean that in no_or_const_delay_mode we could, while applying postsynaptic effects in parallel anyway, also modify presynaptic ones on the fly (and synaptic variables too, of course), assuming only one synapse per pre/post neuron pair.

EDIT: In 'pre' mode (now called 'source' mode) the presynaptic code has to be applied once per synapse. Therefore, in the above snippet, thread 0 of block 0 would loop through all synapses and each time apply the same effect to the same source neuron. But maybe we could simplify this by just multiplying the effect by the number of synapses. We would have to catch the different effect cases though: additive effects (instead of N times +v, do once +N*v); multiplicative effects (instead of N times *M, do once *(M^N)); setting a variable to a constant (which we would only need to apply once).
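The three aggregation cases can be written down explicitly. A minimal C++ sketch (only valid, as noted, when the effect does not read postsynaptic state; the function names are illustrative):

```cpp
#include <cmath>

// N additive effects +dv collapse into a single update +N*dv.
float aggregate_additive(float v, float dv, int n) {
    return v + n * dv;
}

// N multiplicative effects *m collapse into a single update *(m^N).
float aggregate_multiplicative(float v, float m, int n) {
    return v * std::pow(m, n);
}

// Setting a variable to a constant is idempotent: apply it once.
float aggregate_set_constant(float /*v*/, float c) {
    return c;
}
```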
EDIT 2: We can only aggregate the effect if it does not depend on a postsynaptic variable. If it does, we are back to serializing. But maybe, instead of having one thread load the postsynaptic variables serially, we could have multiple threads do coalesced memory reads and then let just one thread at a time apply the presynaptic effect, e.g. (needs profiling):

```
for (i = 0; i < num_synapses; i++)
    if (tid == i)
        // apply presynaptic effect
```
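A serial C++ emulation of that pattern, to make the two phases explicit. The example effect (`v_pre += 0.5 * v_post[i]`) is hypothetical and just stands for any presynaptic effect that reads postsynaptic state; on the GPU the first loop would be the parallel coalesced load and the second loop the `if (tid == i)` serialization:

```cpp
#include <vector>

float serialized_pre_effect(const std::vector<float>& v_post, float v_pre) {
    const int num_synapses = static_cast<int>(v_post.size());

    // Phase 1: every "thread" loads its postsynaptic variable.
    // On the GPU these are coalesced reads happening in parallel.
    std::vector<float> loaded(num_synapses);
    for (int tid = 0; tid < num_synapses; ++tid)
        loaded[tid] = v_post[tid];

    // Phase 2: the presynaptic effect is applied by one thread at a time
    // (on the GPU: if (tid == i) { ... }), using the preloaded values.
    for (int i = 0; i < num_synapses; ++i)
        v_pre += 0.5f * loaded[i];   // hypothetical example effect

    return v_pre;
}
```

Whether this beats the fully serialized load-and-apply would need the profiling mentioned above.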
Serializing mode is set wrong for 'on_post' pathway

Is there any difference between an on_post pathway and an on_pre pathway? I think not; the blocked connectivity structure is just saved the other way around (blocks of pre neurons for single post neurons for on_post pathways), and that's it. Currently we just check for _postsynaptic_idx or _presynaptic_idx in variable_indices and set the serializing mode depending on either occurrence, independent of the pathway. Instead, a _postsynaptic_idx in an on_post pathway should be treated as equivalent to a _presynaptic_idx in an on_pre pathway, etc.
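A minimal sketch of the pathway-aware check (enum and function names are made up, not the actual brian2cuda implementation):

```cpp
#include <utility>

// For on_post pathways the roles of pre and post are swapped, so the
// serializing decision must be made relative to the pathway's direction.
enum class SerializeMode { none, source };

SerializeMode serializing_mode(bool writes_pre_idx, bool writes_post_idx,
                               bool is_on_post_pathway) {
    // _postsynaptic_idx in an on_post pathway plays the role that
    // _presynaptic_idx plays in an on_pre pathway, and vice versa.
    if (is_on_post_pathway)
        std::swap(writes_pre_idx, writes_post_idx);

    // Writing the (effective) source side forces 'source' serializing mode;
    // writing only the target side can stay parallel (one thread per target).
    return writes_pre_idx ? SerializeMode::source : SerializeMode::none;
}
```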
STDP without delays could be as efficient as normal pre spike -> post effect models

So if we implement both points above, STDP without delays should not make any performance difference compared to normal pre spike -> postsynaptic effect models, even when the STDP model modifies pre, post and synaptic variables all together in both on_pre and on_post pathways.

Well, it got a little late and this needs a rethink tomorrow...
from brian2cuda.
Summary of what still has to be done here:

- my TODO list point 3 (see also #30)
- my TODO list point 4
- my comment under "No serializing needed for no_or_const_delay_mode when presynaptic variables are modified"
from brian2cuda.
I opened issues for the remaining TODOs here:

- #93 deals with points 3 and 4 in my TODO list (multiple pre/post connections in target mode with homogeneous delays).
- I added the content of my comment under "No serializing needed for no_or_const_delay_mode when presynaptic variables are modified" to #34, which covers source mode optimization.

Closing this one, as the standard no_or_const_delay_mode implementation is done.
from brian2cuda.