project-rig / pynn_spinnaker
Rig backend for PyNN 0.8
Aside from plasticity, all that is really required for feature parity with sPyNNaker:
This is something to do with buffer offsets
- Projection can be generated on-chip - annotate Connector
- SynapticMatrix.partition_on_chip_matrix to build SubMatrix structures based on estimated upper bound on row-length
- NativeRNG for pynn_spinnaker - next should PROBABLY throw
- NativeRNG Connector parameters
- ConnectionBuilder region to write out parameters etc
- AllToAll and FixedProbability connectors; and Uniform and Constant parameters for static synaptic matrices
- OneToOneConnector
- SynapticMatrix region
- PostEventHistory for PostEventHistory<void, NumEntries>
This could be done using lazy_param_map functions, similar to the refractory period for neurons
Using SpikeSourcePoisson to provide input to all neurons in a network is bad:
As PR neural populations already receive input via DMA'd currents, adding current source populations to implement the http://data.andrewdavison.info/docs/PyNN/reference/electrodes.html API would be relatively trivial.
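For reference, this is roughly what that electrodes API looks like in PyNN 0.8 (using pynn_spinnaker as the simulator module is an assumption here, since the backend does not implement these current sources yet):

import pynn_spinnaker as sim

sim.setup(timestep=1.0)
pop = sim.Population(100, sim.IF_curr_exp())

# Inject a 0.5 nA step current between 20 ms and 80 ms
pulse = sim.DCSource(amplitude=0.5, start=20.0, stop=80.0)
pulse.inject_into(pop)

sim.run(100.0)
sim.end()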
Cell types should produce a shift of 1024, i.e. 1, 0.5, 0.25, 0.125
The hypergeometric variate generator, used for the FixedTotalNumberConnector(with_replacement=False) connector, fails for large parameter values.
The hypergeometric generator generates a sample from the hypergeometric distribution with parameters ngood, nbad, and nsample. This is the distribution over the number of red balls we find when we sample nsample times without replacement from an urn containing ngood red balls and nbad white balls.
The generator works only if ngood, nbad, and nsample are less than 2^16.
In our case nbad is the size of the submatrix of the synaptic matrix we are dealing with, minus the number of synapses within the submatrix. This can take values at least as high as 1024^2.
Note that this sampler is only used for the FixedTotalNumberConnector with with_replacement=False. This is not a commonly used connector; it is currently unimplemented in PyNN, which ignores the with_replacement flag.
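As a host-side reference for the semantics (numpy is used purely for illustration; the on-chip generator is a fixed-point implementation, and the parameter values below are made up):

import numpy as np

rng = np.random.default_rng(seed=1)

# Distribution over the number of red balls drawn when sampling nsample
# times without replacement from an urn of ngood red and nbad white balls.
# nbad (per the description above) can reach 1024 ** 2, well over the
# 2 ** 16 limit of the on-chip generator.
ngood = 3000
nbad = 1024 ** 2 - 3000
nsample = 2000
print(rng.hypergeometric(ngood, nbad, nsample))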
Because the actual rows can become so short, max synaptic event rate is potentially a poor metric. SynapseCluster could partition pre-synaptically based on CPU cycles, using a more complex constant + N * synapse model of row cost and estimates of the number of extension rows based on the delay distribution.
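For illustration, the proposed row-cost model is just an affine function of row length; the constants here are hypothetical placeholders, not measured values:

ROW_FIXED_CYCLES = 40     # hypothetical per-row DMA/setup overhead
CYCLES_PER_SYNAPSE = 8    # hypothetical per-synapse processing cost

def row_cost_cycles(num_synapses):
    # constant + N * synapse model of row cost
    return ROW_FIXED_CYCLES + num_synapses * CYCLES_PER_SYNAPSE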
- IF_cond_exp performance - 167 cycles per neuron
- IF_curr_ca2_adaptive performance
- IF_curr_dual_exp performance

Right hand side is different
There is not enough space to duplicate IF_curr_exp neuron and synapse parameters for 1024 neurons. The solution is probably to store immutable parameters once per unique parameter configuration and have a num-neurons-long array of uint16_t indices pointing to each neuron's immutable parameter set, or a neuron-long array of pointers.
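For instance, a minimal host-side sketch using numpy (the function name and the 2-D parameter array layout are assumptions, not the repo's actual code):

import numpy as np

def deduplicate_params(neuron_params):
    # neuron_params: (num_neurons, num_params) array of per-neuron values.
    # Collapse to the unique parameter configurations plus, for each
    # neuron, a uint16_t index into them.
    unique_configs, indices = np.unique(neuron_params, axis=0,
                                        return_inverse=True)
    assert len(unique_configs) <= 0xFFFF, "too many unique configurations"
    return unique_configs, indices.astype(np.uint16)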
These should be converted back to milliseconds
Connectors are PyNN objects so it should be clear that these are internal additions
As discussed with @grey-area, negative weight or delay parameters can come about due to:
Proposal: replace

enum DMATag
{
  DMATagSpikeDataRead,
  DMATagMax,
};

with:

enum class DMATag
{
  SpikeDataRead,
  Max,
};

Possibly also set the underlying type to uint32_t to ensure good ARM code generation.
http://blog.smartbear.com/c-plus-plus/closer-to-perfection-get-to-know-c11-scoped-and-based-enum-types/
They kinda should as spike sources are quite big
Setting parameters within a view definitely doesn't work
It would be very useful if the connection builder had the ability to return stats, e.g. the number of overflowing rows
These are hooked into the connection building so they look crap - at minimum, unhook them!
Fixed probability connector should have this flag
max_spike_blocks should be the number of non-empty bins in the sum of bincounts for each spike train in the vertex. In the constructor, spike times could be quantized so they can be used both in sizeof and write_subregion_to_file
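A sketch of that calculation using numpy (function and parameter names are hypothetical):

import numpy as np

def count_spike_blocks(spike_trains, sim_timestep_ms, duration_ms):
    num_bins = int(np.ceil(duration_ms / sim_timestep_ms))
    summed = np.zeros(num_bins, dtype=np.int64)
    for times in spike_trains:
        # Quantize spike times to timestep bins (clamped to the final bin)
        quantized = np.minimum(
            (np.asarray(times) / sim_timestep_ms).astype(int), num_bins - 1)
        summed += np.bincount(quantized, minlength=num_bins)
    # Number of non-empty bins in the sum of the per-train bincounts
    return np.count_nonzero(summed)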
This means the network tester can be used without special code
This is starting to break stuff
Will also need to check the matrix doesn't exceed its allocated size
Back propagation of spikes via the router for learning seems to result in a lot of dropped packets
e.g. RandomDistribution("normal_clipped", [1.5, 0.75, 0.1, np.inf]), which is pretty typical usage.
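For reference, the equivalent keyword form in PyNN 0.8 (assuming pyNN.random's standard API; values as above):

import numpy as np
from pyNN.random import RandomDistribution

weights = RandomDistribution("normal_clipped",
                             mu=1.5, sigma=0.75, low=0.1, high=np.inf)
print(weights.next(5))  # five samples, all clipped to lie in [0.1, inf)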
This requires passing vertex slices etc. through
The synapse constraint is clamped down by the population size (1000) to 1000, therefore it is no longer rounded to 4 * self.neuron_j_constraint
Simulator._constrain_cluster could return a chip count estimate. Perhaps the algorithm could be something like:
# List of spare cores in ascending order
spare_cores = []
num_chips = 0
...
# Where constraints are built
num_cluster_cores = 1 + len(n.input_verts)

# Loop through list of chips with spare cores
# **TODO** could we use bisect here
spare_cores_found = False
for i in range(len(spare_cores)):
    # If there is enough space on this chip
    if spare_cores[i] >= num_cluster_cores:
        spare_cores[i] -= num_cluster_cores
        spare_cores_found = True
        break

# **TODO** mechanism to remove zero entries from spare_cores list
# If no chip was found with sufficient resources to contain this cluster, add a chip and
# a new spare_cores entry containing the cores that remain on it after adding this cluster
if not spare_cores_found:
    num_chips += 1
    spare_cores.append(16 - num_cluster_cores)
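On the bisect TODO above: if spare_cores is kept sorted ascending, something like the following sketch would work (the function name and return convention are mine); re-inserting only non-zero remainders also covers the zero-entry TODO:

import bisect

def allocate_cluster_cores(spare_cores, num_cluster_cores, cores_per_chip=16):
    # Find the first chip with at least num_cluster_cores free
    i = bisect.bisect_left(spare_cores, num_cluster_cores)
    if i < len(spare_cores):
        remaining = spare_cores.pop(i) - num_cluster_cores
        if remaining > 0:
            # Re-insert the remainder, keeping the list sorted
            bisect.insort(spare_cores, remaining)
        return 0  # no new chip needed
    # No existing chip has room - add a new chip
    bisect.insort(spare_cores, cores_per_chip - num_cluster_cores)
    return 1  # one new chip added

# Usage: num_chips += allocate_cluster_cores(spare_cores, num_cluster_cores)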
Figure out where time is being spent in the Python side of P.R. and attempt to optimise.
This is generally not an issue, but when the neuron processor runs out of CPU, timer interrupts get skipped and data doesn't get written, leading to crap being returned.
Separating fast and slow sources MAY be optimal, but:
- lazy_param_map.apply_indices crud could be removed
- regions.SpikeSourcePoisson could be removed
- DMAs should be started before rows are processed so they can occur while the CPU is busy
However it's a pain - A+ and A- are parameters of SpikePairRule but need to be scaled based on the weight dependence
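A sketch of one plausible scaling, assuming an additive weight dependence with bounds w_min and w_max (the function name and the exact scaling rule are assumptions):

def scale_spike_pair_params(a_plus, a_minus, w_min, w_max):
    # Scale the dimensionless A+ / A- by the weight range before writing
    # them into the synaptic parameter region
    weight_range = w_max - w_min
    return a_plus * weight_range, a_minus * weight_range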