Comments (14)
I tried joblib and got the "could not pickle" error. I'm pretty sure it's not going to work unless we use another process, as in pull #48 (which uses MPI to spawn a separate process). Multiprocessing should work in theory, but I need to experiment more with the different backends (spawn vs. fork vs. forkserver). Fork should be avoided for MPI compatibility (needed for running on a cluster); I also think it was the option that caused intermittent lockups due to the GIL (I didn't investigate very thoroughly).
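For reference, the start method can be selected explicitly with the standard library; a minimal sketch (not hnn-core code) of picking spawn to sidestep the fork-related problems mentioned above:

```python
import multiprocessing as mp

# List the start methods available on this platform,
# e.g. ['fork', 'spawn', 'forkserver'] on Linux.
print(mp.get_all_start_methods())

# Request a context that uses spawn: children start as fresh
# interpreters instead of being fork()ed, which avoids
# fork-after-MPI-init issues (at the cost of slower startup).
ctx = mp.get_context("spawn")
pool_factory = ctx.Pool  # pools created from this context use spawn
```

joblib accepts a similar choice via its backends, but which backend interacts best with MPI would need the experimentation described above.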
from hnn-core.
Another parallelization point is running trials in parallel.
8b6467a#diff-cbfdde8e3e8b0f3ad7c8ab1ffce9a9c7R43
Yes, it's too bad that NEURON objects can't be pickled. MPI is the heavyweight solution to the problem, but it's necessary for large job placement. We should look into NetPyNE's handling of batch jobs and see whether it can be adapted to trials. Do they use the NEURON bulletin board?
Have you looked at https://github.com/jasmainak/mne-neuron/pull/44/files btw? It was WIP and it kind of works. My solution to the pickling error was to pass the param dictionary (which is easy to pickle), recreate the network with different seeds, and then simulate the dipole. This should be equivalent(?), though probably not exactly, because the way I change the random seeds differs slightly from how it was done historically.
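The pattern can be sketched like this. Everything below is a toy stand-in (the PRNG plays the role of the Network and dipole); only the idea of rebuilding from a picklable params dict with a per-trial seed comes from #44:

```python
import random

def _clone_and_simulate(params, trial_idx):
    """Rebuild everything from the picklable params dict inside the worker.

    Toy stand-in: the Network and dipole are faked with a PRNG.
    """
    rng = random.Random(params["prng_seedcore"] + trial_idx)  # per-trial seed
    n_cells = params["n_cells"]  # stand-in for Network(params)
    return [rng.gauss(0.0, 1.0) for _ in range(n_cells)]  # stand-in "dipole"

params = {"n_cells": 4, "prng_seedcore": 42}  # plain dict -> picklable
dipoles = [_clone_and_simulate(params, i) for i in range(3)]

# With joblib the loop above becomes, e.g.:
#   from joblib import Parallel, delayed
#   dipoles = Parallel(n_jobs=3)(
#       delayed(_clone_and_simulate)(params, i) for i in range(3))
```

Only `params` crosses the process boundary, so joblib never has to pickle the network itself.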
I'm going to look at your bigger PR later tomorrow or Sunday to get a sense of what you have done.
What is the NEURON bulletin board? I'm definitely not a parallel processing expert :-)
Oh, interesting. I didn't know the pickling error was gone, but it makes sense to recreate the network after parallelization. I see how this relates to #47. Unfortunately, with the scheme in #44, results will not be comparable to the original HNN. In 8b6467a I brought in code from the original HNN that implemented multiple trials with the same vectors for recording the dipole; the time offset in the vector is adjusted for each trial. There's probably history behind that design choice, but it's strange code, and it introduces a data dependency from one trial to the next. Let's deal with this once it's no longer necessary to reproduce HNN results?
I haven't used the bulletin board scheme, but the core difference is pc.submit():
https://www.neuron.yale.edu/neuron/static/py_doc/modelspec/programmatic/network/parcon.html#ParallelContext.submit
An old commit that added NEURON BB use to NetPyNE:
suny-downstate-medical-center/netpyne@1a7e195
I also intend to understand the usefulness of pc.subworlds():
https://www.neuron.yale.edu/neuron/static/py_doc/modelspec/programmatic/network/parcon.html#ParallelContext.subworlds
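As a rough sketch of the bulletin-board (master/worker) pattern from those docs, hedged since I haven't run it against NEURON myself: the master posts tasks with pc.submit() and drains return values while pc.working() is truthy; the fallback branch just runs the trials serially when NEURON isn't importable.

```python
def run_trial(trial_idx):
    # Stand-in for one simulation trial.
    return trial_idx * trial_idx

results = []
try:
    from neuron import h
    pc = h.ParallelContext()
    pc.runworker()               # ranks > 0 block here and pull tasks
    for i in range(4):
        pc.submit(run_trial, i)  # post each trial to the bulletin board
    while pc.working():          # master: wait for any submission to finish
        results.append(pc.pyret())  # fetch its Python return value
    pc.done()                    # release the workers
except ImportError:
    results = [run_trial(i) for i in range(4)]  # serial fallback

# Note: with the bulletin board, results arrive in completion
# order, not submission order.
```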
closed by #44
@jasmainak I feel this needs to be reopened. The HNN GUI needs ParallelContext. As a user tunes parameters, the time they have to wait for a single trial to complete is painful, and the only way to decrease the runtime of a single trial is through ParallelContext. In fact, running 4 trials in order with each trial using 4 cores is faster than running 4 trials simultaneously, each on one core.
Also, I want to reopen the discussion about how Network is instantiated multiple times: once in the example scripts, and then again in _clone_and_simulate(). Can you reiterate why this is desirable?
It results in the following error if I try using ParallelContext:
NEURON: gid=0 already exists on this process as an output port
near line 0
dp_total_L5 = 0.
^
ParallelContext[1].set_gid2node(0, 0)
oc_restore_code tobj_count=1 should be 0
@blakecaldwell could you share some benchmarks showing how much faster it is to run 4 trials in order with each trial using 4 cores vs. 4 trials simultaneously on 4 cores? I'm a bit wary of adding nested parallelism since hnn-core is not mature software and we're not ready for it. However, I think it's fine to have two different modes of parallelism -- "parallel-trial" and "parallel-neuron", or something of that sort.
Regarding instantiating the network multiple times: the reason is that the Network object in hnn-core is a bit of a mess and cannot be pickled (all arguments to the function being parallelized must be picklable for joblib to work). Thus the workaround is to pass the params dictionary instead. It's a bit of a hack, so we should indeed fix this.
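The constraint is easy to check directly with the standard library. A quick illustration of why the plain params dict works where an object carrying process-local state does not (a threading.Lock stands in here for the handles a NEURON-backed Network holds):

```python
import pickle
import threading

# The params dict is plain Python data, so it round-trips through pickle.
params = {"n_cells": 100, "tstop": 250.0}
restored = pickle.loads(pickle.dumps(params))

# Pickling process-local state raises TypeError, which joblib
# surfaces as the "could not pickle" error.
try:
    pickle.dumps(threading.Lock())
    lock_picklable = True
except TypeError:
    lock_picklable = False
```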
@jasmainak I don't have benchmarks saved; I'd need to run them again on an old version of mne-neuron that works with ParallelContext. I ran them once to see if it was a direction worth pursuing. It wasn't, so I just stuck with the status quo, using ParallelContext. Honestly, I'm not in favor of using joblib because of the pickling issue and because there's no performance benefit, though I've only given anecdotal evidence. So I would be happy with two modes.
We could go back to only passing the params dict to simulate_dipole? I like the idea of abstracting away everything NEURON-related in code that calls hnn-core. I can't think of what a user would want to do with a Network object (which is what having it as a separate step buys you). I recall that your original reasoning was for a cleaner API. From #40:
The way I see it, the network object specifies how cells are connected, so the user should have access to it and be able to add layers of neurons and specify connectivity structure.
Changing the network model is a pretty low-level change, not something HNN is intended for. I would recommend that users go to NetPyNE for this.
I agree that HNN is not intended for changing the network per se. These could be private classes/functions, for instance.
But having a clean API will help you manipulate objects more easily and speed up development for the future. I agree that eventually I want to recommend that users go to NetPyNE, but unfortunately the code there needs cleaning up too, and I'd rather start with a smaller problem with a few files than a big mass of code ...
But having a clean API will help you manipulate objects more easily and speed up development for the future.
This is a truism of software engineering (I have a background in this too). Could you clarify the comment with a concrete example related to separating network instantiation? I don't see how it's cleaner.
To me, an API with "might be useful in the future" features is not cleaner. If a feature isn't going to be used, it should be deprecated, right? I thought that's what you were saying when I originally proposed using hnn-core with a Qt interface.
see my pull request here: #59
I want to be able to do simple things like this. It's not obvious to me how to do this in NetPyNE or hnn-core currently. The cell model is already there, so adding some extra flexibility will go a long way. And I don't mean adding extra features -- just the same features, but in a more modular and clean way.
@jasmainak Is there anything here that you still want to pursue?
nopes, okay to close!