Comments (10)
Also note that the input histograms are drastically different.
from hnn-core.
Sorry, I didn't look at the zip file. It might be better to share via a gist so I can copy-paste. You can have the param file as a dictionary in the Python script to avoid loading it.
Could it be that `sync_evinput` is ignored somewhere? The input histograms being different is certainly weird.
@jasmainak try this.
> Could it be that `sync_evinput` is ignored somewhere? The input histograms being different is certainly weird.
Possibly, though I don't understand why some trials would still have a small amount of jitter such that entire clumps of trials are still exactly identical. Note that `sync_evinput = True` for this parameter set; however, the problem persists when `sync_evinput = False`.
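For context, here is a minimal sketch of the difference between synchronous and jittered evoked inputs being discussed. This is illustrative only and not hnn-core's implementation; the function and parameter names are made up for the example.

```python
import random

def event_times(mean_t, std_t, n_cells, sync, seed=0):
    """Illustrative only: draw evoked-input event times for n_cells cells."""
    rng = random.Random(seed)
    if sync:
        # one shared draw: every cell receives the input at the same time
        t = rng.gauss(mean_t, std_t)
        return [t] * n_cells
    # independent per-cell draws: each cell's input time is jittered
    return [rng.gauss(mean_t, std_t) for _ in range(n_cells)]

sync_times = event_times(25.0, 2.5, n_cells=4, sync=True)
jittered_times = event_times(25.0, 2.5, n_cells=4, sync=False)
```

If `sync_evinput` were silently ignored, one would expect the second (jittered) behavior even when the first was requested, which is why the identical histograms are surprising.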
I also tried running the same simulation with 25 trials using the `JoblibBackend` and got the following image. @blakecaldwell isn't the `JoblibBackend` supposed to provide jitter across trials?
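To illustrate the failure mode being described, here is a hypothetical sketch (not hnn-core code): if every trial is launched with the same seed, the "random" draws are identical across trials, parallel backend or not; offsetting the seed per trial restores variability.

```python
import random

def run_trial(seed):
    # stand-in for one simulation trial's random draws
    rng = random.Random(seed)
    return [round(rng.uniform(0.0, 1.0), 3) for _ in range(3)]

# same seed for every trial -> identical "trials", no jitter
same = [run_trial(42) for _ in range(3)]

# per-trial seed offset -> distinct trials
varied = [run_trial(42 + i) for i in range(3)]
```

The backend only distributes work; whether trials differ depends entirely on how each trial's seed is derived.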
It could be that using seeds to jitter the trials may not be working in this case for whatever reason. Will need to investigate. Should we set up another pair programming session @rythorpe?
By the way, I do think we should move away from the "let's compare to HNN as ground truth" philosophy at some point, because HNN does not have any unit tests either, so there isn't really a strong notion of correctness in that sense. What we have in HNN is outputs that make sense for some scenarios.
I have seen software where two bugs cancel each other out and the output makes sense for some inputs but not for others. We need a stronger definition of what counts as a bug; otherwise it will hamper development of the software.
> It could be that using seeds to jitter the trials may not be working in this case for whatever reason. Will need to investigate. Should we set up another pair programming session @rythorpe?
@jasmainak sure, let's give it a shot!
> By the way, I do think we should move away from the "let's compare to HNN as ground truth" philosophy at some point, because HNN does not have any unit tests either, so there isn't really a strong notion of correctness in that sense. What we have in HNN is outputs that make sense for some scenarios.
>
> I have seen software where two bugs cancel each other out and the output makes sense for some inputs but not for others. We need a stronger definition of what counts as a bug; otherwise it will hamper development of the software.
We can discuss this more in person, but as a contributor, I completely agree that HNN shouldn't be used as ground truth. As a user with a previously created, HNN-generated param file, I just want to identify what is wrong with my param file so that I can run similar (i.e., not necessarily identical) simulations in hnn-core.
I think this issue represents a case where a (potential) bug has emerged that prevents hnn-core from accomplishing a basic use case that HNN currently implements correctly (i.e., providing uniform stochasticity across simulation trials). Most likely, the discrepancy outlined in this issue is not a bug in hnn-core but rather a difference in how HNN and hnn-core reference some elusive parameter. However, if a bug does indeed exist that has eluded us despite hnn-core's tests, the functionality of HNN can provide us with a useful sanity check.
> By the way, I do think we should move away from the "let's compare to HNN as ground truth" philosophy at some point, because HNN does not have any unit tests either, so there isn't really a strong notion of correctness in that sense. What we have in HNN is outputs that make sense for some scenarios.
At some point, `hnn-core` will be considered "ground truth", or more accurately, the "current state of the model". It is important to systematically understand where differences are coming from, whether a bug in new code or a fix in new code. Both have been found while developing `hnn-core`.
@rythorpe `hnn-core` `master` has, in my experience, never provided variable results for different trials, no matter the backend. There is a bug in `hnn-core`; I forget exactly where, maybe in the seeds for different trials.
If we can reproduce HNN seeding in the first pull request, then we can at least run sanity checks for new code. We can then have another issue/PR for more logical seeding of random variables in each trial.
> @rythorpe `hnn-core` `master` has, in my experience, never provided variable results for different trials, no matter the backend. There is a bug in `hnn-core`; I forget exactly where, maybe in the seeds for different trials.
@blakecaldwell but try this:

```shell
$ git checkout 015124c36ba0632391a2532a1166c201e1153841
$ python examples/plot_simulate_evoked.py
```

It did have randomness before ...
@jasmainak, indeed, but there is still a problem with the seeds where more than one trial uses the same seed (and produces identical results). This is described in more detail in #113.
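A sketch of the kind of fix this implies (the offset scheme below is assumed purely for illustration; the actual mechanism is discussed in #113): derive one unique seed per trial from a base seed, so that no two trials ever share generator state.

```python
def trial_seeds(base_seed, n_trials):
    # simple per-trial offset scheme, assumed for illustration only
    return [base_seed + i for i in range(n_trials)]

# 25 trials -> 25 distinct seeds, so no two trials can be identical
seeds = trial_seeds(1000, 25)
```

Any scheme works as long as the mapping from (base seed, trial index) to seed is injective and reproducible, so that reruns with the same base seed give the same per-trial results.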