
spikeinterface's Introduction

SpikeInterface: a unified framework for spike sorting


⚠️⚠️⚠️ New features under construction! 🚧🚧🚧: after the 0.100.0 release (and related bug fixes), the next release will contain a major API improvement: the SortingAnalyzer. To read more about this, check out the enhancement proposal. Please refer to the stable documentation here.

SpikeInterface is a Python framework designed to unify preexisting spike sorting technologies into a single code base.

Please Star the project to support us and Watch to always stay up-to-date!

With SpikeInterface, users can:

  • read/write many extracellular file formats.
  • pre-process extracellular recordings.
  • run many popular, semi-automatic spike sorters (also in Docker/Singularity containers).
  • post-process sorted datasets.
  • compare and benchmark spike sorting outputs.
  • compute quality metrics to validate and curate spike sorting outputs.
  • visualize recordings and spike sorting outputs in several ways (matplotlib, sortingview, jupyter, ephyviewer).
  • export a report and/or export to Phy.
  • use a powerful Qt-based viewer from the separate spikeinterface-gui package.
  • build your own sorter from powerful sorting components.

Documentation

Detailed documentation of the latest PyPI release of SpikeInterface can be found here.

Detailed documentation of the development version of SpikeInterface can be found here.

Several tutorials to get started can be found in spiketutorials.

There are also some useful notebooks on our blog that cover advanced benchmarking and sorting components.

You can also have a look at the spikeinterface-gui.

How to install spikeinterface

You can install the latest version of spikeinterface with pip:

pip install spikeinterface[full]

The [full] option installs all the extra dependencies for all the different sub-modules.

To install all interactive widget backends, you can use:

 pip install spikeinterface[full,widgets]

To get the latest updates, you can install spikeinterface from source:

git clone https://github.com/SpikeInterface/spikeinterface.git
cd spikeinterface
pip install -e .
cd ..
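
To quickly check that the install worked, you can import the package and print its version; the spikeinterface.full import (available in recent versions) pulls in all sub-modules at once. A minimal check:

import spikeinterface as si
print(si.__version__)

# convenience import of all sub-modules (extractors, preprocessing, sorters, postprocessing, ...)
import spikeinterface.full as si_full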

Citation

If you find SpikeInterface useful in your research, please cite:

@article{buccino2020spikeinterface,
  title={SpikeInterface, a unified framework for spike sorting},
  author={Buccino, Alessio Paolo and Hurwitz, Cole Lincoln and Garcia, Samuel and Magland, Jeremy and Siegle, Joshua H and Hurwitz, Roger and Hennig, Matthias H},
  journal={Elife},
  volume={9},
  pages={e61834},
  year={2020},
  publisher={eLife Sciences Publications Limited}
}

spikeinterface's People

Contributors

alejoe91, bendichter, chrishalcrow, chyumin, codycbakerphd, colehurwitz, cwindolf, dradeaw, ferchaure, h-mayorquin, joeziminski, jsiegle, juleslebert, juliasprenger, khl02007, luiztauffer, magland, maurotoro, mhhennig, oliche, pauladkisson, pre-commit-ci[bot], rkim48, saksham20, samuelgarcia, shawn-guo-cn, tombugnon, tomdonoghue, yger, zm711


spikeinterface's Issues

Saving/Loading combined recording/sorting data

So I am having a bit of a conceptual problem with trying to design a pipeline (I am quite new to electrophysiology). Essentially I want to load my file, sort the data, export it to a NEO compatible format, and import it to Elephant for further analysis. I have got up to the point of sorting the data, and now I am trying to figure out the best way to export it for further analysis.

However, it seems that for all file types, SpikeInterface writes Recording and Sorting data to different files... and I am wondering how this works for downstream analysis. I haven't worked with NEO/Elephant at all yet, but perhaps I can just load both files and merge them within Neo/Elephant?

For example, I want to use the NixIO or Kwik extractors to write Recording and Sorting data... but would it be possible to combine these data into a single data source?

Hope this makes sense.
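
One way to keep the traces and the spike trains together for Elephant is to assemble a Neo Block yourself and write it to a single NIX file. A minimal sketch, assuming the old spikeextractors-era API (the toy_example helper, file name and unit scaling are illustrative, and nixio must be installed):

import quantities as pq
import neo
import spikeinterface.extractors as se

recording, sorting = se.example_datasets.toy_example(duration=10)  # toy data just for the sketch

fs = recording.get_sampling_frequency()
traces = recording.get_traces().T  # Neo expects (samples, channels)

block = neo.Block(name='combined recording + sorting')
seg = neo.Segment(name='segment 0')
block.segments.append(seg)

# traces as one AnalogSignal
seg.analogsignals.append(neo.AnalogSignal(traces, units='uV', sampling_rate=fs * pq.Hz))

# one SpikeTrain per sorted unit, converted from sample frames to seconds
t_stop = recording.get_num_frames() / fs
for unit_id in sorting.get_unit_ids():
    spike_times = sorting.get_unit_spike_train(unit_id) / fs
    seg.spiketrains.append(neo.SpikeTrain(spike_times * pq.s, t_stop=t_stop * pq.s,
                                          name=f'unit {unit_id}'))

io = neo.NixIO('combined.nix', mode='ow')  # single file holding both data types
io.write_block(block)
io.close()

Elephant can then read the Block back with neo.NixIO and work on the AnalogSignal and SpikeTrains directly.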

Getting started tutorial use of mountainsort throws error

When I run this line:

sorting_MS4 = ss.run_mountainsort4(recording=recording_cmr, detect_threshold=6)

I get the following error:


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-13-6da5239fbc7f> in <module>
----> 1 sorting_MS4 = ss.run_mountainsort4(recording=recording_cmr, detect_threshold=6)

~/.conda/envs/mybase/envs/ms/lib/python3.6/site-packages/spikesorters/sorterlist.py in run_mountainsort4(*args, **kargs)
    108 
    109 def run_mountainsort4(*args, **kargs):
--> 110     return run_sorter('mountainsort4', *args, **kargs)
    111 
    112 

~/.conda/envs/mybase/envs/ms/lib/python3.6/site-packages/spikesorters/sorterlist.py in run_sorter(sorter_name_or_class, recording, output_folder, delete_output_folder, grouping_property, parallel, verbose, **params)
     51                          parallel=parallel, verbose=verbose, delete_output_folder=delete_output_folder)
     52     sorter.set_params(**params)
---> 53     sorter.run()
     54     sortingextractor = sorter.get_result()
     55 

~/.conda/envs/mybase/envs/ms/lib/python3.6/site-packages/spikesorters/basesorter.py in run(self)
    100         if not self.parallel:
    101             for i, recording in enumerate(self.recording_list):
--> 102                 self._run(recording, self.output_folders[i])
    103         else:
    104             # run in threads

~/.conda/envs/mybase/envs/ms/lib/python3.6/site-packages/spikesorters/mountainsort4/mountainsort4.py in _run(self, recording, output_folder)
    108             detect_interval=p['detect_interval'],
    109             num_workers=p['num_workers'],
--> 110             verbose=self.verbose
    111         )
    112 

TypeError: mountainsort4() got an unexpected keyword argument 'verbose'

Packages:
ml-ms4alg 0.2.3
spikecomparison 0.2.0
spikeextractors 0.7.0
spikeforestwidgets 0.1.4
spikeinterface 0.9.0
spikemetrics 0.1.2
spikesorters 0.2.0
spiketoolkit 0.5.0
spikewidgets 0.3.1

I installed spikeinterface and its related packages by doing git clone and pip install .

Temp resources clogging computer

Hi,
I am having issues with the computer being clogged with temp waveform files.
I have ~25 minute recordings that I sorted. I am caching the recording and running a multi sorting comparison.
When reaching the export to phy of the agreement sorting (after curation based on SNR, firing rate and ISI violation) I have around 300 GB of temp files. Is there any way to overcome this problem?
Thanks,
Ben

installation of sorters

After following all installation instructions, only Herding Spikes is available in spikeinterface when printing the available sorters.

Tried on Windows 7 and 10, with Python 3.7 and 3.7.4, Anaconda3, and Miniconda.
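
For reference, only the sorters whose own packages are importable show up as installed; something like this distinguishes the two lists (assuming the usual sub-module import works in your environment):

import spikeinterface.sorters as ss

print(ss.available_sorters())   # every sorter SpikeInterface wraps
print(ss.installed_sorters())   # only those whose dependencies are importable here

Each sorter (Kilosort, IronClust, SpyKING CIRCUS, Herding Spikes, ...) has to be installed separately before it appears in the second list.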

Examples are no longer notebooks

So we deleted all the examples that were originally notebooks. Are we ok with not having any notebook examples? Do you think it would make sense to have those as well?

Release

We should release all packages and spikeinterface soon :)

Port from spiketoolkit examples.

Some notebooks also need to be ported to the spikeinterface doc:

  • curation_sorting_extractor_example.ipynb
  • quality_metrics_curation_example.ipynb
  • validation_example.ipynb

The second one depends on a dataset and so cannot be included in an auto build,
unless the file were directly downloadable and quite lightweight.

What's the plan?

I can start porting the first and third one.

@colehurwitz @alejoe91

AutoRecordingExtractor local kachery db error in examples/modules/widgets/plot_3_recsort_gallery.py

Hi all. I was wondering if you are aware of a potential bug in plot_unit_waveforms() and perhaps other functions when loading from an auto sorting extractor using a kachery pointer. I created my own kachery pointers to my recording and sorting, and that's when I run into this problem. The toy example in examples/modules/widgets/plot_3_recsort_gallery.py works fine for me. With my own data, the problem only occurs in functions from plot_3_recsort_gallery.py, while galleries 1 and 2 work fine with the sorting and recording loaded by the auto sorting extractor.

input snippet:

from spikeforest2_utils import AutoRecordingExtractor, AutoSortingExtractor
import kachery as ka
import spikeinterface.widgets as sw
import pickle

universal_sorting_path = ka.store_file(sorting_out_short)

recording = AutoRecordingExtractor(recording_path, download=False)
sorting_true = AutoSortingExtractor(universal_sorting_path)

print(recording.get_channel_locations())
print(sorting_true.get_unit_ids())

with open('plot_objs.pkl', 'wb') as output:
    #WORKS FINE:   
    w_ts = sw.plot_timeseries(recording)
    pickle.dump(w_ts, output, pickle.HIGHEST_PROTOCOL)
       
    #WORKS FINE:
    w_rs = sw.plot_rasters(sorting_true, sampling_frequency=fs)
    pickle.dump(w_rs, output, pickle.HIGHEST_PROTOCOL)

    #DOES NOT WORK:
    w_wf = sw.plot_unit_waveforms(recording, sorting_true, max_spikes_per_unit=100)
    pickle.dump(w_wf, output, pickle.HIGHEST_PROTOCOL)
    
    

output
File "/usr/local/lib/python3.8/dist-packages/spikewidgets/widgets/unitwaveformswidget/unitwaveformswidget.py", line 221, in _do_plot random_wf = st.postprocessing.get_unit_waveforms(recording=self._recording, sorting=self._sorting, File "/usr/local/lib/python3.8/dist-packages/spiketoolkit/postprocessing/postprocessing_tools.py", line 131, in get_unit_waveforms if not recording.check_if_dumpable(): File "/usr/local/lib/python3.8/dist-packages/spikeextractors/baseextractor.py", line 310, in check_if_dumpable return _check_if_dumpable(self.make_serialized_dict()) File "/usr/local/lib/python3.8/dist-packages/spikeextractors/baseextractor.py", line 59, in make_serialized_dict 'key_properties': self._key_properties, 'version': imported_module.__version__, AttributeError: module 'spikeforest2_utils' has no attribute '__version__'

Temporal structure of recordings

Does the NumpyRecordingExtractor allow different trials from the same recording session? Should one just concatenate all the trials and keep track of the boundaries separately?

It appears that different formats allow different levels of information - is it possible to add some documentation on the limitations/information loss when converting from one format to another?
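
At the time of this issue, a workable pattern was to wrap each trial in its own NumpyRecordingExtractor and concatenate them with MultiRecordingTimeExtractor, keeping the trial boundaries yourself; a rough sketch with placeholder arrays (the extractor names follow the old spikeextractors API):

import numpy as np
import spikeinterface.extractors as se

fs = 30000.0
# one (channels, samples) array per trial -- random placeholders here
trials = [np.random.randn(32, 30000) for _ in range(5)]

rec_list = [se.NumpyRecordingExtractor(timeseries=t, sampling_frequency=fs) for t in trials]
recording = se.MultiRecordingTimeExtractor(rec_list)

# trial boundaries (in samples) within the concatenated recording
boundaries = np.cumsum([0] + [t.shape[1] for t in trials])
print(boundaries)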

Spikeinterface has no attributes

I first tried to run spiketoolkit without the spikeinterface meta package, just installed separately. I tried to run the command
import spiketoolkit as st
st.sorters.available_sorters()

and got the error

AttributeError: module 'spiketoolkit' has no attribute 'sorters'

I found an open issue in spiketoolkit that suggested I try running spikeinterface instead as it is now packaged this way.

I tried this and got the same thing:

import spikeinterface as si 
si.sorters.available_sorters()

AttributeError: module 'spikeinterface' has no attribute 'sorters'

It also seems that spikeinterface has no attributes that work (sorters, extractors, etc.). However, when I try it in Spyder IDE, it suggests all of them.
(screenshot: SI_attributeerror)
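
In these releases the sub-modules are not imported automatically with the top-level package, so they have to be imported explicitly, e.g.:

import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw

print(ss.available_sorters())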

Working with .mcd files

Is it possible to open .mcd (Multi Channel Systems) files or their raw binary variant (converted using MC Data Tool) in spikeinterface?
I saw the mcsh5 extractor. What are the steps to use the .mcd or converted raw file using this extractor?
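
Assuming the data have been exported to the MCS HDF5 format, the mcsh5 extractor should take the file path directly; the parameter layout below is an assumption and the file name is a placeholder. A raw binary export could instead be loaded with a binary/dat extractor given the sampling rate, channel count and dtype.

import spikeinterface.extractors as se

recording = se.MCSH5RecordingExtractor('my_mcs_recording.h5')
print(recording.get_channel_ids())
print(recording.get_sampling_frequency())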

Getting OSError: [Errno 8] Exec format error when using ss.run_klusta

Hi,
I'm not sure whether here is the right place to report my question.
When I try to execute the ss.run_klusta on my data.
I got OSError: [Errno 8] Exec format error.
I'm using Pycharm 2020.1 Pro. Ed. on macOS Mojave for my data analysis.
My current python version is 3.7.7
I attached the full error messages below.
Hope someone knows how to fix the problem.
And I have no problem running ss.run_mountainsort4 or ss.run_herdingspikes.

Best,
Ming-Ching

(screenshot of the full error message: Screen Shot 2020-04-10 at 15 40 00)

Error when running Ironclust on Mac

After watching the video and following the instructions using spiketutorials, I've run into an error running ironclust.

_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/laurencrew/opt/miniconda3/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
r = call_item()
File "/Users/laurencrew/opt/miniconda3/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 285, in call
return self.fn(*self.args, **self.kwargs)
File "/Users/laurencrew/opt/miniconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 595, in call
return self.func(*args, **kwargs)
File "/Users/laurencrew/opt/miniconda3/lib/python3.8/site-packages/joblib/parallel.py", line 262, in call
return [func(*args, **kwargs)
File "/Users/laurencrew/opt/miniconda3/lib/python3.8/site-packages/joblib/parallel.py", line 262, in
return [func(*args, **kwargs)
File "/Users/laurencrew/opt/miniconda3/lib/python3.8/site-packages/spikesorters/ironclust/ironclust.py", line 225, in _run
raise Exception('ironclust returned a non-zero exit code')
Exception: ironclust returned a non-zero exit code
"""

The above exception was the direct cause of the following exception:

Exception Traceback (most recent call last)
~/opt/miniconda3/lib/python3.8/site-packages/spikesorters/basesorter.py in run(self, raise_error, parallel, n_jobs, joblib_backend)
159 else:
--> 160 Parallel(n_jobs=n_jobs, backend=joblib_backend)(
161 delayed(self._run)(rec.dump_to_dict(), output_folder)

~/opt/miniconda3/lib/python3.8/site-packages/joblib/parallel.py in call(self, iterable)
1053 with self._backend.retrieval_context():
-> 1054 self.retrieve()
1055 # Make sure that we get a last message telling us we are done

~/opt/miniconda3/lib/python3.8/site-packages/joblib/parallel.py in retrieve(self)
932 if getattr(self._backend, 'supports_timeout', False):
--> 933 self._output.extend(job.get(timeout=self.timeout))
934 else:

~/opt/miniconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
541 try:
--> 542 return future.result(timeout=timeout)
543 except CfTimeoutError as e:

~/opt/miniconda3/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
438 elif self._state == FINISHED:
--> 439 return self.__get_result()
440 else:

~/opt/miniconda3/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
387 if self._exception:
--> 388 raise self._exception
389 else:

Exception: ironclust returned a non-zero exit code

During handling of the above exception, another exception occurred:

SpikeSortingError Traceback (most recent call last)
in
1 # run spike sorting by group
----> 2 sorting_IC = ss.run_ironclust(recording_cache,
3 output_folder='results_split_ic',
4 grouping_property='group', parallel=True, verbose=True)
5 print(f'IronClust found {len(sorting_IC.get_unit_ids())} units')

~/opt/miniconda3/lib/python3.8/site-packages/spikesorters/sorterlist.py in run_ironclust(*args, **kwargs)
374 The spike sorted data
375 """
--> 376 return run_sorter('ironclust', *args, **kwargs)
377
378

~/opt/miniconda3/lib/python3.8/site-packages/spikesorters/sorterlist.py in run_sorter(sorter_name_or_class, recording, output_folder, delete_output_folder, grouping_property, parallel, verbose, raise_error, n_jobs, joblib_backend, **params)
88 verbose=verbose, delete_output_folder=delete_output_folder)
89 sorter.set_params(**params)
---> 90 sorter.run(raise_error=raise_error, parallel=parallel, n_jobs=n_jobs, joblib_backend=joblib_backend)
91 sortingextractor = sorter.get_result()
92

~/opt/miniconda3/lib/python3.8/site-packages/spikesorters/basesorter.py in run(self, raise_error, parallel, n_jobs, joblib_backend)
167 except Exception as err:
168 if raise_error:
--> 169 raise SpikeSortingError(f"Spike sorting failed: {err}. You can inspect the runtime trace in "
170 f"the {self.sorter_name}.log of the output folder.'")
171 else:

SpikeSortingError: Spike sorting failed: ironclust returned a non-zero exit code. You can inspect the runtime trace in the ironclust.log of the output folder.'

Spike validation

Hello, I am trying to use the validation toolkit from Spike interface. I use Kilosort for the spike sorting and then manual curation with phy. I was wondering if I use the validation toolkit, does it check the clusters which were marked as ‘good’ by Kilosort, or does it use the ‘good’ clusters which were marked manually in phy, or both?

Error with using neo based spike extractors

Hello. I have been working to get spikeinterface up and running, but am running into an issue with the NeuralynxRecordingExtractor. To install the packages, I cloned everything from each repository and ran the setup.py in every folder. Initially when I attempted to run NeuralynxRecordingExtractor I got an error that I needed to install neo, which was easily solved by doing so. But I can't seem to get past the following error:

FileNotFoundError Traceback (most recent call last)
in
----> 1 recording = se.NeuralynxRecordingExtractor(file_name='C:/Users/abranch6/Desktop/spikewhatever/data/883-11_RawData_300s.nrd', error_checking=False)

c:\programdata\anaconda3\lib\site-packages\spikeextractors-0.9.1-py3.7.egg\spikeextractors\extractors\neoextractors\neobaseextractor.py in init(self, **kargs)
51 def init(self, **kargs):
52 RecordingExtractor.init(self)
---> 53 _NeoBaseExtractor.init(self, **kargs)
54
55 # TODO propose a meachanisim to select the appropriate channel groups

c:\programdata\anaconda3\lib\site-packages\spikeextractors-0.9.1-py3.7.egg\spikeextractors\extractors\neoextractors\neobaseextractor.py in init(self, block_index, seg_index, **kargs)
27 neoIOclass = eval('neo.rawio.' + self.NeoRawIOClass)
28 self.neo_reader = neoIOclass(**kargs)
---> 29 self.neo_reader.parse_header()
30
31 if block_index is None:

c:\programdata\anaconda3\lib\site-packages\neo\rawio\baserawio.py in parse_header(self)
149
150 """
--> 151 self._parse_header()
152 self._group_signal_channel_characteristics()
153

c:\programdata\anaconda3\lib\site-packages\neo\rawio\neuralynxrawio.py in _parse_header(self)
83 event_annotations = []
84
---> 85 for filename in sorted(os.listdir(self.dirname)):
86 filename = os.path.join(self.dirname, filename)
87

FileNotFoundError: [WinError 3] The system cannot find the path specified: ''

I have made sure that everything is on my python path, and checked all the permissions on all the folders, and can't think of what else to do. Do you have any suggestions of what the problem might be? Thank you in advance!!

How are unit_ids linked to channel_ids between recordings and sortings

Hello,

I am trying to build a sorting extractor for MCS H5 files. I am able to extract the spiketrains by each channel, and assign each of these a unit_id (I am assuming at most one 'unit' per channel). I am a little stuck on how to link these spiketrains to the recordings.

For example, if I want to use:
st.postprocessing.get_unit_waveforms(recordings, sorting, ms_before=1, ms_after=2)

Then I would think that the sorting object should have some attribute 'channel_ids' to know which recording channel to pull the waveforms from. However, it seems that the only things necessary for building a sorting object are two arrays, one for the timestamps and one for the unit_ids.

I am wondering what I need to add, to be able to run get_unit_waveforms( ). For example, should I use:
set_unit_property(unit_ids, 'channel_ids', channel_ids)

I can't figure out how they handle this in other sorting extractors.

My sorting object currently looks something like below when printed...

unit_ids
[ 1 2 3 4 5 6 7 8 9 10]
spike_indexes
[ 812 2608 2861 3719 4900 5062 5388 5575 6418 7484
8207 9992 12209 15112 15706 16533 17435 17508 17850 21099
21407 24460 25028 26140 26850 26882 27180 27526 29174 30143
34677 37132 39569 40039 41176 41849 41998 43455 43933 44085
44389 47825 49483 51557 52088 53824 54346 54985 55880 56561
62242 63385 63622 63994 65198 69223 71343 72158 73611 77606
78189 81099 81670 82169 82342 82533 83289 83419 83721 84226
85528 91341 92214 100190 101696 101822 106568 106773 107013 107834
108885 108963 109649 110943 111988 112289 113718 115033 116184 116213
118819 119347 119899 120993 121854 122173 123675 124399 126162 127218
129536 129987 130467 132251 132611 134788 135547 136121 136779 138021
139073 140821 144269 150008 150631 152495 152691 152943 153729 153851
154128 155461 159224 159259 163465 164645 164924 167211 169220 171018
171173 171643 171767 172726 172990 173142 176693 177834 179768 179957
180308 181236 183641 185598 187117 188982 193769 194394 194758 197106
197328 199704 201615 203193 203278 204636 204985 208187 208581 208688
208846 210645 211688 215058 216044 216159 218114 218586 219213 219257
219572 220332 225000 225127 229581 230588 231174 231203 231835 233794
233998 234159 234303 235093 237431 239748 240746 244816 246015 246230
246717 251757 252416 254892 257437 259480 260847 261325 261425 261455
261577 261924 262372 264143 264723 265318 266451 267394 271511 272653
273359 274751 275017 276347 276691 277679 280604 282775 283236 283893
284211 284658 284721 286441 286916 287368 287492 290390 290515 291018
293876 294447 294773 295390 295827 297622 298587]
spike_labels
[ 8 10 8 5 6 8 8 3 6 10 10 6 3 3 3 10 10 7 5 6 7 5 4 1
10 5 2 8 8 6 9 4 5 8 9 4 8 4 2 5 2 6 2 7 3 9 5 4
6 4 4 4 8 4 2 3 7 7 5 6 9 5 9 5 9 10 4 4 8 2 5 7
10 8 3 3 10 10 6 2 2 6 3 2 3 7 7 1 5 1 4 2 10 4 10 5
5 1 5 1 9 2 9 1 8 7 7 7 6 5 4 3 2 3 3 3 2 8 3 2
10 4 1 10 1 1 1 5 9 2 9 1 4 6 6 8 4 9 9 3 8 7 3 6
8 8 1 2 1 9 9 10 5 5 8 10 5 8 8 5 5 8 7 4 1 10 5 5
7 4 4 5 3 3 9 4 5 10 9 1 3 1 1 3 8 1 10 6 8 6 9 7
2 6 6 3 6 9 1 4 6 9 3 2 2 7 10 10 4 6 10 10 2 2 2 1
9 7 3 4 4 8 6 4 8 9 10 7 7 8 2 1 3 9 2 10 7]
sampling_frequency
[30000.]
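
For reference, a minimal array-based sorting only needs spike frames, unit labels and a sampling frequency; a channel can then be attached as a unit property if you want to carry it along. The property name 'channel_id' below is just a choice, not something get_unit_waveforms requires, since waveforms are extracted from the recording around each spike time. A sketch against the old spikeextractors API:

import numpy as np
import spikeinterface.extractors as se

fs = 30000.0
spike_frames = np.array([812, 2608, 2861, 3719])  # sample index of each spike
spike_labels = np.array([8, 10, 8, 5])            # unit id of each spike

sorting = se.NumpySortingExtractor()
sorting.set_times_labels(times=spike_frames, labels=spike_labels)
sorting.set_sampling_frequency(fs)

# optional bookkeeping: remember a channel for every unit
for unit_id in sorting.get_unit_ids():
    sorting.set_unit_property(unit_id, 'channel_id', int(unit_id))  # placeholder mapping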

AttributeError: module 'spikeinterface.extractors' has no attribute

Hi there!

Let me first say that I really appreciate you creating a python based framework to work with electrophysiology as I have had an easier time getting into python than MATLAB in the early stages of my phd.

I am trying to get spikeinterface to work for the very first time, and created a new environment and did the pip install spikeinterface like described. Wanting to try an example code snippet from your paper on my own data, I get an AttributeError both in iPython and Jupyter Notebook. The code snippet I am testing is

(screenshot of the code snippet from the paper)

Resulting in an AttributeError: module 'spikeinterface.extractors' has no attribute 'MyFormatRecordingExtractor'.

Also, when trying to run one of the examples here on the GitHub site on a different computer, I also get an AttributeError from the same module, but this time finding that spikeinterface.extractors has no attribute "example_dataset".

What am I doing wrong? Appreciate whatever input you can provide.
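
For what it's worth, MyFormatRecordingExtractor in the paper snippet is a placeholder rather than a real class; the idea is to substitute the extractor matching your acquisition system. A hedged sketch (the folder path is illustrative, and the toy-data helper is spelled example_datasets, plural, in the versions I know of):

import spikeinterface.extractors as se

# list the recording extractors that actually ship with spikeextractors
print([e.extractor_name for e in se.recording_extractor_full_list])

# pick the one matching your format, e.g. Open Ephys:
recording = se.OpenEphysRecordingExtractor('path/to/open-ephys-folder')

# the toy dataset lives under example_datasets:
rec_toy, sort_toy = se.example_datasets.toy_example()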

Where and when to build the doc? (and how to cache)

Building the doc requires a lot of CPU and takes some time, more than the tests.

We should decide where and when to build the full documentation and how to cache it for the next build.

Sphinx-gallery caches the build, so only modified examples are rebuilt. But since the doc also depends on 5 packages, sometimes a full build should be done.
We should discuss a strategy for this doc build.

blackrock import error

Hello, I am having a problem when trying to import a blackrock .ns6 file. I have the latest version of neo installed. The error I get is pasted below. Any help would be greatly appreciated, thank you!

In [19]: filename =r'F:\Data\SignalCheck\TY20210308\TY20210308_signalCheck_morning.ns6'

In [20]: recording = se.BlackrockRecordingExtractor(filename)

AttributeError Traceback (most recent call last)
~\anaconda3\lib\site-packages\spikeextractors-0.9.5-py3.8.egg\spikeextractors\extractors\neoextractors\neobaseextractor.py in init(self, block_index, seg_index, **kargs)
62 # Neo >= 0.9.0
---> 63 channel_indexes_list = self.neo_reader.get_group_signal_channel_indexes()
64 except AttributeError:

AttributeError: 'BlackrockRawIO' object has no attribute 'get_group_signal_channel_indexes'

During handling of the above exception, another exception occurred:

AttributeError Traceback (most recent call last)
in
----> 1 recording = se.BlackrockRecordingExtractor(filename)

~\anaconda3\lib\site-packages\spikeextractors-0.9.5-py3.8.egg\spikeextractors\extractors\neoextractors\blackrockextractor.py in init(self, filename, nsx_to_load, block_index, seg_index, **kwargs)
34 def init(self, filename: PathType, nsx_to_load: Optional[int] = None, block_index: Optional[int] = None,
35 seg_index: Optional[int] = None, **kwargs):
---> 36 super().init(filename=filename, nsx_to_load=nsx_to_load,
37 block_index=block_index, seg_index=seg_index, **kwargs)
38

~\anaconda3\lib\site-packages\spikeextractors-0.9.5-py3.8.egg\spikeextractors\extractors\neoextractors\neobaseextractor.py in init(self, block_index, seg_index, **kargs)
64 except AttributeError:
65 # Neo < 0.9.0
---> 66 channel_indexes_list = self.neo_reader.get_group_channel_indexes()
67 num_chan_group = len(channel_indexes_list)
68 assert num_chan_group == 1, 'This file have several channel groups spikeextractors support only one groups'

AttributeError: 'BlackrockRawIO' object has no attribute 'get_group_channel_indexes'

stall in mountainsort?

Hi Spikeinterface team,
I'm psyched to transition to spikeinterface in the lab's pipeline, but I'm having trouble getting mountainsort4 to run correctly. I've created a conda environment with the bare essentials and I've tried both pip install as well as git clone of the various repositories - in both cases, I've found two problems.

First, there's an incompatibility in a variety of function names in /ml_ms4alg/ms4alg.py; the code calls for recording.GetNumFrames() but the function associated with recording is recording.get_num_frames(). There are a lot of these types of switches. But I was able to spend an hour fixing them and now nothing crashes. Which leads to a deeper problem:

(Second) The program seems to stall out forever. I've run this a few ways. If I use mountainsort through spikeinterface (e.g. sorting_MS4 = ss.run_mountainsort4(recording=recording_cmr, detect_threshold=3) ), the program runs, but catches and seems to wait indefinitely while computing PCA features for a channel. This seems stochastic? It'll get through a few channels and then hang - not always on the same one. I think this has something to do with /mountainlab/lib/python3.6/threading.py

I'm not sure how to proceed at this point - the mda extractor works well, as does preprocessing. we just can't get run_mountainsort4 to complete.

Thanks in advance. Please let me know if you need any more information.

  • keith

Getting stuck on the toy example of sorting

I was running the mountainsort4 with python version3.6 with anaconda environment. When I went through the toy example provided on

https://spikeinterface.readthedocs.io/en/latest/getting_started/plot_getting_started.html

I got stuck on the sorting step. It returns the message


  File "/home/chang/anaconda3/envs/myenv1/lib/python3.6/site-packages/spikesorters/mountainsort4/mountainsort4.py", line 110, in _run
    verbose=self.verbose

TypeError: mountainsort4() got an unexpected keyword argument 'verbose'

and can't proceed

Do you know how to solve this? I am a user of previous version ml3 and would like to use some new feature from spikeinterface to extract the spike waveform.

get_unit_waveforms not taking params

I'm trying to get waveforms from a sorting and need 1) only one channel instead of all channels, 2) +/- 1 ms instead of +/- 3, and 3) more spikes than the default 1000. I'm trying two different ways of passing the params to get_unit_waveforms but the output is always the same (e.g. waveforms[0].shape = (1000, 32, 180) instead of e.g. (10000, 1, 60)). Am I doing something wrong?

(screenshot of the two get_unit_waveforms calls)
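
If it helps, in this generation of spiketoolkit the extracted waveforms are cached on the sorting object as spike features, so a second call can return the cached arrays and ignore new parameters; passing recompute_info=True forces re-extraction. The parameter names below follow the old signature as far as I recall, so treat them as assumptions:

import spikeinterface.toolkit as st

waveforms = st.postprocessing.get_unit_waveforms(
    recording, sorting,
    ms_before=1, ms_after=1,
    max_spikes_per_unit=10000,
    recompute_info=True,   # drop cached waveforms and re-extract with these settings
)
print(waveforms[0].shape)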

Real dataset for example

For the comparison example I propose to include a real dataset.
We could include in the git all the npz for one MEArec recording with ks, ks2, tdc, hs.
It should be quite light (I hope) and we could make fancier plots to explain concepts about GT comparison (hungarian vs best, ...).

Documentation time outs

It seems like Read the Docs times out quite often. Any ideas how to fix this?

@samuelgarcia Can you rebuild the current documentation? I made some markdown changes.

If I understand correctly, here you will add a self.reducer corresponding to the average or the median, depending on the discussion on the other PR? Once I have feedback from my lab, I can post the answer here if you want.

Maybe, instead of having a single reference option, you could also have two options:
reference = global, local or single
reducer (or a better word) = average, median
Then the user can select whether they want the average or the median, both for a global and a local CAR. What do you think of it?

Originally posted by @MarineChap in #95 (comment)
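
To make the proposal concrete, here is a small self-contained numpy sketch of what separating the two options could look like (illustrative only, not the spiketoolkit implementation):

import numpy as np

def common_reference(traces, reference='global', reducer='median', groups=None):
    """traces: (n_channels, n_samples) array.
    reference: 'global' re-references against all channels, 'local' within each group.
    reducer: 'average' or 'median' across the reference channels."""
    reduce = np.mean if reducer == 'average' else np.median
    referenced = traces.astype(float).copy()
    if reference == 'global':
        referenced -= reduce(traces, axis=0, keepdims=True)
    elif reference == 'local':
        for group in groups:
            referenced[group] -= reduce(traces[group], axis=0, keepdims=True)
    return referenced

# usage: median-based local CAR over two tetrodes
traces = np.random.randn(8, 1000)
clean = common_reference(traces, reference='local', reducer='median',
                         groups=[[0, 1, 2, 3], [4, 5, 6, 7]])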

Ironclust error

I ran into a strange error. I have been able to run spikeinterface and Ironclust on Windows since I installed it. However, out of nowhere I started receiving this error. I'm not sure how or why this happened as I left it overnight and came back to rerun it with different data and got this error. I appreciate any help you can give me. Thank you.

# run spike sorting by group
sorting_IC = ss.run_ironclust(recording_cache,
                              output_folder='results_split_ic',
                              grouping_property='group', parallel=True, verbose=True)
print(f'IronClust found {len(sorting_IC.get_unit_ids())} units')

TypeError Traceback (most recent call last)
in
1 # run spike sorting by group
----> 2 sorting_IC = ss.run_ironclust(recording_cache,
3 output_folder='results_split_ic',
4 grouping_property='group', parallel=True, verbose=True)
5 print(f'IronClust found {len(sorting_IC.get_unit_ids())} units')

~\spikesorters\sorterlist.py in run_ironclust(*args, **kwargs)
377 The spike sorted data
378 """
--> 379 return run_sorter('ironclust', *args, **kwargs)
380
381

~\spikesorters\sorterlist.py in run_sorter(sorter_name_or_class, recording, output_folder, delete_output_folder, grouping_property, parallel, verbose, raise_error, n_jobs, joblib_backend, **params)
91 verbose=verbose, delete_output_folder=delete_output_folder)
92 sorter.set_params(**params)
---> 93 sorter.run(raise_error=raise_error, parallel=parallel, n_jobs=n_jobs, joblib_backend=joblib_backend)
94 sortingextractor = sorter.get_result()
95

~\spikesorters\basesorter.py in run(self, raise_error, parallel, n_jobs, joblib_backend)
128 def run(self, raise_error=True, parallel=False, n_jobs=-1, joblib_backend='loky'):
129 for i, recording in enumerate(self.recording_list):
--> 130 self._setup_recording(recording, self.output_folders[i])
131
132 # dump again params because some sorter do a folder reset (tdc)

~\spikesorters\ironclust\ironclust.py in _setup_recording(self, recording, output_folder)
159 dataset_dir = output_folder / 'ironclust_dataset'
160 # Generate three files in the dataset directory: raw.mda, geom.csv, params.json
--> 161 se.MdaRecordingExtractor.write_recording(recording=recording, save_path=str(dataset_dir),
162 n_jobs=p["n_jobs_bin"], chunk_mb=p["chunk_mb"], verbose=self.verbose)
163

TypeError: write_recording() got an unexpected keyword argument 'n_jobs'

Problem running spike sorters

Hello,
When I attempt to run spike sorting I receive the error pasted below. I get the same error when I try to run the matlab based sorters I have been able to install(I have tried kilosort2 and waveclus). Thank you for any help!

In[25]: sorting_IC = ss.run_ironclust(recording_cache, output_folder='results_split_ironclust', grouping_property='group', parallel=True)

exception calling callback for <Future at 0x27ac4bfe9d0 state=finished raised BrokenProcessPool>
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\externals\loky\process_executor.py", line 404, in _process_worker
call_item = call_queue.get(block=True, timeout=timeout)
File "C:\Users\paula\anaconda3\lib\multiprocessing\queues.py", line 116, in get
return _ForkingPickler.loads(res)
AttributeError: 'BlackrockRawIO' object has no attribute '__read_nsx_header_variant_a'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\externals\loky_base.py", line 625, in _invoke_callbacks
callback(self)
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\parallel.py", line 347, in call
self.parallel.dispatch_next()
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\parallel.py", line 780, in dispatch_next
if not self.dispatch_one_batch(self._original_iterator):
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\parallel.py", line 847, in dispatch_one_batch
self._dispatch(tasks)
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\parallel.py", line 765, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "C:\Users\paula\anaconda3\lib\site-packages\joblib_parallel_backends.py", line 531, in apply_async
future = self._workers.submit(SafeFunction(func))
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\externals\loky\reusable_executor.py", line 177, in submit
return super(_ReusablePoolExecutor, self).submit(
File "C:\Users\paula\anaconda3\lib\site-packages\joblib\externals\loky\process_executor.py", line 1102, in submit
raise self._flags.broken
joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
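
The traceback suggests the recording object cannot be pickled for the worker processes. Two workarounds commonly suggested at the time (treat them as suggestions, not a guaranteed fix): cache the recording to a plain binary file first, or run the by-group sorting without parallel workers:

import spikeinterface.extractors as se
import spikeinterface.sorters as ss

# 1) dump the lazily-loaded recording to a binary cache that pickles cleanly
recording_cache = se.CacheRecordingExtractor(recording)

# 2) run by group without spawning worker processes
sorting_IC = ss.run_ironclust(recording_cache,
                              output_folder='results_split_ironclust',
                              grouping_property='group',
                              parallel=False)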

Issue With running

Hello

I had this working over the summer - now it stopped working again: here is the code followed by the error:

import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw
import matplotlib.pylab as plt
import numpy as np
import neo

CODE
reader = neo.NeuralynxIO(dirname='/Users/.../PycharmProjects/SpikeGUI/data')
print(reader)
print(reader.segment_count(0))

rec_list = [se.NeuralynxRecordingExtractor(dirname='/.../PycharmProjects/SpikeGUI/data/', seg_index=i)
for i in range(reader.segment_count(0))]
rec = se.MultiRecordingTimeExtractor(rec_list)

channel_ids = rec.get_channel_ids()
fs = rec.get_sampling_frequency()
num_chan = rec.get_num_channels()

print('Channel ids:', channel_ids)
print('Sampling frequency:', fs)
print('Number of channels:', num_chan)

recording_prb = rec.load_probe_file('custom_probe.prb')
print('Channels after loading the probe file:', recording_prb.get_channel_ids())
print('Channel groups after loading the probe file:', recording_prb.get_channel_groups())

w_elec = sw.plot_electrode_geometry(recording_prb, markersize=50)

print(ss.installed_sorters())
print(ss.get_default_params('mountainsort4'))

rec_f = st.preprocessing.bandpass_filter(recording_prb, freq_min=300, freq_max=6000)

w = sw.plot_timeseries(rec_f, trange=[0, 2])

sorting_KL = ss.run_klusta(recording=rec_f)
st.postprocessing.export_to_phy(rec_f, sorting_KL, output_folder='phy_K')

ERROR
runfile('/Users/.../PycharmProjects/SpikeGUI/nueralynx_extract.py', wdir='/Users/.../PycharmProjects/SpikeGUI')
/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/elephant/pandas_bridge.py:22: DeprecationWarning: pandas_bridge module will be removed in Elephant v0.8.x
DeprecationWarning)
11:27:52 [I] klustakwik KlustaKwik2 version 0.2.6
Backend MacOSX is interactive backend. Turning interactive mode on.
Traceback (most recent call last):
File "/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
runfile('/Users/.../PycharmProjects/SpikeGUI/nueralynx_extract.py', wdir='/Users/.../PycharmProjects/SpikeGUI')
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/.../PycharmProjects/SpikeGUI/nueralynx_extract.py", line 11, in
reader = neo.NeuralynxIO(dirname='/Users/hemalsemwal/PycharmProjects/SpikeGUI/data')
File "/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/neo/io/neuralynxio.py", line 36, in init
BaseFromRaw.init(self, dirname)
File "/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/neo/io/basefromrawio.py", line 81, in init
self.parse_header()
File "/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/neo/rawio/baserawio.py", line 151, in parse_header
self._parse_header()
File "/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/neo/rawio/neuralynxrawio.py", line 98, in _parse_header
info = read_txt_header(filename)
File "/Users/.../opt/miniconda3/envs/SpikeGUI/lib/python3.7/site-packages/neo/rawio/neuralynxrawio.py", line 682, in read_txt_header
dt1 = re.search(datetime1_regex, txt_header).groupdict()
AttributeError: 'NoneType' object has no attribute 'groupdict'

Installing 0.9.9 spikeinterface installs the wrong version of spikesorters (and other sub packages)

Installing 0.9.9 of spikeinterface causes the installation of the latest version of spikesorters. spikesorters is not fully backwards compatible. In particular we get the error: AttributeError: module 'spikeinterface.sorters' has no attribute 'installed_sorter_list' with the newer version of spikesorters.

We install spikeinterface as an umbrella package, so when we encountered this issue we downgraded spikeinterface to install spikeinterface=0.9.9. However this had the unfortunate consequence of installing spikesorters=0.4.2 rather than spikesorters=0.3.3 as it was when we had originally installed spikeinterface=0.9.9.

If backwards compatibility isn't guaranteed then requirements.txt should contain an upper limit to the install version of the sub packages. Instead of just spikesorters>=0.3.2, it would make more sense to have spikesorters>=0.3.2,<=0.3.3 to ensure installs of 0.9.9 are compatible with code written to 0.9.9.
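
In the meantime, pinning both packages explicitly at install time avoids the mismatch, e.g.:

pip install spikeinterface==0.9.9 spikesorters==0.3.3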

Supplying sampling_frequency doesn't work in compare_multiple_sorters

The following code doesn't work if the sorters have no sampling_frequency specified, even though sampling_frequency is being passed as argument:

msc = sc.compare_multiple_sorters(sorting_list=[sorting_KL_split, 
                                                sorting_MS4], 
                                  name_list=['KL', 'MS4'],
                                  min_accuracy=0.5,
                                  sampling_frequency=recording.get_sampling_frequency(), 
                                  verbose=True
                                  )

Possible cause:
In MultiSortingComparison._do_matching, the sampling frequency is never passed to SortingComparison
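
Until that is fixed, a workaround is to attach the sampling frequency to each sorting before the comparison (set_sampling_frequency is available on the sorting extractors of that generation):

fs = recording.get_sampling_frequency()
for sorting in [sorting_KL_split, sorting_MS4]:
    sorting.set_sampling_frequency(fs)

msc = sc.compare_multiple_sorters(sorting_list=[sorting_KL_split, sorting_MS4],
                                  name_list=['KL', 'MS4'],
                                  min_accuracy=0.5,
                                  verbose=True)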

"Permission denied" when using se.NwbRecordingExtractor

Hi everyone,

I am trying to build a new spike sorting pipeline using SpikeInterface.

For my recordings I am using the Open Ephys acquisition system, and I am saving my recorded data in the NWB format.

Lastly, I am using the "NWB_Developer_Breakout_Session_Sep2020" tutorial as a starting point. However, I only get as far as the 2nd cell in the jupyter notebook.

I am running using the 64-bit Windows 10 Enterprise OS, and I am opening the jupyter notebook in the Anaconda Prompt (Miniconda3) with administrative privileges.

Below is the code. I have simply put the experiment_1.nwb file, outputted by the Open Ephys system, together with the settings.xml file in a new folder I have called 'nwb-dataset'. This folder exists in the same folder as the jupyter tutorial notebook (i.e., in the same folder as the 'open-ephys-dataset' folder, and the code runs fine when I run the original code on that folder):

import spikeinterface
import spikeinterface.extractors as se 
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook

-------------------------------------------------------------------------

#recording_folder = 'open-ephys-dataset/'
#recording = se.OpenEphysRecordingExtractor(recording_folder)

recording_folder = 'nwb-dataset/'
recording = se.NwbRecordingExtractor(recording_folder)

and here is the error message:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-5-5b1002091086> in <module>
      3 
      4 recording_folder = 'nwb-dataset/'
----> 5 recording = se.NwbRecordingExtractor(recording_folder)

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\spikeextractors\extractors\nwbextractors\nwbextractors.py in __init__(self, file_path, electrical_series_name)
    156         se.RecordingExtractor.__init__(self)
    157         self._path = str(file_path)
--> 158         with NWBHDF5IO(self._path, 'r') as io:
    159             nwbfile = io.read()
    160             if electrical_series_name is not None:

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\pynwb\__init__.py in __init__(self, **kwargs)
    244             elif manager is None:
    245                 manager = get_manager()
--> 246         super(NWBHDF5IO, self).__init__(path, manager=manager, mode=mode, file=file_obj, comm=comm)
    247 
    248     @docval({'name': 'src_io', 'type': HDMFIO, 'doc': 'the HDMFIO object for reading the data to export'},

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\backends\hdf5\h5tools.py in __init__(self, **kwargs)
     66         self.__mode = mode
     67         self.__file = file_obj
---> 68         super().__init__(manager, source=path)
     69         self.__built = dict()       # keep track of each builder for each dataset/group/link for each file
     70         self.__read = dict()        # keep track of which files have been read. Key is the filename value is the builder

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\backends\io.py in __init__(self, **kwargs)
     15         self.__built = dict()
     16         self.__source = getargs('source', kwargs)
---> 17         self.open()
     18 
     19     @property

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\backends\hdf5\h5tools.py in open(self)
    682             else:
    683                 kwargs = {}
--> 684             self.__file = File(self.source, open_flag, **kwargs)
    685 
    686     def close(self):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, fs_strategy, fs_persist, fs_threshold, **kwds)
    425                                fapl, fcpl=make_fcpl(track_order=track_order, fs_strategy=fs_strategy,
    426                                fs_persist=fs_persist, fs_threshold=fs_threshold),
--> 427                                swmr=swmr)
    428 
    429             if isinstance(libver, tuple):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
    188         if swmr and swmr_support:
    189             flags |= h5f.ACC_SWMR_READ
--> 190         fid = h5f.open(name, flags, fapl=fapl)
    191     elif mode == 'r+':
    192         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py\_objects.pyx in h5py._objects.with_phil.wrapper()

h5py\_objects.pyx in h5py._objects.with_phil.wrapper()

h5py\h5f.pyx in h5py.h5f.open()

OSError: Unable to open file (unable to open file: name = 'nwb-dataset/', errno = 13, error message = 'Permission denied', flags = 0, o_flags = 0)

If I instead use the following code:

#recording_folder = 'open-ephys-dataset/'
#recording = se.OpenEphysRecordingExtractor(recording_folder)

recording_folder = 'nwb-dataset/experiment_1.nwb'
recording = se.NwbRecordingExtractor(recording_folder)

I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-f36c0438bc0a> in <module>
      3 
      4 recording_folder = 'nwb-dataset/experiment_1.nwb'
----> 5 recording = se.NwbRecordingExtractor(recording_folder)

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\spikeextractors\extractors\nwbextractors\nwbextractors.py in __init__(self, file_path, electrical_series_name)
    157         self._path = str(file_path)
    158         with NWBHDF5IO(self._path, 'r') as io:
--> 159             nwbfile = io.read()
    160             if electrical_series_name is not None:
    161                 self._electrical_series_name = electrical_series_name

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\backends\hdf5\h5tools.py in read(self, **kwargs)
    412                                        % (self.source, self.__mode))
    413         try:
--> 414             return call_docval_func(super().read, kwargs)
    415         except UnsupportedOperation as e:
    416             if str(e) == 'Cannot build data. There are no values.':  # pragma: no cover

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in call_docval_func(func, kwargs)
    403 def call_docval_func(func, kwargs):
    404     fargs, fkwargs = fmt_docval_args(func, kwargs)
--> 405     return func(*fargs, **fkwargs)
    406 
    407 

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\backends\io.py in read(self, **kwargs)
     34             # TODO also check that the keys are appropriate. print a better error message
     35             raise UnsupportedOperation('Cannot build data. There are no values.')
---> 36         container = self.__manager.construct(f_builder)
     37         return container
     38 

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\build\manager.py in construct(self, **kwargs)
    236                 # we are at the top of the hierarchy,
    237                 # so it must be time to resolve parents
--> 238                 result = self.__type_map.construct(builder, self, None)
    239                 self.__resolve_parents(result)
    240             self.prebuilt(result, builder)

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\build\manager.py in construct(self, **kwargs)
    850         if build_manager is None:
    851             build_manager = BuildManager(self)
--> 852         obj_mapper = self.get_map(builder)
    853         if obj_mapper is None:
    854             dt = builder.attributes[self.namespace_catalog.group_spec_cls.type_key()]

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\build\manager.py in get_map(self, **kwargs)
    770             data_type = self.get_builder_dt(obj)
    771             namespace = self.get_builder_ns(obj)
--> 772             container_cls = self.get_cls(obj)
    773         # now build the ObjectMapper class
    774         mapper = self.__mappers.get(container_cls)

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\utils.py in func_call(*args, **kwargs)
    559             def func_call(*args, **kwargs):
    560                 pargs = _check_args(args, kwargs)
--> 561                 return func(args[0], **pargs)
    562         else:
    563             def func_call(*args, **kwargs):

c:\users\julie\miniconda3\envs\spiketutorial\lib\site-packages\hdmf\build\manager.py in get_cls(self, **kwargs)
    699         data_type = self.get_builder_dt(builder)
    700         if data_type is None:
--> 701             raise ValueError("No data_type found for builder %s" % builder.path)
    702         namespace = self.get_builder_ns(builder)
    703         if namespace is None:

ValueError: No data_type found for builder root

Thank you in advance, and please let me know if you require any further information from me.

Best,
Nils

Getting 'numpy/arrayobject.h' file not found when installing klustakwik2

Hello,

I'm trying to set up an environment for spikeinterface, however I'm running into an error when trying to install klustakwik2. Upon running pip install klustakwik2 I get klustakwik2/numerics/cylib/compute_cluster_masks_cy.c:613:10: fatal error: 'numpy/arrayobject.h' file not found.

Obviously, this is a bit of an issue since most of the examples are using klusta. I've looked into it and it seems that include_dirs=[numpy.get_include()] in setup.py gets ignored somehow. I've tried a few things but nothing seems to work... Any suggestions?

By the way, I'm running python 3.7.7 on MacOS and I'm installing the package in a virtual environment (venv).

Here is the full error message:

(SpikeInterface) macbook-pro-4:SpikeInterface fvcoen$ pip install klustakwik2
    [...]
    running build_ext
    building 'klustakwik2.numerics.cylib.compute_cluster_masks_cy' extension
    creating build/temp.macosx-10.14-x86_64-3.7
    creating build/temp.macosx-10.14-x86_64-3.7/klustakwik2
    creating build/temp.macosx-10.14-x86_64-3.7/klustakwik2/numerics
    creating build/temp.macosx-10.14-x86_64-3.7/klustakwik2/numerics/cylib
    clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/fvcoen/Code/Fresh/include -I/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/include/python3.7m -c klustakwik2/numerics/cylib/compute_cluster_masks_cy.c -o build/temp.macosx-10.14-x86_64-3.7/klustakwik2/numerics/cylib/compute_cluster_masks_cy.o
    klustakwik2/numerics/cylib/compute_cluster_masks_cy.c:613:10: fatal error: 'numpy/arrayobject.h' file not found
    #include "numpy/arrayobject.h"
             ^~~~~~~~~~~~~~~~~~~~~
    1 error generated.
    running install
    running build
    running build_py
    running egg_info
    writing klustakwik2.egg-info/PKG-INFO
    writing dependency_links to klustakwik2.egg-info/dependency_links.txt
    writing entry points to klustakwik2.egg-info/entry_points.txt
    writing top-level names to klustakwik2.egg-info/top_level.txt
    reading manifest file 'klustakwik2.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    warning: no files found matching '*.pyxbld' under directory 'klustakwik2/numerics/cylib'
    warning: no previously-included files matching '__pycache__' found under directory '*'
    warning: no previously-included files matching '*.py[co]' found under directory '*'
    writing manifest file 'klustakwik2.egg-info/SOURCES.txt'
    running build_ext
    building 'klustakwik2.numerics.cylib.compute_cluster_masks_cy' extension
    clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/fvcoen/Code/Fresh/include -I/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/include/python3.7m -c klustakwik2/numerics/cylib/compute_cluster_masks_cy.c -o build/temp.macosx-10.14-x86_64-3.7/klustakwik2/numerics/cylib/compute_cluster_masks_cy.o
    klustakwik2/numerics/cylib/compute_cluster_masks_cy.c:613:10: fatal error: 'numpy/arrayobject.h' file not found
    #include "numpy/arrayobject.h"
             ^~~~~~~~~~~~~~~~~~~~~
    1 error generated.
    error: command 'clang' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /Users/fvcoen/Code/Fresh/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/36/95fz5r4d181c_hfh6014lvnr0000gn/T/pip-install-jnxjfbjk/klustakwik2/setup.py'"'"'; __file__='"'"'/private/var/folders/36/95fz5r4d181c_hfh6014lvnr0000gn/T/pip-install-jnxjfbjk/klustakwik2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/36/95fz5r4d181c_hfh6014lvnr0000gn/T/pip-record-srjiswrq/install-record.txt --single-version-externally-managed --compile --install-headers /Users/fvcoen/Code/Fresh/include/site/python3.7/klustakwik2 Check the logs for full command output.

Thanks in advance for your help!
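For reference, this kind of 'numpy/arrayobject.h' file not found failure usually means the compiler cannot see the NumPy C headers. A minimal sketch of one possible workaround, assuming NumPy is already installed in the same environment (the CFLAGS approach is an assumption, not a confirmed fix for this particular build):

    # Possible workaround sketch (assumption): make NumPy's headers visible to the
    # compiler before re-running "pip install klustakwik2", e.g. by exporting
    #   CFLAGS="-I$(python -c 'import numpy; print(numpy.get_include())')"
    # numpy.get_include() returns the directory that contains numpy/arrayobject.h.
    import numpy
    print(numpy.get_include())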

Automatic citation maker

I was discussing with another open source developer who is implementing an automatic citation creator for people who use the framework.

I think we can do something similar with SpikeInterface, where citations for different file formats or spike sorters can be grabbed from some internal property that stores citation formats (BibTeX, etc.). I could try something like this for spikeextractors first.
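As a rough illustration of the idea (the attribute and helper names below are hypothetical; nothing like this exists in spikeextractors yet), each extractor or sorter class could carry a class-level citation attribute that a small helper then collects:

    # Hypothetical sketch: classes declare their citation as BibTeX, and a helper
    # gathers the entries for whatever classes were actually used in an analysis.
    class SomeRecordingExtractor:  # stand-in for a real extractor class
        citation_bibtex = "@misc{some_format, title={Some file format}, year={2020}}"

    class SomeSorter:  # stand-in for a real sorter wrapper
        citation_bibtex = "@article{some_sorter, title={Some sorter}, year={2019}}"

    def collect_citations(used_classes):
        """Return the BibTeX entries declared by the classes that were used."""
        return [cls.citation_bibtex for cls in used_classes
                if getattr(cls, "citation_bibtex", None)]

    print("\n".join(collect_citations([SomeRecordingExtractor, SomeSorter])))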

questions and phy export error

Hello, I have been learning how to use SpikeInterface from the tutorials, and I have a few questions. I want to import data from Blackrock and save it as NWB. (1) When I open the saved NWB file, I see that some of the metadata is left blank or set to defaults. Is there a way, either while writing the file or after it's been written, to rewrite the metadata values in the various objects that were imported? For example, I would like the device, electrode group, and subject metadata to be put in the file; how can I specify this? (2) In the tutorial there is a line that references a metadata field in recordingExtractor.write_recording. I can't seem to find any documentation on this; could you please elaborate on how it is used? For example, can I use it to add all the metadata that would be entered if I called pynwb.NWBFile directly? What exactly is the format the metadata argument is supposed to take? I tried the examples, and they don't seem to work for me.
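For what it's worth, in the spikeextractors releases from around that time se.NwbRecordingExtractor.write_recording accepted an optional metadata dictionary; the key layout below ('NWBFile', 'Subject', 'Ecephys' with Device/ElectrodeGroup lists) follows the nwb-conversion-tools convention and may differ in your installed version, so treat this as a hedged sketch rather than the confirmed API:

    import spikeextractors as se

    # Sketch only: the metadata key names follow the nwb-conversion-tools
    # convention and may not match every spikeextractors release.
    metadata = {
        "NWBFile": {"session_description": "Blackrock recording", "identifier": "sess-001"},
        "Subject": {"subject_id": "M01", "species": "Mus musculus"},
        "Ecephys": {
            "Device": [{"name": "BlackrockArray", "description": "Utah array"}],
            "ElectrodeGroup": [{"name": "group0", "description": "array group",
                                "location": "M1", "device": "BlackrockArray"}],
        },
    }

    recording = se.BlackrockRecordingExtractor("my_recording.ns6")
    se.NwbRecordingExtractor.write_recording(recording, save_path="session.nwb",
                                             metadata=metadata)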

Finally, I have an error that occurs when I try to export a sort to phy for manual curation. I run this command (which basically comes from the tutorial, but where I have added my own data instead of the sample):

st.postprocessing.export_to_phy(recording_cache, agreement_sorting,
                                output_folder='phy_AGR', grouping_property='group',
                                verbose=True, recompute_info=True)

It produces this output:

Converting to Phy format
Disabling 'max_channels_per_template'. Channels are extracted using 'grouping_property'
Number of chunks: 9 - Number of jobs: 1

Extracting waveforms in chunks: 100%|##########| 9/9 [00:26<00:00, 2.95s/it]

Fitting PCA of 3 dimensions on 219639 waveforms
Projecting waveforms on PC
Saving files
Saved phy format to: /media/paul/storage/Python/phy_AGR
Run:

phy template-gui /media/paul/storage/Python/phy_AGR/params.py

But when I try to run phy, I get the error pasted below. I verified that my installation of phy is working by opening a dataset exported directly from Kilosort3, which opens without error as expected. I am running Ubuntu 18.04.

An error has occurred (AssertionError):
Traceback (most recent call last):
File "/home/paul/anaconda3/envs/phy2/bin/phy", line 8, in
sys.exit(phycli())
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/click/core.py", line 1025, in call
return self.main(*args, **kwargs)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/click/core.py", line 955, in main
rv = self.invoke(ctx)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/click/core.py", line 1517, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/click/core.py", line 1279, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/click/core.py", line 710, in invoke
return callback(*args, **kwargs)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/click/decorators.py", line 18, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/phy/apps/init.py", line 159, in cli_template_gui
template_gui(params_path, **kwargs)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/phy/apps/template/gui.py", line 198, in template_gui
controller = TemplateController(model=load_model(params_path), dir_path=dir_path, **kwargs)
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/phylib/io/model.py", line 1178, in load_model
return TemplateModel(**get_template_params(params_path))
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/phylib/io/model.py", line 297, in init
self._load_data()
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/phylib/io/model.py", line 356, in _load_data
self.sparse_templates = self._load_templates()
File "/home/paul/anaconda3/envs/phy2/lib/python3.7/site-packages/phylib/io/model.py", line 652, in _load_templates
assert cols.shape == (n_templates, n_channels_loc)
AssertionError

Thank you for any help!
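The failing assertion compares the shape of the per-template channel index array against (n_templates, n_channels). A quick way to inspect what the exporter actually wrote (file names follow the phy template-gui format; depending on the exporter the index file may be called template_ind.npy or templates_ind.npy, so this is only a diagnostic sketch, not a fix):

    import numpy as np

    phy_folder = "/media/paul/storage/Python/phy_AGR"  # path from the output above
    templates = np.load(f"{phy_folder}/templates.npy")
    template_ind = np.load(f"{phy_folder}/template_ind.npy")  # or templates_ind.npy
    channel_map = np.load(f"{phy_folder}/channel_map.npy")

    # phy expects one row of channel indices per template.
    print("templates:", templates.shape)
    print("template_ind:", template_ind.shape)
    print("channel_map:", channel_map.shape)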

Problem using mountainsort on Ubuntu

Hi everyone,
previously in our lab we used MountainSort to sort our recorded data; currently we are trying to switch to SpikeInterface. I'm having issues running MountainSort with SpikeInterface. To show the problem I used your example code "plot_1_sorters_example.py".
If I run MountainSort like this: sorting_MS4 = ss.run_mountainsort4(recording=recording, **default_ms4_params, output_folder='tmp_MS4', raise_error=False) I get the following error (extended traceback attached as test_mountainsort with raise_error= False.txt):

File "/home/fra/anaconda3/lib/python3.8/site-packages/spikesorters/basesorter.py", line 259, in get_result
raise SpikeSortingError(f"None of the sorting outputs could be loaded")
spikesorters.sorter_tools.SpikeSortingError: None of the sorting outputs could be loaded.

Similarly, if I run it with the raise_error=True option, I get this (extended traceback attached as test_mountainsort with raise_error= True.txt):

RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.

Mountainsort was previously installed via pip in the manner suggested in the manual: pip install ml_ms4alg
I have tried different versions of Ubuntu (16 LTS and 20 LTS) and WSL2, and created different conda environments, but I always get the same errors. I tried other sorters (only Herdingspikes2, SpykingCircus and Tridesclous for the moment) and they worked fine, so the problem seems specific to MountainSort.

test_mountainsort with raise_error= False.txt
test_mountainsort with raise_error= True.txt
plot_1_sorters_example.txt
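For reference, that RuntimeError is the standard multiprocessing message raised when worker processes are started outside a __main__ guard. A minimal sketch of running the sorter under such a guard (a generic Python pattern, using the toy dataset as a stand-in for the data loaded in plot_1_sorters_example.py; not a confirmed MountainSort-specific fix):

    import spikeextractors as se
    import spikesorters as ss

    def main():
        # Toy recording stands in for the data used in plot_1_sorters_example.py.
        recording, sorting_true = se.example_datasets.toy_example(duration=10)
        return ss.run_mountainsort4(recording=recording, output_folder='tmp_MS4')

    # Keeping the call under a __main__ guard lets multiprocessing start workers safely.
    if __name__ == '__main__':
        sorting_MS4 = main()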

Start metapackage

@alejoe91 @colehurwitz
I have started this.
This will work once spikecomparison has been released at least once.
Could you have a look?

I did not make a branch for the moment.
Shall we move some docs here?

mountainsort4 running problems

Hello,

I installed SpikeInterface and mountainsort4 but am having issues running it. Following the tutorial's sorting_MS4 = ss.run_mountainsort4(recording=recording, detect_threshold=6) line, I get two errors at the point where sorting = ml_ms4alg.mountainsort4 is called. First, ml_ms4alg.mountainsort4 got an unexpected keyword argument "verbose"; I fixed this manually by inserting a dummy verbose=False into the signature of ml_ms4alg.mountainsort4 (def mountainsort4(*, recording, detect_sign, clip_size=50, adjacency_radius=-1, detect_threshold=3, detect_interval=10, num_workers=None, verbose=False):). After fixing this, the second error is AttributeError: 'WhitenRecording' object has no attribute 'getChannelIds'. Any idea why this is happening?
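The getChannelIds error suggests the installed ml_ms4alg release still expects the old camelCase spikeextractors API, while current spiketoolkit recording objects expose snake_case methods (get_channel_ids, etc.). Upgrading ml_ms4alg is probably the cleaner route; purely as a hypothetical illustration of the mismatch, a thin shim could translate the calls (the set of methods ml_ms4alg actually needs is an assumption):

    class CamelCaseRecordingShim:
        """Hypothetical adapter exposing old camelCase names on a new recording object."""

        def __init__(self, recording):
            self._recording = recording

        def getChannelIds(self):
            return self._recording.get_channel_ids()

        def getSamplingFrequency(self):
            return self._recording.get_sampling_frequency()

        def getNumFrames(self):
            return self._recording.get_num_frames()

        def getTraces(self, channel_ids=None, start_frame=None, end_frame=None):
            return self._recording.get_traces(channel_ids=channel_ids,
                                              start_frame=start_frame,
                                              end_frame=end_frame)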

Missing dependency

When installing SI from this location, the requests package is not installed.
spikesorters/ironclust/mdaio.py wants it.

Q: is release coming?

ATM we're at 0.9.1-39-g6da0e53, with 0.9.1 released a good number of months back. Just wanted to check if a new release is being brewed, possibly to chase the fresh release of spikeextractors? ;)

Slow curation and export

I'm having trouble with the curation and export-to-Phy steps taking a really long time. I'm sorting a 45-minute recording with 32 channels using Mountainsort4. The sorting takes ~11 minutes. Curating based on firing rates took several seconds, but curating based on SNR took 6800 seconds, and the export to Phy step is taking several hours. I'm running this on a local machine (16 GB RAM, i7 MacBook Pro). Any idea how to improve this performance? (I'm also running the same code on a cluster, and those jobs don't finish after running for 12 hours...)
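One thing that may help (an assumption, since I don't know the exact versions involved): both the SNR metric and the Phy export re-extract waveforms, and the spiketoolkit postprocessing functions from that era accepted a max_spikes_per_unit cap plus a recompute_info flag, so computing waveforms once from a subsample and then exporting without recomputing might cut the runtime considerably. A hedged sketch, with recording/sorting standing in for the cached recording and the Mountainsort4 output:

    import spiketoolkit as st

    # Hedged sketch: parameter names are from spiketoolkit ~0.7 and may differ
    # in the installed version; check the signatures before relying on them.
    st.postprocessing.get_unit_waveforms(recording, sorting, max_spikes_per_unit=300)
    st.postprocessing.export_to_phy(recording, sorting, output_folder='phy_export',
                                    recompute_info=False, verbose=True)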

