Comments (9)

alejoe91 commented on June 13, 2024

Hi! Can you paste the code that you are running?

paul-aparicio commented on June 13, 2024

Thanks for your response, sorry for not being clearer, this was the command that elicited the error:

In[25]: sorting_IC = ss.run_ironclust(recording_cache, output_folder='results_split_ironclust', grouping_property='group', parallel=True)

alejoe91 commented on June 13, 2024

Also the previous part please :)

What operating system are you using?

paul-aparicio commented on June 13, 2024

Ahh. I was basically running through this tutorial to learn (https://github.com/SpikeInterface/spiketutorials/blob/master/NWB_Developer_Breakout_Session_May2020/SpikeInterface_Tutorial.ipynb), but I changed it to use my own data. I'm running this on a Windows 10 machine with an Anaconda installation and Jupyter. The non-plotting commands are pasted below:

import spikeinterface
import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw
import matplotlib.pyplot as plt
import numpy as np

rawdatafile = 'TY20210308_signalCheck_morning.ns6'
recording = se.BlackrockRecordingExtractor(rawdatafile)

# attach probe geometry, band-pass filter, drop noisy channels, common median reference
recording_prb = recording.load_probe_file('TY_array.prb')
recording_f = st.preprocessing.bandpass_filter(recording_prb, freq_min=250, freq_max=7500)
recording_rm_noise = st.preprocessing.remove_bad_channels(recording_f, bad_channel_ids=[34, 50, 52, 67, 81])
recording_cmr = st.preprocessing.common_reference(recording_rm_noise, reference='median')

# cache the preprocessed traces to disk and dump the extractor for later reuse
recording_cache = se.CacheRecordingExtractor(recording_cmr)
recording_cache.move_to('filtered.dat')
recording_cache.dump_to_pickle('recording.pkl')

ss.IronClustSorter.set_ironclust_path('F:/MLToolBoxes/ironclust')
Setting IRONCLUST_PATH environment variable for subprocess calls to: F:\MLToolBoxes\ironclust

ss.Kilosort2Sorter.set_kilosort2_path('F:/MLToolBoxes/Kilosort-2.0/')
ss.Kilosort3Sorter.set_kilosort3_path('F:/MLToolBoxes/Kilosort/')
ss.WaveClusSorter.set_waveclus_path('F:/MLToolBoxes/wave_clus/')
Setting KILOSORT2_PATH environment variable for subprocess calls to: F:\MLToolBoxes\Kilosort-2.0
Setting KILOSORT3_PATH environment variable for subprocess calls to: F:\MLToolBoxes\Kilosort
Setting WAVECLUS_PATH environment variable for subprocess calls to: F:\MLToolBoxes\wave_clus

sorting_IC = ss.run_ironclust(recording_cache, output_folder='results_split_ironclust', grouping_property='group', parallel=True)
print(f'Ironclust found {len(sorting_IC.get_unit_ids())} units')

(As a separate issue, the only sorters that show up as installed after the above commands are:
In [24]: ss.installed_sorters()
Out[25]: ['ironclust', 'kilosort2', 'waveclus']

I'm not sure why Kilosort3 doesn't show up as installed? There was no error output.)
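
One hypothetical check (not from the tutorial) would be to confirm that the environment variable actually got set and that the folder exists, since installed_sorters() only lists the sorters it can detect:

import os
print(os.environ.get('KILOSORT3_PATH'))           # expect F:\MLToolBoxes\Kilosort
print(os.path.isdir('F:/MLToolBoxes/Kilosort/'))  # expect True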

paul-aparicio commented on June 13, 2024

If it matters at all, I tried reproducing this on Linux (Ubuntu 18.04) and got the same error as above:

AttributeError: 'BlackrockRawIO' object has no attribute '__read_nsx_header_variant_a'

alejoe91 commented on June 13, 2024

Hi! What versions of neo and SpikeInterface are you using? The problem seems to be related to the Blackrock input. @samuelgarcia any clue?

paul-aparicio commented on June 13, 2024

I'm running neo 0.9.0 and spikeextractors 0.9.6
spikeinterface 0.12.0

On Windows, I installed it with pip, on Linux I installed it by downloading from GitHub and running python setup.py install
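
For what it's worth, these version numbers can be double-checked from the same environment with something like:

import neo, spikeextractors, spikeinterface
print(neo.__version__, spikeextractors.__version__, spikeinterface.__version__)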

samuelgarcia commented on June 13, 2024

This error is strange. It seems related to neo and joblib.
Does the same code work without parallel=True?
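
If pickling is indeed the culprit, the likely mechanism is Python name mangling: parallel=True presumably requires serializing the recording for the per-group jobs, and neo's BlackrockRawIO seems to keep bound private methods (like __read_nsx_header_variant_a) on the instance, which pickle restores by their unmangled names. A minimal, self-contained sketch with a hypothetical Reader class (not neo's actual code) that shows the same kind of failure:

import pickle

class Reader:
    # hypothetical stand-in for the pattern in BlackrockRawIO: a dict of
    # bound private methods stored on the instance
    def __init__(self):
        # name mangling stores this method on the class as _Reader__read_variant_a
        self.__variant_readers = {'2.2': self.__read_variant_a}

    def __read_variant_a(self):
        return 'header variant a'

r = Reader()
try:
    pickle.loads(pickle.dumps(r))
    print('pickle round-trip succeeded on this Python version')
except Exception as exc:
    # on affected Python/pickle combinations this prints something like:
    # AttributeError: 'Reader' object has no attribute '__read_variant_a'
    print(f'{type(exc).__name__}: {exc}')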

paul-aparicio commented on June 13, 2024

Hello! On the Linux install, I restarted the Jupyter kernel and reran the code without the parallel argument, and it worked. To verify, I restarted the kernel again and reran with parallel=True; the same error as before occurred. Thanks for your help.
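
For reference, the call that worked is the same as before with parallel simply left out (i.e. at its default):

sorting_IC = ss.run_ironclust(recording_cache, output_folder='results_split_ironclust', grouping_property='group')
print(f'Ironclust found {len(sorting_IC.get_unit_ids())} units')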
