
Jadhav-2016-Data-Analysis


Data Description

Data is from:

Jadhav, S. P., Rothschild, G., Roumis, D. K. & Frank, L. M. Coordinated Excitation and Inhibition of Prefrontal Ensembles during Awake Hippocampal Sharp-Wave Ripple Events. Neuron 90, 113–127 (2016).

Raw Data Format

Data is in MATLAB format (.mat files). See the Loren Frank Data Format Description Repository for more information.

Installation

  1. Install miniconda (or anaconda) if it isn't already installed. Type into bash:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
bash miniconda.sh -b -p $HOME/miniconda
export PATH="$HOME/miniconda/bin:$PATH"
hash -r
  2. Go to the local repository (.../Jadhav-2016-Data-Analysis) and install the conda environment for the repository. Type into bash:
conda update -q conda
conda info -a
conda env create -f environment.yml
source activate Jadhav-2016-Data-Analysis
python setup.py develop
  3. Finally, to verify that the code has been installed correctly, run the tests:
pytest

Data Folders

Raw data should be placed in Jadhav-2016-Data-Analysis/Raw-Data. Each animal should have its own folder in the Raw-Data folder and the definition for that animal's folder should be placed in src.parameters.

Data generated by the code will be placed in Jadhav-2016-Data-Analysis/Processed-Data.
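
For illustration, an animal's folder definition in src.parameters might look like the sketch below; the field names and directory layout here are assumptions, not the repository's actual definitions.

```python
from collections import namedtuple

# Hypothetical sketch of an animal definition in src.parameters.
# The actual field names and paths in the repository may differ.
Animal = namedtuple('Animal', ['directory', 'short_name'])

ANIMALS = {
    'HPa': Animal(directory='Raw-Data/HPa', short_name='HPa'),
}
```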


Issues

Figure out how they do their referencing

There are tetrodes labeled CA1Ref and PFCRef, but it's not clear whether both references are used, only one is used, or whether this is just a marker for a potential reference electrode.

Criterion for neuron exclusion

Currently, I am excluding all CA1 neurons with zero spikes. The problem is that for low spiking neurons, the training data (times when the rat is running > 4 cm/s) itself might not have spikes.
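
A stricter criterion, sketched below, would be to exclude neurons with no spikes during the training periods themselves; all names are hypothetical placeholders, not objects from this repository.

```python
import numpy as np

def has_training_spikes(spike_times, speed, speed_times, speed_threshold=4.0):
    """True if the neuron fires at least once during the training data,
    i.e. while the rat is running faster than `speed_threshold` cm/s.
    All inputs are hypothetical 1-D arrays (seconds, cm/s, seconds)."""
    speed_at_spikes = np.interp(spike_times, speed_times, speed)
    return bool(np.any(speed_at_spikes > speed_threshold))
```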

Compute coherence by ripple type

  • Coherence during replay events with spiking vs. without spiking
  • Ripples from the first half of the experiment vs. the second half
  • Forward vs. reverse replay
  • Inbound vs. outbound trials
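
A minimal sketch of the comparison is below. It assumes equal-length, ripple-triggered LFP segments that have already been grouped (e.g. spiking vs. no-spiking events); the variable names are placeholders.

```python
import numpy as np
from scipy.signal import csd, welch

def ripple_triggered_coherence(ca1_segments, pfc_segments, sampling_frequency=1500):
    """Trial-averaged magnitude-squared coherence between CA1 and PFC.
    Cross- and auto-spectra are averaged across ripple segments first,
    which requires every segment to have the same length."""
    cross_spectra, ca1_power, pfc_power = [], [], []
    for ca1, pfc in zip(ca1_segments, pfc_segments):
        n_samples = len(ca1)
        freqs, s_xy = csd(ca1, pfc, fs=sampling_frequency, nperseg=n_samples)
        _, s_xx = welch(ca1, fs=sampling_frequency, nperseg=n_samples)
        _, s_yy = welch(pfc, fs=sampling_frequency, nperseg=n_samples)
        cross_spectra.append(s_xy)
        ca1_power.append(s_xx)
        pfc_power.append(s_yy)
    coherence = (np.abs(np.mean(cross_spectra, axis=0)) ** 2
                 / (np.mean(ca1_power, axis=0) * np.mean(pfc_power, axis=0)))
    return freqs, coherence

# e.g. compare groups:
# freqs, coh_spiking = ripple_triggered_coherence(ca1_spiking, pfc_spiking)
# freqs, coh_no_spiking = ripple_triggered_coherence(ca1_no_spiking, pfc_no_spiking)
```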

Which frequencies should we focus on?

There are a number of frequencies that we could target with our spectral analysis:

  • Ripple frequency (150-250 Hz)
  • High Gamma (50-100 Hz)
  • Low Gamma (30-50 Hz)
  • Beta (12-30 Hz)
  • Theta/Alpha (4-12 Hz)

Loren thinks that Low Gamma and Theta are of more interest in a coherence analysis. He's worried that increased ripple frequency power simply reflects an increase in spiking from increased input to both areas.

He also suggested that we look at spectrograms in PFC and maybe use that to determine what frequencies to look at.
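
As a first pass at the spectrogram suggestion, an ordinary windowed spectrogram of a PFC LFP trace could look like the sketch below; the sampling rate and window length are arbitrary choices here, and the repository's spectral.py presumably has its own, more careful routines.

```python
import numpy as np
from scipy.signal import spectrogram

def pfc_spectrogram(pfc_lfp, sampling_frequency=1500, window_duration=0.5):
    """Windowed spectrogram of a hypothetical 1-D PFC LFP trace, in dB."""
    n_per_segment = int(window_duration * sampling_frequency)
    freqs, times, power = spectrogram(
        pfc_lfp, fs=sampling_frequency,
        nperseg=n_per_segment, noverlap=n_per_segment // 2)
    return freqs, times, 10 * np.log10(power)
```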

Reproduce analysis from Jadhav paper

In order to make sure I understand the data formatting and analysis in the original paper, I need to reproduce at least one analysis from the Jadhav paper. Two candidate analyses are:

  • Figure 3 - show PFC activity locked to the time of the sharp-wave ripple
  • Figure 6 - the GLM decoding of PFC spiking using the CA1 ensemble
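
For the Figure 6 candidate, a bare-bones version of the decoding step might be a Poisson GLM predicting one PFC neuron's binned spike counts from the binned counts of the CA1 ensemble, as sketched below; the binning, cross-validation, and the paper's exact design are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

def fit_pfc_glm(pfc_counts, ca1_counts):
    """Poisson GLM predicting a PFC neuron's spike counts from CA1 counts.
    `pfc_counts` is a hypothetical (n_bins,) array and `ca1_counts` a
    hypothetical (n_bins, n_ca1_neurons) array of binned spike counts."""
    design = sm.add_constant(ca1_counts)
    return sm.GLM(pfc_counts, design, family=sm.families.Poisson()).fit()
```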

Quantify behavior

The animals start with a novel track and then learn over days. I need to characterize:

  • time between reward wells for each session
  • amount rewarded for each session
  • how these change relative to inbound / outbound trials and over sessions/days

We should expect the time between reward wells to decrease over sessions/days and the amount rewarded to increase over sessions/days. Inbound trials should be faster than outbound trials.
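
A sketch of the first measure, assuming a hypothetical tidy table of reward-well events (the column names are placeholders, not the repository's):

```python
import pandas as pd

def time_between_wells(well_events):
    """Median time between consecutive reward-well visits per session and
    trial type. `well_events` is a hypothetical DataFrame with columns
    ['day', 'epoch', 'trial_type', 'time'] (time in seconds)."""
    well_events = well_events.sort_values('time').copy()
    well_events['duration'] = (well_events
                               .groupby(['day', 'epoch'])['time']
                               .diff())
    return (well_events
            .groupby(['day', 'epoch', 'trial_type'])['duration']
            .median())
```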

Compute spectral granger causality

See:

Geweke, J. 1982 Measurement of linear dependence and feedback between multiple time series. Journal of the American Statistical Association 77, 304-313.
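
A parametric sketch of Geweke's frequency-domain measure via a bivariate VAR fit is below; the model order, sampling rate, and the use of statsmodels are illustrative choices, not necessarily the method this repository will use.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def geweke_spectral_causality(x, y, order=10, sampling_frequency=1500, n_freqs=256):
    """Geweke (1982) spectral Granger causality from y to x, from a
    bivariate VAR of order `order` (no model-order selection or windowing)."""
    results = VAR(np.column_stack((x, y))).fit(order)
    coefs = results.coefs                    # (order, 2, 2) lag coefficients
    sigma = np.asarray(results.sigma_u)      # (2, 2) innovation covariance

    freqs = np.linspace(0, sampling_frequency / 2, n_freqs)
    causality = np.empty(n_freqs)
    for i, f in enumerate(freqs):
        # Transfer function H(f) = A(f)^-1, A(f) = I - sum_k A_k exp(-2*pi*i*f*k/fs)
        A = np.eye(2, dtype=complex)
        for lag in range(order):
            A -= coefs[lag] * np.exp(-2j * np.pi * f * (lag + 1) / sampling_frequency)
        H = np.linalg.inv(A)
        S = H @ sigma @ H.conj().T           # spectral matrix
        # Intrinsic power of x after removing the correlated part of the innovations
        H_tilde_xx = H[0, 0] + (sigma[0, 1] / sigma[0, 0]) * H[0, 1]
        intrinsic = (H_tilde_xx * sigma[0, 0] * np.conj(H_tilde_xx)).real
        causality[i] = np.log(S[0, 0].real / intrinsic)
    return freqs, causality
```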

Better function documentation

  • data_processing.py
  • ripple_detection.py
  • spectral.py
  • ripple_decoding.py
  • analysis.py
  • test_data_processing.py
  • test_ripple_detection.py
  • test_spectral.py
  • test_ripple_decoding.py
  • test_analysis.py

DIO pulses are corrupted for a couple of animals

The DIO file has six channels corresponding to motion detection (the animal is near the reward well) or reward delivery for each of the three reward wells located at the end of each arm. So the animals' position (as marked by infrared diodes on top of the microdrive preamp) when these channels are active should correspond to the end of the arms where the reward well is located. However, this is not always the case:

[Figure: Blue lines indicate the rat's position in the W-track. Red dots correspond to the rat's position when either motion detection or reward delivery is triggered. Data is from rat 'HPa', day 8, epoch 2. See the python notebook UnderstandingDIO for more plots and code.]
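
A simple diagnostic, sketched below, is to interpolate the rat's position at each DIO trigger time and measure the distance to the corresponding reward well; every name here is a hypothetical placeholder for the contents of the .mat files.

```python
import numpy as np

def trigger_distance_to_well(trigger_times, position_time, x, y, well_xy):
    """Distance (in position units) from the rat's interpolated position at
    each DIO trigger to the reward well at `well_xy` = (well_x, well_y)."""
    x_at_trigger = np.interp(trigger_times, position_time, x)
    y_at_trigger = np.interp(trigger_times, position_time, y)
    return np.hypot(x_at_trigger - well_xy[0], y_at_trigger - well_xy[1])
```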

I emailed Demetris and he confirmed there might be an issue:

Unfortunately you will have to ask shantanu about the other issues as I seem to remember him saying something about his dio pules being corrupted for a couple of the animals leading to the displaced position/time issue that you're seeing (so don't worry you're not crazy :)). By now, he may have manually corrected these pulses or reconstructed the dio-like info from the position or something.

I will email Shantanu and see if he has any solutions.

Categorize ripple types

  • Inbound vs. outbound trials
  • Prospective vs. retrospective replay
  • Right vs. left turns
  • Local vs. remote environment (remote = some other environment the animal has previously been in)

Use ripple detection method of Kay 2016?

Our version of the ripple detection code follows the description in
Karlsson, M.P., Frank, L.M., 2009. Awake replay of remote experiences in the hippocampus. Nature Neuroscience 12, 913–918. doi:10.1038/nn.2344:

SWRs were identified on the basis of peaks in the LFP recorded from one channel from each tetrode in the CA3 and CA1 cell layers. The raw LFP data were bandpass-filtered between 150–250 Hz, and the SWR envelope was determined using a Hilbert transform. The envelope was smoothed with a Gaussian (4-ms s.d.). We initially identified SWR events as sets of times when the smoothed envelope stayed above 3 s.d. of the mean for at least 15 ms on at least one tetrode. We defined the entire SWR as including times immediately before and after that threshold crossing event during which the envelope exceeded the mean. Overlapping SWRs were combined across tetrodes, so many events extended beyond the SWR seen on a single tetrode.

This is what Jadhav 2016 uses as well. However, a different detection method is presented in Kay, K., Sosa, M., Chung, J.E., Karlsson, M.P., Larkin, M.C., Frank, L.M., 2016. A hippocampal network for spatial coding during immobility and sleep. Nature 531, 185–190. doi:10.1038/nature17144:

Detection of SWRs was prerequisite for all data analysed in this study, and was performed only when at least three CA1 cell layer recordings were available. Offline, a multisite average approach was used to detect SWRs58. Specifically, LFPs from all available CA1 cell layer tetrodes were filtered between 150–250 Hz, then squared and summed across tetrodes. This sum was smoothed with a Gaussian kernel (σ = 4 ms) and the square root of the smoothed sum was analysed. SWRs were detected when the signal exceeded 2 s.d. of the recording epoch mean for at least 15 ms. SWR periods were then defined as the periods, containing the times of threshold crossing, in which the power trace exceeded the mean. SWR onset was defined as the start of a SWR period. Detection of SWRs was performed only when subjects’ head speed was <4 cm s−1. For SWR-triggered spike raster plots and PSTH plots, a 0.5 s exclusion period was imposed to isolate SWRs occurring only after non-SWR periods; otherwise, analyses of SWRs included all detected SWRs.
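
For reference, a minimal sketch of the Kay et al. (2016) description quoted above follows. This is not the repository's ripple_detection.py; the FIR filter design is a generic choice, and restricting detection to head speeds below 4 cm/s is assumed to happen separately.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import filtfilt, firwin

def _true_segments(is_true):
    """(start, end) sample indices for each contiguous run of True."""
    padded = np.concatenate(([False], is_true, [False])).astype(int)
    changes = np.diff(padded)
    return list(zip(np.nonzero(changes == 1)[0], np.nonzero(changes == -1)[0]))

def kay_multisite_swr_detection(lfps, sampling_frequency=1500,
                                zscore_threshold=2.0, minimum_duration=0.015):
    """Multisite SWR detection on an (n_time, n_tetrodes) array of CA1 LFPs.
    Returns a sorted list of (start_time, end_time) pairs in seconds."""
    # Band-pass filter each tetrode between 150 and 250 Hz
    taps = firwin(101, [150, 250], pass_zero=False, fs=sampling_frequency)
    filtered = filtfilt(taps, 1.0, lfps, axis=0)

    # Square, sum across tetrodes, smooth with a 4-ms Gaussian, take the square root
    summed_power = (filtered ** 2).sum(axis=1)
    smoothed = gaussian_filter1d(summed_power, sigma=0.004 * sampling_frequency)
    power_trace = np.sqrt(smoothed)

    # Candidate events: above mean + 2 s.d. for at least 15 ms
    mean, std = power_trace.mean(), power_trace.std()
    min_samples = int(minimum_duration * sampling_frequency)
    candidates = [
        (start, end) for start, end
        in _true_segments(power_trace > mean + zscore_threshold * std)
        if end - start >= min_samples]

    # Extend each candidate to the surrounding period in which the trace
    # exceeds the mean; overlapping candidates collapse to the same period
    above_mean = _true_segments(power_trace > mean)
    events = {next(segment for segment in above_mean
                   if segment[0] <= start and end <= segment[1])
              for start, end in candidates}
    return sorted((start / sampling_frequency, end / sampling_frequency)
                  for start, end in events)
```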

Having trouble finding all the LFP recordings

In the EEG folder, there are several types of files:

  • files that correspond to the raw LFP recordings ({animal}eeg{day}-{epoch}-{tetrode}.mat)
  • files that correspond to the ground LFP recordings ({animal}eeggrnd{day}-{epoch}-{tetrode}.mat)
  • files that correspond to the processed LFP in different frequency bands (e.g. {animal}delta{day}-{epoch}-{tetrode}.mat)

But there doesn't seem to be a file for each tetrode. I'm not sure why.

To further complicate things, there is an OriEEG folder, which I assumed meant "Original EEG" files, but it seems to include only tetrodes 17 and above. These seem to be only PFC electrodes (at least for the first animal).

I'm going to try to:

  1. See if the Filter framework code gives any clues
  2. Maybe email Demetris from Loren's lab
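
In the meantime, a quick inventory of which tetrodes actually have a raw eeg file for a given day and epoch could look like this sketch; the directory layout and the zero-padded filename pattern are assumptions based on the naming convention above.

```python
import re
from pathlib import Path

def tetrodes_with_eeg(raw_data_directory, animal, day, epoch):
    """Sorted tetrode numbers that have a raw LFP file for `day`/`epoch`,
    assuming files named like {animal}eeg{day:02d}-{epoch}-{tetrode}.mat
    inside Raw-Data/{animal}/EEG."""
    pattern = re.compile(rf'{animal}eeg{day:02d}-{epoch}-(\d+)\.mat$')
    eeg_directory = Path(raw_data_directory) / animal / 'EEG'
    return sorted(int(match.group(1))
                  for mat_file in eeg_directory.glob('*.mat')
                  if (match := pattern.search(mat_file.name)))
```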

Is the encoding model fitting properly?

  • Use occupancy-normalized histograms overlaid on the model fit (see the sketch below)
  • Maybe simulate data from the model and compare it to the real data
  • Maybe compare to Xinyi's data
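
A sketch of the occupancy-normalized histogram, assuming linearized position (all inputs are hypothetical and the bin size is arbitrary):

```python
import numpy as np

def occupancy_normalized_rate(spike_positions, positions, dt, bin_size=2.0):
    """Firing rate (Hz) per position bin: spike counts divided by time spent.
    `spike_positions` and `positions` are hypothetical 1-D arrays of
    linearized position (cm); `dt` is the position sampling interval (s)."""
    bins = np.arange(positions.min(), positions.max() + bin_size, bin_size)
    spike_counts, _ = np.histogram(spike_positions, bins=bins)
    occupancy_seconds = np.histogram(positions, bins=bins)[0] * dt
    # Leave unvisited bins as NaN instead of dividing by zero
    return bins, np.divide(
        spike_counts, occupancy_seconds,
        out=np.full(spike_counts.shape, np.nan, dtype=float),
        where=occupancy_seconds > 0)
```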
