
ezTrack's Introduction

Behavior Tracking with ezTrack

This page hosts iPython/Jupyter notebook files that can be used to track the location, motion, and freezing of an animal. For clarity, these processes are described as two modules: one for tracking an animal's location, the other for analyzing freezing. If you are unfamiliar with iPython/Jupyter Notebook, please see Getting Started.


Please cite ezTrack if you use it in your research:

Pennington ZT, Dong Z, Feng Y, Vetere LM, Page-Harley L, Shuman T, Cai DJ (2019). ezTrack: An open-source video analysis pipeline for the investigation of animal behavior. Scientific Reports, 9(1), 19979.

Check out the ezTrack wiki

For instructions on installation and use, see the ezTrack wiki.

New Feature Alerts:

  • 04/11/2021: ezTrack's location tracking module now has an algorithm for removing wires.
  • 07/20/2020: ezTrack now supports spatial downsampling of videos! You can reduce the resolution of a video to greatly speed processing. Processing high-definition videos on older laptops/desktops can be slow; downsampling makes it much faster.
  • 07/19/2020: Location tracking module now allows the user to manually define the frame numbers used when selecting the reference. This is useful if a baseline portion of the video without the animal will be used as the reference, and it resolves an issue that arose when an alternative reference video was a different length than the video being processed.
  • 06/16/2020: Location tracking module now allows the user to define regions of the frame to be excluded from the analysis. This is useful when an extraneous object enters the periphery, or even the center, of the field of view.

Location Tracking Module

The location tracking module allows for the analysis of a single animal's location on a frame-by-frame basis. In addition to providing the user with the ability to crop the portion of the video frame in which the animal will be, it also allows the user to specify regions of interest (e.g. left and right sides) and provides tools to quantify the time spent in each region, as well as the distance travelled.
(Schematic: location tracking module)
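ezTrack writes this per-frame tracking to a .csv file. As a minimal sketch of downstream use (the filename and column names here are assumptions; check the header of your own output file):

    import pandas as pd

    # Load the per-frame output written by the location tracking notebook.
    # Filename and column names are assumptions; inspect your own CSV header.
    df = pd.read_csv('session1_LocationOutput.csv')

    print('total distance (pixels):', df['Distance_px'].sum())
    print('frames spent in each ROI:')
    print(df[['left', 'right']].sum())  # hypothetical ROI column names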

Freeze Analysis Module

The freeze analysis module allows the user to automatically score an animal's motion and freezing while in a conditioning chamber. It was designed with side-view recording in mind, and with the intention of being able to crop the top of the video frame to remove the influence of fiberoptic/miniscope cables. In cases where no cables are used, recording from above the animal is also possible.
(Schematic: freeze analysis module)

License

This project is licensed under GNU GPLv3.

ezTrack's People

Contributors

denisecai, phildong, zachpenn


ezTrack's Issues

Lag in the location marker in the individual analysis video generated in step 10

Hello,
I've noticed there's a lag in the marker on the video generated in the individual tracker analysis, step 10. The attached screenshot is an example of this: the animal has moved into a new region of interest, but the marker in the video lags behind. Would the frames where the animal is in the next region but the marker lags behind be counted as TRUE for the first region, or is the analysis more accurate than the rendered video?

This could also interfere with analyses of zones the animal circumnavigates but doesn't enter, as the marker crosses the unentered zone while it catches up to the animal (second picture).

TypeError when loading packages

I keep getting errors when loading the initial packages...


I'm getting this problem in V1.1 and in the version uploaded in May 2021. I've reinstalled everything from scratch, and my ezTrack conda environment is definitely active and includes holoviews and bokeh...

Frames calculated by ezTrack

Hello, I've been trying to run behavioral videos through ezTrack and have been using the frame rate to calculate the start/end frames. One thing I've noticed is that when I set the end frame to None and then run step 3, the total frame count ezTrack calculates is much higher than the total I get for the same video in Matlab. Also, the 'nominal fps' is different from the frame rate my computer reports for the video. What would this difference mean for analyzing the video? If I put an end frame of 15,000 but ezTrack calculates the total frames to be 600,000+, would this mean it's not actually analyzing the full video? I would appreciate any insight you have on this, thank you!
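For what it's worth, one way to cross-check these numbers is to query OpenCV (the library ezTrack uses to read video) directly; the frame count and fps it reports come from the file's metadata and can disagree with other players for some codecs:

    import cv2

    cap = cv2.VideoCapture('/path/to/video.avi')  # hypothetical path
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()

    print(f'frames: {total}, fps: {fps:.2f}, implied duration: {total / fps:.1f} s')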

frames do not match

Hi,

I'll get straight into my question:

Part 3 of FreezeAnalysis_Individual tells me that my video has about 61,000 frames, which matches what I calculated by hand (60 FPS, ~17 min video).

I had set part 2 to analyze my entire video by setting 'end' to None. However, the x-axes on the motion graphs displayed in parts 4b and 5b only go a little past 30,000 frames. I'm confused, because this means I'm missing about 30,000 frames of my video.

Throughout my video, 6 shocks are administered. What adds to my confusion is that I can see the points at which each shock was administered (6 huge bursts of motion) even though about 30,000 frames are missing.

I want to create a binned summary of the animal's activity for the 30 seconds before each shock, but my idea of when the shocks were administered, based on the original video, does not match what is displayed on the motion graphs in terms of frames.

I am new to python and loving ezTrack so far! Any help would be greatly appreciated!

Thank you!!
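A minimal sketch of such a pre-shock binned summary, assuming a per-frame 'Motion' column in the output CSV and known shock-onset frames (filename and onsets below are placeholders):

    import pandas as pd

    fps = 60
    df = pd.read_csv('session_FreezingOutput.csv')  # hypothetical filename
    shock_frames = [6000, 16000, 26000]             # hypothetical onsets

    for onset in shock_frames:
        pre = df['Motion'].iloc[max(onset - 30 * fps, 0):onset]  # 30 s window
        print(f'frame {onset}: mean pre-shock motion = {pre.mean():.1f}')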

Failure in LocationTracking_BatchProcess

I'm trying to use the location tracking batch process on my open field videos, but received a "TypeError".
All the files are in the same folder and have the same properties (FPS, camera position, etc.).
The code runs without problems until step 7: it processes the first video, but fails to analyze the next and gives me the following error message.

I've attached screenshots of all my cells, just in case.

I'm a beginner in this Python world, so thanks so much for any help.
(Attached: screenshots of cells 1 through 7 and the step 7 error.)

Extract and verify Reference images in batch

I've found myself in the unfortunate position of needing "custom" settings to extract the reference image for a few videos for which it wasn't possible to use the default settings, a static image, or the same np.arange() for all videos processed in a batch. I've come across #16, and I agree with the arguments laid out there (mostly that this isn't really a batch process if one has to customize to some extent), but I remain attracted by the prospect of getting all the interactive customization work done before proceeding with the location tracking step.
Indeed, I'm sometimes working with rather lengthy videos, and being able to do this "prep" work beforehand would let me get everything ready before starting Batch_Process() (this is for location tracking) and let it run while I work on something else in the meantime. If I were to process these individually, I would have some awkward few-minute waits between videos (long enough to waste time, but too short to do something else).

I thus tried to adapt the functions in LocationTracking_Functions.py to batch-extract and test the reference image, and to allow feeding in a custom dict of np.arange values for each video that requires one during the Batch_Process() step. The resulting workflow is thus:

  1. Extract the reference image for all videos in the batch using Reference()'s default settings.
  2. Set a custom np.arange() in a dedicated frames_dict for those videos that require it.
  3. Repeat steps 1-2 until a satisfactory reference image is extracted consistently.
  4. Run Batch_Process() with this custom frames_dict.

I acknowledge that this fits my own needs and does not necessarily fit into your "vision" for a canonical ezTrack pipeline, but I thought I would share and see if there's any interest in getting this included (a minimal sketch of the usage follows below). If interested, the overall "idea" can be found at the branch below:
https://github.com/florianduclot/ezTrack/tree/WIP_Batch_Reference
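For illustration only, usage of the proposed interface might look like the sketch below; the frames_dict contents and the Batch_Process() signature are hypothetical, so see the branch for the real code:

    import numpy as np

    # Hypothetical per-video reference frames: videos not listed would fall
    # back to Reference()'s default frame sampling.
    frames_dict = {
        'mouse07.avi': np.arange(0, 1200),    # baseline-only reference
        'mouse12.avi': np.arange(500, 2000),  # skip experimenter at start
    }

    # Hypothetical call on the WIP branch:
    # lt.Batch_Process(video_dict, tracking_params, bin_dict, region_names,
    #                  frames_dict=frames_dict)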

Problem in defining the ROI

Hello,
I am trying to define the ROI in step 6. When I double-click on the image, the blue dot doesn't appear on the image, as shown in the attached picture. Do you have any suggestions about this problem? Thanks in advance for any help you are able to provide.

Batch processing capability

Hello, I have been using the location tracking for individual files successfully, but I'm interested in using batch processing on sets of videos that share the same ROIs and masked regions, since the camera was set up identically for all videos. However, the frame rates and the start/end frames differ slightly between videos. Is there any way I can supply a vector of start and end frames in the same order as the videos in the folder I'm trying to run? This would be really helpful, since the animal is placed into the frame at a different time in each video.
I would appreciate any thoughts you may have on this - thank you!

How to ignore a big clip on my cable (smaller than the mouse)

(Screenshot: mouse, top view)
Hi,

I have a weight-compensation mechanism that pulls the cable at a close distance from the mouse, and it appears in top-view videos as a big black point that ezTrack detects. It then messes up the COM (center of mass) detection. Side videos are perfectly fine, but I was wondering if you could think of a particular section of the code that I could look at to discard any "mass" smaller than a certain extent? I guess it could be tricky to disambiguate two different "blobs" of pixel changes, especially if they come close or overlap.
Any thoughts about how to solve my issue?

Thank you very much in advance,

Julien

(Screenshot: top-view tracking output)
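This isn't part of ezTrack, but a hedged sketch of the kind of size filter that could be applied to the thresholded difference image before the center of mass is taken, using OpenCV's connected components:

    import cv2
    import numpy as np

    def keep_largest_blob(diff, thresh, min_area=200):
        # diff: absolute frame-vs-reference difference image (uint8).
        # min_area: blobs smaller than this (e.g. a cable clip) are discarded.
        binary = (diff > thresh).astype(np.uint8)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        mask = np.zeros_like(binary)
        if n > 1:
            areas = stats[1:, cv2.CC_STAT_AREA]  # label 0 is the background
            best = 1 + int(np.argmax(areas))
            if areas[best - 1] >= min_area:
                mask[labels == best] = 1
        return diff * mask

As the poster suspects, this cannot disambiguate the clip from the mouse once they touch, since they merge into a single blob.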

First cell fails to run/load packages

I had to reinstall all programs on my computer, including everything Python-related, and I am now having issues running the first cell of LocationTracking_BatchProcess.

This is the error message.

Things I have tried:

  • I have removed the ezTrack environment multiple times (conda env remove -n ezTrack) and recreated it using "conda create -y -n ezTrack -c conda-forge python=3.8 pandas=1.3.2 matplotlib=3.1.1 opencv=4.5.3 jupyter=1.0.0 holoviews=1.14.5 scipy=1.7.1 scikit-learn=0.24.2 bokeh=2.3.3 jinja2=3.0.3 tqdm" (exact command copied from my Anaconda terminal). I still get the same error.
  • Creating the environment with scipy=1.7.3.
  • Rolling back numpy to 1.22.3 and 1.21.5 (creates issues with holoviews).
  • Varying combinations of different pandas, scipy, numpy, and holoviews versions.
  • About 10 different version rollbacks as more things became incompatible with each other, but I did not find a working combination.

This issue began a month ago and is still continuing. I'm using win64 & Chrome.

Here is the conda list for the exact environment created from the installation instructions:

(Attached: conda list screenshots.)

Thanks!

Program Crash

Hi,
When I am using the location tracking (individual) notebook and get to step 7c, the page crashes and I am not sure why.

ROI

Hello,

I managed to use this code to track the animal in the whole box.
However, when I draw ROIs, I get a KeyError.
I named my ROIs ('region_names' : ["start","middle","reward"]) and I drew them.
But unlike in the video explaining ezTrack, when I draw a region, the name is not written inside my square.

This is the error that I get when I run the cell:

 location = lt.TrackLocation(video_dict, tracking_params)
 location.to_csv(os.path.splitext(video_dict['fpath'])[0] + '_LocationOutput.csv', index=False)
 location.head()

KeyError Traceback (most recent call last)
in
----> 1 location = lt.TrackLocation(video_dict, tracking_params)
2 location.to_csv(os.path.splitext(video_dict['fpath'])[0] + '_LocationOutput.csv', index=False)
3 location.head()

~\ezTrack\LocationTracking_Functions.py in TrackLocation(video_dict, tracking_params)
784
785 #add region of interest info
--> 786 df = ROI_Location(video_dict, df)
787 if video_dict['region_names'] is not None:
788 print('Defining transitions...')

~\ezTrack\LocationTracking_Functions.py in ROI_Location(video_dict, location)
1148 #Create ROI Masks
1149 ROI_masks = {}
-> 1150 for poly in range(len(video_dict['roi_stream'].data['xs'])):
1151 x = np.array(video_dict['roi_stream'].data['xs'][poly]) #x coordinates
1152 y = np.array(video_dict['roi_stream'].data['ys'][poly]) #y coordinates

KeyError: 'roi_stream'

Step 3 output image is not displaying

Hello!
I have a question regarding step 3's output. I ran step 3 and got a text output corresponding to the video I wanted to process, but there was no image output that I could then crop. When I try to run step 4, the same issue occurs: there is no image output. Any advice on how to correct this?

Thank you in advance!

Step 8c Stalling Before Completion

I have tried two different videos, and each of them stopped before reaching 100% in step 8c. I am not sure what is wrong. One stopped at 77% and the other at 68%. The first few times, I was able to see the first 5 rows of tracking data even though it had stalled, but this most recent time it gave an error message. I am new to Python/Jupyter Notebook.

location = lt.TrackLocation(video_dict, tracking_params)
location.to_csv(os.path.splitext(video_dict['fpath'])[0] + '_LocationOutput.csv', index=False)
location.head()
68%|██████████████████████████████████████████████████████████████████▌ | 38655/56961 [59:47<28:18, 10.78it/s]
total frames processed: 38654


KeyError Traceback (most recent call last)
in
----> 1 location = lt.TrackLocation(video_dict, tracking_params)
2 location.to_csv(os.path.splitext(video_dict['fpath'])[0] + '_LocationOutput.csv', index=False)
3 location.head()

~\Downloads\ezTrack-master\ezTrack-master\LocationTracking_Functions.py in TrackLocation(video_dict, tracking_params)
765
766 #add region of interest info
--> 767 df = ROI_Location(video_dict, df)
768
769 #update scale, if known

~\Downloads\ezTrack-master\ezTrack-master\LocationTracking_Functions.py in ROI_Location(video_dict, location)
1125 #Create ROI Masks
1126 ROI_masks = {}
-> 1127 for poly in range(len(video_dict['roi_stream'].data['xs'])):
1128 x = np.array(video_dict['roi_stream'].data['xs'][poly]) #x coordinates
1129 y = np.array(video_dict['roi_stream'].data['ys'][poly]) #y coordinates

KeyError: 'roi_stream'

ezTrack LocationTracking_Individual Step 5

Hello,

Across multiple OSes, step 5 of the location tracking module seems to be impassable. I'm not sure if this is simply a problem with the versions of the modules I have installed; if so, is there a place where the module versions used for the original implementation can be found?

Thanks in advance for your advice.

LocationTracking_Individual: problem when changing tracking parameters

Hi,

This tool is proving to be invaluable for me, so thanks!

I have been having success with this code so far but have just encountered a problem. For some of my videos I would like to change the 'method' in 'tracking_params' (section 7a. Set Location Tracking Parameters) from 'abs' to 'dark'. I notice that for some videos the display examples in 7b are more accurate when I use 'dark' instead of 'abs', hence I would like to change from the default tracking parameters.

In case it is relevant, I split the video being analysed into three areas ('Left', 'Centre', 'Right') and am interested in determining the relative time spent in each. However, when I run the code, the summary stats for the 'Centre' and 'Right' columns have the value '0', with only the 'Left' column having a value (e.g. 0.4132).

I was wondering if you could shed any light on this problem?

Thanks in advance,

Luke

The third step encountered some problems

Thank you very much for the quick tool you have developed; it brings great convenience to those doing basic scientific research, especially people like me with essentially no programming experience. But I encountered a small problem while using it and don't know how to deal with it. It has me a little confused, so I am asking for your help and hope to get your answer.

Freeze Analysis Save Video

Hello!

I'm using the Freeze Analysis module and, although all aspects of the analysis are working for me as far as I can tell, I'm having difficulty with item number 6, which gives the option to display the specified frames as well as save the video, which I'd like to use for a presentation.

I see a video exported in the folder as 'video_output.avi', but it is only 6 KB and I can't open it in VLC or Windows Media Player. I'm using .mp4 videos as my source video files.

I've been looking at FreezeAnalysis_Functions.py, and it seems like the file may be getting initialized but not written. Any thoughts on that? I don't have much experience with the cv2 module.
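A tiny output file usually does mean cv2.VideoWriter never accepted any frames; common causes are an unavailable codec or frames whose size doesn't match what the writer was opened with. A minimal check (codec, fps, and frame size below are assumptions; they must match the frames actually passed to write()):

    import cv2

    w, h, fps = 640, 480, 30  # must match the frames you write
    writer = cv2.VideoWriter('video_output.avi',
                             cv2.VideoWriter_fourcc(*'XVID'), fps, (w, h))
    print(writer.isOpened())  # False means this codec/container combo failed
    writer.release()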

Limiting frame number

Hi DeniseCaiLab,

Thank you for making such a great tool. I am brand new to all sorts of programming and analysis but have found this pretty easy to use. The only issue I am having is that only 26 frames are being analyzed from my video, which is around 9,000 frames. Do you have any ideas as to what could be going wrong? I don't see where to specify how many frames tracking should be done on.

Many, many thanks,
Cassandra
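For reference, the number of frames analyzed is controlled by the 'start'/'end' entries of video_dict, defined near the top of the notebook; a sketch with placeholder values (other keys omitted):

    video_dict = {
        'dpath': '/path/to/videos',  # hypothetical directory
        'file': 'session1.avi',      # hypothetical file
        'start': 0,                  # first frame to analyze
        'end': None,                 # None = run through the last frame
    }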

Issues in LocationTracking_Functions.py?

When I run step 1 (Load Necessary Packages) of LocationTracking_Individual.ipynb, something goes wrong:

WARNING:param.Dimension: Use method 'get_param_values' via param namespace
(the warning above repeats roughly 130 times)
WARNING:param.ParameterizedMetaclass: Use method 'params' via param namespace

ImportError when importing LocationTracking_Functions

Hi, ezTrack is a pretty cool and straightforward tool.
We can install it successfully by following the README file, but when we tried to run the code in Jupyter Notebook, we came across the error below:

ImportError Traceback (most recent call last)
in
6 import matplotlib.pyplot as plt
7 import pandas as pd
----> 8 import LocationTracking_Functions as lt
9 import holoviews as hv

~\Downloads\ezTrack-1.1\LocationTracking_Functions.py in
42 from holoviews import streams
43 from holoviews.streams import Stream, param
---> 44 hv.notebook_extension('bokeh')
45 warnings.filterwarnings("ignore")
46

~\anaconda3\envs\ezTrack\lib\site-packages\param\parameterized.py in __new__(class_, *args, **params)
3098 inst = class_.instance()
3099 inst.param.set_name(class_.__name__)
-> 3100 return inst.__call__(*args,**params)
3101
3102 def __call__(self,*args,**kw):

~\anaconda3\envs\ezTrack\lib\site-packages\holoviews\ipython\__init__.py in __call__(self, *args, **params)
112
113 def __call__(self, *args, **params):
--> 114 super(notebook_extension, self).__call__(*args, **params)
115 # Abort if IPython not found
116 try:

~\anaconda3\envs\ezTrack\lib\site-packages\holoviews\util\__init__.py in __call__(self, *args, **params)
703
704 if selected_backend is None:
--> 705 raise ImportError('None of the backends could be imported')
706 Store.set_current_backend(selected_backend)
707

ImportError: None of the backends could be imported

And LocationTracking_Functions.py is in the same folder as LocationTracking_Individual.ipynb.
Any suggestions? Thank you.

Reanalyzing motion data for freezing?

Are there functions for reanalyzing the motion data for freezing with a different threshold, without having to reanalyze the entire video files? I may have missed it, but I looked through your code and paper and didn't see anything about this.

My current problem is that I am trying to find an optimal threshold for calculating freezing by comparing ezTrack's output percentages to our current freezing system, but this will take way too long if I have to reprocess my video files every single time I want to tweak the threshold (I'm working with 45-minute videos, which take roughly 2 hours each to process). Instead, it would be nice if I only had to process the videos once and could then take the raw "motion" values from the FreezingOutput.csv and simply reanalyze those with a new threshold (which should take seconds instead of hours).

I know that you currently have the "Individual" Jupyter notebook for optimizing parameters, but it seems to me that steps 5 and 7 of this notebook should be computable without actually reanalyzing the video files. I'm likely going to write my own code for this at the moment to save myself a headache, but I think this would be a very useful feature for you to add for others in the future.
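A hedged sketch of such a rescoring, assuming a per-frame 'Motion' column in FreezingOutput.csv and ezTrack-style scoring (motion below threshold, sustained for a minimum number of frames); this mimics, but is not guaranteed to match, the module's exact algorithm:

    import pandas as pd

    df = pd.read_csv('session_FreezingOutput.csv')  # hypothetical filename

    def rescore(motion, threshold, min_duration):
        below = motion < threshold                  # sub-threshold frames
        run_id = (below != below.shift()).cumsum()  # label runs of frames
        run_len = below.groupby(run_id).transform('size')
        return below & (run_len >= min_duration)    # keep sustained runs only

    df['Freezing_new'] = rescore(df['Motion'], threshold=180, min_duration=15)
    print(f"% freezing: {100 * df['Freezing_new'].mean():.1f}")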

1 second bins in freezing module

Hi there! If I want the program to output freezing data in 1-second bins for a 45-minute session, is there a better way to do this than manually writing in the frames I need in the summary report module? That would mean writing out bins covering 81,000 frames, lol!
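For what it's worth, the bins can be generated rather than typed. Assuming the summary step takes a dictionary mapping bin labels to (start_frame, end_frame) tuples, as in the notebook, a comprehension produces all 2,700 one-second bins:

    fps = 30               # hypothetical frame rate
    session_sec = 45 * 60  # 45-minute session
    bin_dict = {i: (i * fps, (i + 1) * fps) for i in range(session_sec)}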

only the first Holoviews object is displaying in LocationTracking_Individual

When running through LocationTracking_Individual, the first holoviews object, from step 3 "Load Video and Crop Frame", displays properly, but no image is displayed for step 4 "Defining Reference Frame for Location Tracking", and neither is the interactive object for setting the scale in step 6a "Define Scale for Distance Calculations".

video_dict issues when calling functions out of jupyter code

Hi everyone from @denisecailab,

Your tool is really interesting and quite useful, congratulations!

I have started using it to automate some of the tracking tasks in our lab, and I noticed a few issues when calling ezTrack functions from scripts outside the notebooks you provide. I don't have a detailed description of the issues as of now, because I only debugged quickly to get some results. However, if you want to start working on them before I give more details, here is some information:

  1. Please take a look at operations with video_dict['dpath'] and video_dict['file'], as I had some issues while communicating between functions. For example, TrackLocation should use os.path.join(os.path.normpath(video_dict['dpath']), video_dict['file']) instead of video_dict['dpath'] to load the image. Also, please let me know whether you really have the same issues or I am the only one facing them.

  2. batch_LoadFiles does not work in the notebook; it gives an empty list when using LocationTracking_BatchProcess.ipynb. The value of the 'ftype' key should include a dot, e.g. '.avi'.

I hope you find these comments useful.
Cheers,

No display of the Examples of Location Tracking

Hi @denisecailab ,

Thanks for this amazing tool!

I used it successfully for different videos; however, I am now analyzing "long" videos (1 h 50 min) and I no longer get the display of the Examples of Location Tracking (step 7b).
When I execute step 7b, "Wall time: 218 ms" prints out, but no graphs with tracking examples appear. There are no errors, and I can still analyze the entire video properly and display the Distance/Location.
Do you have any idea why this is happening?

Thanks in advance for your help!
Damien

Glitch for using reference frames (Option 2)

The Reference function, which generates the reference frame, uses the same video_dict defined for the video to be analyzed. The 'start' and 'end' values for the 'altfile' are therefore the same as the 'start' and 'end' values for the file to be analyzed. This means that if the value of 'start' is higher than the number of frames in the reference video (as it is in our case), the code cannot find frames in the reference video and generates an error message. Is there any chance of creating separate 'start' and 'end' variables for the video to be analyzed and the reference video? Do you think it would be appropriate to add a couple of lines to the Jupyter Notebook to create the reference from a subset of frames of the video to be analyzed, without having to use different software? If not, we can modify the code. I was not sure whether this could be useful to the broader community. Thanks!
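Until separate variables exist, one possible workaround (a sketch only; the Reference() call is abbreviated, see the notebook for its actual arguments) is to hand Reference() a copy of video_dict whose 'start'/'end' values are valid for the reference video:

    # Copy video_dict and override only the frame range for the reference.
    ref_dict = dict(video_dict)
    ref_dict['start'] = 0  # frames that exist in the shorter altfile
    ref_dict['end'] = None

    # reference = lt.Reference(ref_dict, ...)  # hypothetical abbreviated call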

Issue - Defining Regions of Interest

Hi there! Hope you're doing well.

When I try to define the regions of interest in my cropped frame (specifically, I'd like to divide the frame into two rectangles), I get a "No Regions to Draw" message. I try to draw but I can't (there's not even a box with a plus sign).

Just wanted to know how to fix this, if even possible! Let me know if you need more information. I have attached a picture of the issue.

Best,
Sofía

LocationTracking_BatchProcess bug

I used the LocationTracking_BatchProcess script to handle my videos. It worked well with no defined ROI, but I got an error when processing more than 8 videos with defined ROIs. Here is part of the error:
KeyError Traceback (most recent call last)
~\anaconda3\envs\ezTrack\lib\site-packages\pandas\core\indexes\base.py in get_value(self, series, key)
3102 return self._engine.get_value(s, k,
-> 3103 tz=getattr(series.dtype, 'tz', None))
3104 except KeyError as e1:
IndexError: index out of bounds
I hope it can be fixed, thanks!

issue with exporting .png pictures

Hello, I am trying to use the save feature to download the .png images from all steps in the code; however, the images are cropped when saved to my Downloads folder. I've attached some examples. Is there anything I can do to fix this? Thank you!
(Attached: cropped image exports from steps 8b, 3, and 8d.)

KeyError when defining ROI and scale

Hello again!

Thanks for being so punctual with your replies!

I had another question. When trying to define the ROI and when trying to define the scale, I am met with an error:

KeyError Traceback (most recent call last)
in
----> 1 img_scl, video_dict['scale'] = lt.DistanceTool(video_dict)
2 img_scl

~/Various Documents/Vole Research Stuff/ezTrack-master/LocationTracking/LocationTracking_Functions.py in DistanceTool(video_dict)
2212 #Make reference image the base image on which to draw
2213 image = hv.Image((
-> 2214 np.arange(video_dict['reference'].shape[1]),
2215 np.arange(video_dict['reference'].shape[0]),
2216 video_dict['reference']))

KeyError: 'reference'

How might I resolve this?

Thank you again in advance!


Failure to install necessary packages

When attempting to install the necessary packages,

  1. conda config --add channels conda-forge
  2. conda create -n ezTrack python=3.6.5 pandas=0.23.0 matplotlib=2.2.2 opencv=3.4.3 jupyter holoviews scipy

I get the following error message after running step 2:

The following NEW packages will be INSTALLED:

appnope:            0.1.0-py36_1000                      conda-forge
attrs:              19.1.0-py_0                          conda-forge
backcall:           0.1.0-py_0                           conda-forge
blas:               1.1-openblas                         conda-forge
bleach:             3.1.0-py_0                           conda-forge
bokeh:              1.1.0-py36_0                         conda-forge
bzip2:              1.0.6-h1de35cc_1002                  conda-forge
ca-certificates:    2019.3.9-hecc5488_0                  conda-forge
cairo:              1.14.12-h9d4d9ac_1005                conda-forge
certifi:            2019.3.9-py36_0                      conda-forge
cycler:             0.10.0-py_1                          conda-forge
decorator:          4.4.0-py_0                           conda-forge
defusedxml:         0.5.0-py_1                           conda-forge
entrypoints:        0.3-py36_1000                        conda-forge
ffmpeg:             4.0.2-ha0c5888_2                     conda-forge
fontconfig:         2.13.1-h1027ab8_1000                 conda-forge
freetype:           2.10.0-h24853df_0                    conda-forge
gettext:            0.19.8.1-h46ab8bc_1002               conda-forge
giflib:             5.1.7-h01d97ff_1                     conda-forge
glib:               2.56.2-h67dad55_1001                 conda-forge
gmp:                6.1.2-h0a44026_1000                  conda-forge
gnutls:             3.5.19-h2a4e5f8_1                    conda-forge
graphite2:          1.3.13-h2098e52_1000                 conda-forge
harfbuzz:           1.9.0-h9889186_1001                  conda-forge
hdf5:               1.10.3-hfa1e0ec_1001                 conda-forge
holoviews:          1.12.1-py_2                          conda-forge
icu:                58.2-h0a44026_1000                   conda-forge
ipykernel:          5.1.0-py36h24bf2e0_1002              conda-forge
ipython:            7.5.0-py36h24bf2e0_0                 conda-forge
ipython_genutils:   0.2.0-py_1                           conda-forge
ipywidgets:         7.4.2-py_0                           conda-forge
jasper:             1.900.1-h636a363_1006                conda-forge
jedi:               0.13.3-py36_0                        conda-forge
jinja2:             2.10.1-py_0                          conda-forge
jpeg:               9c-h1de35cc_1001                     conda-forge
jsonschema:         3.0.1-py36_0                         conda-forge
jupyter:            1.0.0-py_2                           conda-forge
jupyter_client:     5.2.4-py_3                           conda-forge
jupyter_console:    6.0.0-py_0                           conda-forge
jupyter_core:       4.4.0-py_0                           conda-forge
kiwisolver:         1.1.0-py36h770b8ee_0                 conda-forge
libblas:            3.8.0-5_hd44dcd8_netlib              conda-forge
libcblas:           3.8.0-5_hd44dcd8_netlib              conda-forge
libcxx:             8.0.0-2                              conda-forge
libcxxabi:          8.0.0-2                              conda-forge
libffi:             3.2.1-h6de7cb9_1006                  conda-forge
libgfortran:        3.0.1-0                              conda-forge
libiconv:           1.15-h01d97ff_1005                   conda-forge
liblapack:          3.8.0-5_hd44dcd8_netlib              conda-forge
libpng:             1.6.37-h2573ce8_0                    conda-forge
libsodium:          1.0.16-h1de35cc_1001                 conda-forge
libtiff:            4.0.10-h79f4b77_1001                 conda-forge
libwebp:            0.5.2-7                              conda-forge
libxml2:            2.9.9-hd80cff7_0                     conda-forge
markupsafe:         1.1.1-py36h1de35cc_0                 conda-forge
matplotlib:         2.2.2-py36hbf02d85_2                            
mistune:            0.8.4-py36h1de35cc_1000              conda-forge
nbconvert:          5.5.0-py_0                           conda-forge
nbformat:           4.4.0-py_1                           conda-forge
ncurses:            6.1-h0a44026_1002                    conda-forge
nettle:             3.3-0                                conda-forge
notebook:           5.7.8-py36_0                         conda-forge
numpy:              1.15.2-py36_blas_openblashd3ea46f_1  conda-forge [blas_openblas]
olefile:            0.46-py_0                            conda-forge
openblas:           0.2.20-8                             conda-forge
opencv:             3.4.3-py36_blas_openblash5e3fa27_201 conda-forge [blas_openblas]
openh264:           1.8.0-hd9629dc_1000                  conda-forge
openssl:            1.0.2r-h1de35cc_0                    conda-forge
packaging:          19.0-py_0                            conda-forge
pandas:             0.23.0-py36_1                        conda-forge
pandoc:             2.7.2-0                              conda-forge
pandocfilters:      1.4.2-py_1                           conda-forge
param:              1.9.0-py_0                           conda-forge
parso:              0.4.0-py_0                           conda-forge
pcre:               8.41-h0a44026_1003                   conda-forge
pexpect:            4.7.0-py36_0                         conda-forge
pickleshare:        0.7.5-py36_1000                      conda-forge
pillow:             6.0.0-py36h7095ceb_0                 conda-forge
pip:                19.1-py36_0                          conda-forge
pixman:             0.34.0-h1de35cc_1003                 conda-forge
prometheus_client:  0.6.0-py_0                           conda-forge
prompt_toolkit:     2.0.9-py_0                           conda-forge
ptyprocess:         0.6.0-py_1001                        conda-forge
pygments:           2.3.1-py_0                           conda-forge
pyparsing:          2.4.0-py_0                           conda-forge
pyqt:               5.6.0-py36hc26a216_1008              conda-forge
pyrsistent:         0.15.1-py36h01d97ff_0                conda-forge
python:             3.6.5-1                              conda-forge
python-dateutil:    2.8.0-py_0                           conda-forge
pytz:               2019.1-py_0                          conda-forge
pyviz_comms:        0.7.2-py_0                           conda-forge
pyyaml:             5.1-py36h1de35cc_0                   conda-forge
pyzmq:              18.0.1-py36h2d07e9b_1                conda-forge
qt:                 5.6.2-h9e3eb04_4                     conda-forge
qtconsole:          4.4.3-py_0                           conda-forge
readline:           7.0-hcfe32e1_1001                    conda-forge
scipy:              1.2.1-py36hbd7caa9_1                 conda-forge
send2trash:         1.5.0-py_0                           conda-forge
setuptools:         41.0.1-py36_0                        conda-forge
sip:                4.18.1-py36h0a44026_1000             conda-forge
six:                1.12.0-py36_1000                     conda-forge
sqlite:             3.20.1-0                             conda-forge
terminado:          0.8.2-py36_0                         conda-forge
testpath:           0.4.2-py_1001                        conda-forge
tk:                 8.6.9-ha441bb4_1001                  conda-forge
tornado:            6.0.2-py36h01d97ff_0                 conda-forge
traitlets:          4.3.2-py36_1000                      conda-forge
wcwidth:            0.1.7-py_1                           conda-forge
webencodings:       0.5.1-py_1                           conda-forge
wheel:              0.33.1-py36_0                        conda-forge
widgetsnbextension: 3.4.2-py36_1000                      conda-forge
x264:               1!152.20180806-h1de35cc_0            conda-forge
xz:                 5.2.4-h1de35cc_1001                  conda-forge
yaml:               0.1.7-h1de35cc_1001                  conda-forge
zeromq:             4.3.1-h0a44026_1000                  conda-forge
zlib:               1.2.11-h1de35cc_1004                 conda-forge
Proceed ([y]/n)? y

Preparing transaction: done
Verifying transaction: done
Executing transaction: failed
ERROR conda.core.link:_execute(507): An error occurred while installing package 'conda-forge::attrs-19.1.0-py_0'.
FileNotFoundError(2, "No such file or directory: '/Users/home/anaconda3/envs/ezTrack/bin/python3.6'")
Attempting to roll back.


Rolling back transaction: done

When I check for the missing directory, I find '/Users/home/anaconda3/envs/ezTrack/conda-meta' but not '/bin/python3.6'. Any idea what the issue might be? I'm looking forward to testing out this software!

When I change the folder where I save freezing videos, the script shows "not found".

Hi,

For a while after I reinstalled the new version of ezTrack, I didn't have any problem finding freezing video files.
However, when I changed the folder (path) to analyze other video files, the script didn't read them.

(Screenshot attached.)

In the screenshot,
"/Users/sungmo/Dropbox/tracking_in_python/original_test_video_CT2/ADFP2/m1_behaviour.avi" is a new path.
The original one was "/Users/sungmo/Dropbox/tracking_in_python/original_test_video_CT1/ADFP2/m1_behaviour.avi".

I only changed the folder to another.

Could you help me out?
Thank you for your time and attention.

Best,
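One quick sanity check is to ask Python whether the assembled path exists before running the notebook cell:

    import os

    dpath = '/Users/sungmo/Dropbox/tracking_in_python/original_test_video_CT2/ADFP2'
    fname = 'm1_behaviour.avi'
    print(os.path.isfile(os.path.join(dpath, fname)))  # should print True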

Package Loading issue in Jupyter Notebook

Hi there!

I've gotten the ezTrack Jupyter Notebook to work in the past and have even analyzed videos, but I'm running into a bit of an error now. After setting up the environment, downloading the zip, and opening Jupyter Notebook, I try to run the first step of the "LocationTracking_Individual.ipynb" file. When I run the cell that loads the necessary packages, it returns "WARNING:param.Dimension: Use method 'get_param_values' via param namespace" about 50 times, followed by "WARNING:param.ParameterizedMetaclass: Use method 'params' via param namespace" another 50 times, and then "WARNING:param.BokehRenderer: Use method 'params' via param namespace" and "WARNING:param.NotebookArchive: Use method 'params' via param namespace".
I've never run into this before, and I thought I might ignore it and continue with my video analysis, but it seems to affect step 3. When I try to run the Load Video and Crop Frame step, this pops up:
(Attached: step 3 error messages.)
I'm not sure if the video is potentially too long or there is some other issue going on. Help would be much appreciated! Thank you!

Cropping and ROI

Thank you for creating this software, it is really helpful. I am currently using it to track the distance travelled of mice in an open field. When I use the program to measure the total distance, it works great on a video that only contains the box.

However, I have tried using it to crop a video where there is movement outside the box (step 3), and I ran into issues. Everything appears to be working correctly; however, after selecting the region to crop, the crop is not reflected in the next steps. And if I don't crop the video, I get interference with where the animal's center of mass is.

I'm also getting the same problem when I try to select an ROI: I am able to select the region, but this isn't remembered.

I am guessing there is a step that I am missing, but I can't work out what it is. I have been through the detailed protocol and still can't find where I am going wrong.

I hope that I can get some help on this issue. Thank you in advance.

Remove wire freezing analysis

Hello,

I am doing freezing analysis of animals with bilateral optogenetics implants.
The system counts the moving cables as movement even when the animal is freezing.

Is there a way to remove the wires in this analysis, as is done for location tracking?

Thanks
Léa

Be able to save motion trace?

I really love the program; it has made analyzing data in my lab super easy. However, I would love it even more if there were an option to save the motion trace that the program outputs. To improve on that further, a heat map or something similar showing where the animal spent the most time would be really cool.
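In the meantime, an occupancy heat map can be built from the saved location output with a few lines of matplotlib (the filename and the 'X'/'Y' column names are assumptions; check your CSV header):

    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv('session_LocationOutput.csv')  # hypothetical filename

    plt.hist2d(df['X'], df['Y'], bins=50, cmap='hot')
    plt.gca().invert_yaxis()  # image coordinates: y increases downward
    plt.colorbar(label='frames per bin')
    plt.title('Occupancy heat map')
    plt.savefig('heatmap.png', dpi=150)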

installing in Ubuntu 20.04

I'm having issues with holoviews/bokeh under Ubuntu 20.04. In particular, I'm not able to get any dynamic output in my notebooks. Has anyone solved this successfully?

Error Loading Packages in Individual Location Tracking

Hi,

When I run Step 1 of the Location Tracking Individual Module, I receive the following error:

(Attached: import error screenshot.)

If relevant, the ezTrack conda environment I installed is as follows:
conda create -y -n ezTrack -c conda-forge python=3.6 pandas=0.23.0 matplotlib=3.1.1 opencv=3.4.3 jupyter=1.0.0 holoviews=1.12.3 scipy=1.2.1

I'm brand new to Python (and programming as a whole) and am unsure how to proceed - any advice would be greatly appreciated.

Thank you so much!

Failure to Load Necessary Packages

Hi,

When I tried to load the necessary packages, I got the error below:

cannot import name 'Markup' from 'jinja2' (/Users/user/opt/miniconda3/envs/ezTrack/lib/python3.8/site-packages/jinja2/__init__.py)

Could you help me out with this problem showing "File not found"?

Hi,

I am a complete beginner with Python.
I have been trying to analyze my mouse contextual test video files.
On my end, I can select a region of interest, like this:
(Screenshot attached.)

However, when I went to the next step, I got an error like this:
(Screenshot attached.)

I don't know how to fix this issue. I set the path of my files and directories:
(Screenshot attached.)

Could you help me out with this problem?
Thank you for your time and attention.
