
dannce's People

Contributors

davidhildebrand, dependabot[bot], diegoaldarondo, ksseverson57, selmaan, spoonsso

dannce's Issues

clean up camera calibration scripts

@jessedmarshall
It looks like there are multiple copies of multi-camera-calibration: one in the base folder and one in calibration. Can you verify that one of the copies can be deleted, and update any documentation in the README that refers to the locations of these files?

integrate all aws commands into generate_labels.py

I think it will be easier for the user, at least as an option, to collect all pre-labeling steps, including the aws commands, into one script, and all of the post-labeling steps into another script. I.e.,

prelabeling.py
Hand-labeling steps 1-4 in the current README

postlabeling.py
Steps 5-7

.yaml configuration files

Can we switch the configuration files to YAML format?

The current cfg reading process splits variables and values with .split(':'), which breaks Windows drive paths like C:\path\to\data.
https://github.com/spoonsso/DANNCE/blob/836745e5e7e4e5d8c9a4ba22b6bb13c224cac2c3/dannce/engine/processing.py#L266

I've written around it for now for @wlwang20's implementation by changing the ':' to '*', but it would be nice to have a consistent version.
YAML handles this case natively, and the reading/writing functions are one-liners with the widely used PyYAML package.
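To illustrate the failure mode and the YAML alternative, here is a minimal sketch (assuming the PyYAML package; the datadir key is just an example):

import yaml  # PyYAML; not in the standard library, but the de facto standard

line = "datadir: C:\\path\\to\\data"

# The current .split(':') approach also splits the Windows drive letter:
print(line.split(":"))        # ['datadir', ' C', '\\path\\to\\data']

# yaml.safe_load keeps the colon in the drive letter as part of the value:
print(yaml.safe_load(line))   # {'datadir': 'C:\\path\\to\\data'}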

Error Running DANNCE with Three Cameras

Hello! I am currently trying to run DANNCE with videos from three cameras. The cameras are already calibrated. I have also created a folder that contains the COM and DANNCE folders from markerless_mouse_1, a videos folder laid out as described in the DANNCE README and containing the videos from all three cameras, and the label3d_dannce.mat file from the demo folder in DANNCE.

When I enter the command to run DANNCE in the command window, the run fails with the following message: ValueError: generator already executing. I was looking through the output that led up to this error, and the section pasted below caught my attention.

Do you have any advice on how I can successfully run DANNCE with three cameras? Thank you!

Code pasted from Command Window:

Initializing Network...
Loading model from .\DANNCE\train_results\AVG\weights.1200-12.77642.hdf5
max
2250
Predicting on batch 0
c:\users\verpeutlab\desktop\dannce\dannce\engine\generator.py:975: UserWarning: Note: ignoring dimension mismatch in 3D labels
warnings.warn(msg)
Loading new video: videos\Camera1\0.mp4 for 0_Camera1
Loading new video: videos\Camera2\0.mp4 for 0_Camera2
Loading new video: videos\Camera1\0.mp4 for 0_Camera1
Loading new video: videos\Camera3\0.mp4 for 0_Camera3
Loading new video: videos\Camera2\0.mp4 for 0_Camera2
Loading new video: videos\Camera3\0.mp4 for 0_Camera3
C:/cb/pytorch_1000000000000/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: block: [1289,0,0], thread: [8,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
C:/cb/pytorch_1000000000000/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: block: [1289,0,0], thread: [9,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
C:/cb/pytorch_1000000000000/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: block: [1289,0,0], thread: [10,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.

Pre-Labeled Data

Hello, my team has been working with DeepLabCut pose estimation, and we have a lot of data already labeled in its format. We want to try DANNCE and were wondering if it would be possible to convert our data into a compatible format.

The data is in two files: a .csv and a .h5. The .csv is formatted like this:

"scorer", [Name], [Name], [Name], [Name],....
"individuals", [Animal1], [Animal1], [Animal1], [Animal1],...,[Animal2], [Animal2], [Animal2], [Animal2],....
"bodyparts", [Bodypart1], [Bodypart1], [Bodypart2], [Bodypart2],....
"coords", "x", "y", "x", "y",....
[Frame1], [X-coord], [Y-coord], [X-coord], [Y-coord],....
[Frame2], [X-coord], [Y-coord], [X-coord], [Y-coord],....
....

The .h5 is formatted like this:

"scorer", [Name]
"individuals", [Animal1] [Animal2]*
"bodyparts", [Bodypart1], [Bodypart2]
"coords", "x", "y", "x", "y",....
[Frame1], [X-coord], [Y-coord], [X-coord], [Y-coord],....
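For reference, a DeepLabCut CSV with these four header rows (scorer/individuals/bodyparts/coords) can be read into a pandas MultiIndex frame, which makes it easy to pull out per-bodypart (x, y) columns for conversion. A minimal sketch (the file name and the animal/bodypart names are placeholders):

import pandas as pd

# Four header rows (scorer, individuals, bodyparts, coords); first column = frame
df = pd.read_csv("dlc_labels.csv", header=[0, 1, 2, 3], index_col=0)

scorer = df.columns.get_level_values(0)[0]
xy = df[scorer]["Animal1"]["Bodypart1"][["x", "y"]].to_numpy()
print(xy.shape)               # (n_frames, 2)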

User-Specific Drives and Files in CAPTURE_demo Code

I was going through the CAPTURE_demo code, and I noticed that a few variables were defined by files obtained from user-specific paths (Jesse's). If you could either update these variables with files that any user can access, or tell me how to generate these files using either demo or experimental data, I would really appreciate it. I am not going to be using birds, so if you do not update the variables defined by user-specific paths in the bird-analysis files, that is fine. Below is a list of the scripts that contain user-specific paths, grouped by the CAPTURE_demo folder they are in.

Animating:
dannce_reprojection_sbys_bird_demo.m
plot_frame_dannce_reprojection_multi_bird_demo.m

Behavioral_analysis:
get_supervised_features_demo.m
demoreembed.m

Preprocessing:
preprocess_dannce.m
In preprocess_dannce.m, a user-specific path loads a predictions.mat file. Should this be changed to the path of the predictions.mat file generated by either the demo or an experiment?

Utility:
load_bird_anglestruct.m
load_link_files.m
The user-specific path in load_link_files.m is under the case for birds, so if you don't update it, that is fine.

VideoAnalysis:
demo_readinimageframes.m

KeyError: 'com' during dannce-train

Hi,
Upon starting dannce-train we get the following error. The COM prediction completed successfully, and the com3d file was generated in ./COM/predict_results.

(dannce) E:\DANNCE_test_210608>dannce-train C:\Users\realtime\dannce\configs\dannce_mouse_config.yaml
2021-07-14 14:00:37.660334: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
io_config not found in io.yaml file, falling back to main config
new_n_channels_out not found in io.yaml file, falling back to main config
batch_size not found in io.yaml file, falling back to main config
epochs not found in io.yaml file, falling back to main config
net_type not found in io.yaml file, falling back to main config
train_mode not found in io.yaml file, falling back to main config
num_validation_per_exp not found in io.yaml file, falling back to main config
vol_size not found in io.yaml file, falling back to main config
nvox not found in io.yaml file, falling back to main config
max_num_samples not found in io.yaml file, falling back to main config
dannce_finetune_weights not found in io.yaml file, falling back to main config
mono not found in io.yaml file, falling back to main config
com_train_dir set to: .\COM\train_results
com_predict_dir set to: .\COM\predict_results
dannce_train_dir set to: .\DANNCE\train_results\AVG
dannce_predict_dir set to: .\DANNCE\predict_results
dannce_predict_model set to: .\DANNCE\train_results\AVG\weights.1200-12.77642.hdf5
exp set to: [{'label3d_file': 'E:/DANNCE_test_210608/20210610_091000_Label3D_dannce.mat'}]
io_config set to: io.yaml
new_n_channels_out set to: 22
batch_size set to: 4
epochs set to: 1200
net_type set to: AVG
train_mode set to: finetune
num_validation_per_exp set to: 4
vol_size set to: 100
nvox set to: 64
max_num_samples set to: max
dannce_finetune_weights set to: C:\Users\realtime\dannce\demo\markerless_mouse_1\DANNCE\train_results
mono set to: True
base_config set to: C:\Users\realtime\dannce\configs\dannce_mouse_config.yaml
viddir set to: videos
crop_height set to: None
crop_width set to: None
camnames set to: None
n_channels_out set to: 20
sigma set to: 10
verbose set to: 1
net set to: None
gpu_id set to: 0
immode set to: vid
mirror set to: False
loss set to: mask_nan_keep_loss
num_train_per_exp set to: None
metric set to: ['euclidean_distance_3D']
lr set to: 0.001
augment_hue set to: False
augment_brightness set to: False
augment_hue_val set to: 0.05
augment_bright_val set to: 0.05
augment_rotation_val set to: 5
data_split_seed set to: None
valid_exp set to: None
com_fromlabels set to: False
medfilt_window set to: None
com_file set to: None
new_last_kernel_size set to: [3, 3, 3]
n_layers_locked set to: 2
vmin set to: None
vmax set to: None
interp set to: nearest
depth set to: False
comthresh set to: 0
weighted set to: False
com_method set to: median
cthresh set to: None
channel_combo set to: None
predict_mode set to: torch
n_views set to: 6
rotate set to: True
augment_continuous_rotation set to: False
drop_landmark set to: None
use_npy set to: False
rand_view_replace set to: True
n_rand_views set to: 0
multi_gpu_train set to: False
start_batch set to: 0
n_channels_in set to: None
extension set to: None
vid_dir_flag set to: None
chunks set to: None
lockfirst set to: None
load_valid set to: None
raw_im_h set to: None
raw_im_w set to: None
n_instances set to: 1
start_sample set to: None
write_npy set to: None
expval set to: None
com_thresh set to: None
cam3_train set to: None
debug_volume_tifdir set to: None
downfac set to: None
from_weights set to: None
dannce_predict_vol_tifdir set to: None
Using the following *dannce.mat files: .\20210610_091000_Label3D_dannce.mat
Setting vid_dir_flag to True.
Setting extension to .avi.
Setting chunks to {'Camera1': array([0]), 'Camera2': array([0]), 'Camera3': array([0]), 'Camera4': array([0]), 'Camera5': array([0])}.
Setting n_channels_in to 3.
Setting raw_im_h to 600.
Setting raw_im_w to 960.
Setting expval to True.
Setting net to finetune_AVG.
Setting crop_height to [0, 600].
Setting crop_width to [0, 960].
Setting maxbatch to max.
Setting start_batch to 0.
Setting vmin to -50.0.
Setting vmax to 50.0.
Fine-tuning from C:\Users\realtime\dannce\demo\markerless_mouse_1\DANNCE\train_results\weights.12000-0.00014.hdf5
Experiment 0 using videos in E:/DANNCE_test_210608\videos
Experiment 0 using camnames: ['Camera1', 'Camera2', 'Camera3', 'Camera4', 'Camera5']
{'0_Camera1': array([0]), '0_Camera2': array([0]), '0_Camera3': array([0]), '0_Camera4': array([0]), '0_Camera5': array([0])}
E:/DANNCE_test_210608/20210610_091000_Label3D_dannce.mat
The length of the camnames list must divide evenly into 6. Duplicate a subset of the views starting from the first camera (y/n)?y
Duping camnames. Changed from ['Camera1', 'Camera2', 'Camera3', 'Camera4', 'Camera5'] to ['Camera1', 'Camera2', 'Camera3', 'Camera4', 'Camera5', 'Camera1']
Traceback (most recent call last):
File "C:\Users\realtime\anaconda3\envs\dannce\Scripts\dannce-train-script.py", line 33, in
sys.exit(load_entry_point('dannce', 'console_scripts', 'dannce-train')())
File "c:\users\realtime\dannce\dannce\cli.py", line 66, in dannce_train_cli
dannce_train(params)
File "c:\users\realtime\dannce\dannce\interface.py", line 737, in dannce_train
) = do_COM_load(exp, expdict, n_views, e, params)
File "c:\users\realtime\dannce\dannce\interface.py", line 1685, in do_COM_load
c3dfile = io.load_com(exp["com_file"])
File "c:\users\realtime\dannce\dannce\engine\io.py", line 91, in load_com
d = sio.loadmat(path)["com"]
KeyError: 'com'
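For reference, the variables actually stored in the generated com3d file can be listed with scipy (a minimal sketch; the exact file name under ./COM/predict_results is an assumption):

import scipy.io as sio

# io.load_com above does sio.loadmat(path)["com"]; list the keys that exist instead
d = sio.loadmat("./COM/predict_results/com3d.mat")
print([k for k in d if not k.startswith("__")])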

Thanks for the help!

CUDA runtime implicit initialization on GPU:0 failed Status: device kernel image is invalid.

Hi!
I have run into a DANNCE version issue: dannce-predict ../../configs/dannce_mouse_config.yaml fails to run.
the error is:
tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid.

Are there any conflicts between CUDA and TensorFlow? Which versions work with dannce 1.1.0?
cudatoolkit 10.1.243 h036e899_8 conda-forge
cudnn 7.6.5.32 hc0a50b0_1 conda-forge
python 3.7.10 hffdb5ce_100_cpython conda-forge
pytorch 1.7.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
tensorflow 2.3.0 pypi_0 pypi

Please kindly advise me how to fix it. Many thanks!
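As a first sanity check, the snippet below confirms whether this TensorFlow build can see and initialize the GPU at all; an empty list or an error here points to a CUDA/TF version mismatch (a minimal sketch):

import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))  # empty list => GPU not usable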

Error when run "preprocess_dannce(filename_in,filename_out,species_name,'')"

Hi. I tried to run the demo script preprocess_dannce, and the error below occurred. I think it would be better to modify the original code so that it can handle infinite values. Many thanks for your effort @spoonsso @jessedmarshall
################################################################
median filtering getting velocities
Error using filtfilt
Expected input to be finite.

Error in filtfilt>efiltfilt (line 114)
validateattributes(x,{'double'},{'finite','nonempty'},'filtfilt');

Error in filtfilt (line 89)
y=efiltfilt(b,a,x);

Error in compute_preprocessed_mocapstruct (line 44)
abs_velocity_antialiased(ll,:) = filtfilt(f1,f2, marker_velocity(ll,:,4));

Error in preprocess_ratception_struct_demo (line 43)
ratception_struct_temppreproc = compute_preprocessed_mocapstruct(ratception_struct_temp,preprocessing_parameters);

Error in preprocess_dannce (line 105)
ratception_struct = preprocess_ratception_struct_demo(datahere,preprocessing_parameters,params);

Fast Playback Speed when Recording Videos

Hello! I have been using part of the script acquire_calibration_3cam_mouse_clean.m, the DANNCE camera calibration code, to record videos. However, I notice that the playback speed of these videos is faster than the speed at which the videos were recorded. I am using a Windows machine, and I can play my videos in VLC Media Player and Windows Media Player. Is there anything I can adjust in the code so that when I put a stopwatch in the arena, one second in the video corresponds to one second on the stopwatch? I am using three FLIR Blackfly S cameras, and my code is below.

imaqreset
numcams = 3;
vid = cell(1,numcams);
logfile = cell(1,numcams);

parentpath = 'D:\Camera_Videos\7_21_stopwatch';
numframes_aq = 1000;

logfile_tag = 'videofiles_run1';
lframe_tag = 'lframe_labels';

for kk = 1:numcams

vid{kk} = videoinput('gentl', kk,'Mono8');

src = getselectedsource(vid{kk});
src.GainAuto = 'Off';
src.GammaEnable = 'True';
src.ExposureAuto = 'Off';
src.AcquisitionFrameRateEnable ='True';
src.AcquisitionFrameRate = 80;

if kk ==1
    
    src.Gain = 18.06;
    src.Gamma = 0.8;
    src.ExposureTime = 59275; 

end
if kk == 2
    src.Gain = 17.97;
    src.Gamma = 0.8;
    src.ExposureTime = 60111; 
end
if kk == 3
    src.Gain = 11.82; 
    src.Gamma = 0.8;
    src.ExposureTime = 2831;
end

vid{kk}.FramesPerTrigger = 1;
vid{kk}.TriggerRepeat = 999; % 999 repeats + the initial trigger = 1000 frames

triggerconfig(vid{kk}, 'manual')

vid{kk}.ReturnedColorspace = 'rgb';
set(vid{kk}, 'LoggingMode', 'Disk&Memory');
logfile_names{kk} = strcat(parentpath,filesep,logfile_tag,num2str(kk),'.avi');
logfile{kk} = VideoWriter(logfile_names{kk});
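% NOTE (hedged suggestion, not part of the original script): VideoWriter
% defaults to 30 fps, but the loop below triggers a frame only every ~0.5 s,
% so the written file plays back much faster than real time. Setting the
% writer's FrameRate to the true acquisition rate should fix playback, e.g.:
% logfile{kk}.FrameRate = 2;  % roughly 1/pause(0.5); use your measured rate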


vid{kk}.DiskLogger = logfile{kk};
open(logfile{kk})
start(vid{kk})

end

for ll =1:numframes_aq
fprintf('triggering \n')
for kk = 1:numcams
trigger(vid{kk})
end

pause(0.5)

end

frame_image = cell(1,numcams);

for kk = 1:numcams
frame_image{kk} = getsnapshot(vid{kk});
close(logfile{kk})
end

lframename = strcat(parentpath,filesep,lframe_tag,'.mat');
save(lframename,'frame_image');

Folders and Documents Generated after Training DANNCE

Hello, I am inquiring about the folders and documents that were generated in a project's train_results folder after I executed the commands under the headings "Training and Predicting with the COMfinder U-Net" and "Training and Predicting with DANNCE" on the DANNCE GitHub homepage. I have attached an image of the documents and folders that were created. Can you please tell me how to continue setting up DANNCE with these items?
[attached image: new train_results documents]

How to Interpret Data Generated from Quickstart Demo

I successfully completed the steps on the DANNCE README regarding the Quickstart Demo, and new files were generated. However, the instructions for the Quickstart Demo do not explain how to read or interpret the results. Can you please provide more information or a link on this?

"ValueError: generator already executing" when running dannce-predict with 3 cameras

Hi,
We're using DANNCE with a 3-camera setup. We'd like to use the pre-trained 6-camera network (weights.rat.AVG.6cam.hdf5) and finetune the model. According to the wiki, DANNCE should duplicate the 3 views to feed the 6 heads of the model. However, while dannce-train works well with the default n_rand_views, dannce-predict runs into the following error:

    2021-08-03 16:45:36.202237: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
    io_config not found in io.yaml file, falling back to main config
    new_n_channels_out not found in io.yaml file, falling back to main config
    batch_size not found in io.yaml file, falling back to main config
    epochs not found in io.yaml file, falling back to main config
    net_type not found in io.yaml file, falling back to main config
    train_mode not found in io.yaml file, falling back to main config
    num_validation_per_exp not found in io.yaml file, falling back to main config
    vol_size not found in io.yaml file, falling back to main config
    nvox not found in io.yaml file, falling back to main config
    max_num_samples not found in io.yaml file, falling back to main config
    dannce_finetune_weights not found in io.yaml file, falling back to main config
    com_train_dir set to: .\COM\train_results\
    com_predict_dir set to: .\COM\predict_results\
    dannce_train_dir set to: .\DANNCE\train_results\
    dannce_predict_dir set to: .\DANNCE\predict_results\
    exp set to: [{'label3d_file': './20210803_142811_Label3D_dannce.mat'}]
    io_config set to: io.yaml
    new_n_channels_out set to: 16
    batch_size set to: 1
    epochs set to: 100
    net_type set to: AVG
    train_mode set to: finetune
    num_validation_per_exp set to: 4
    vol_size set to: 120
    nvox set to: 64
    max_num_samples set to: 100
    dannce_finetune_weights set to: .\DANNCE\weights\
    base_config set to: C:\Users\banerjeelab\Projects\dannce\configs\dannce_mouse_config.yaml
    viddir set to: videos
    crop_height set to: None
    crop_width set to: None
    camnames set to: None
    n_channels_out set to: 20
    sigma set to: 10
    verbose set to: 1
    net set to: None
    gpu_id set to: 0
    immode set to: vid
    mono set to: False
    mirror set to: False
    start_batch set to: 0
    start_sample set to: None
    com_fromlabels set to: False
    medfilt_window set to: None
    com_file set to: None
    new_last_kernel_size set to: [3, 3, 3]
    n_layers_locked set to: 2
    vmin set to: None
    vmax set to: None
    interp set to: nearest
    depth set to: False
    comthresh set to: 0
    weighted set to: False
    com_method set to: median
    cthresh set to: None
    channel_combo set to: None
    predict_mode set to: torch
    n_views set to: 6
    dannce_predict_model set to: None
    expval set to: None
    from_weights set to: None
    write_npy set to: None
    loss set to: mask_nan_keep_loss
    n_channels_in set to: None
    extension set to: None
    vid_dir_flag set to: None
    num_train_per_exp set to: None
    chunks set to: None
    lockfirst set to: None
    load_valid set to: None
    augment_hue set to: False
    augment_brightness set to: False
    augment_hue_val set to: 0.05
    augment_bright_val set to: 0.05
    augment_rotation_val set to: 5
    drop_landmark set to: None
    raw_im_h set to: None
    raw_im_w set to: None
    n_instances set to: 1
    use_npy set to: False
    data_split_seed set to: None
    valid_exp set to: None
    metric set to: ['euclidean_distance_3D']
    lr set to: 0.001
    rotate set to: True
    augment_continuous_rotation set to: False
    com_thresh set to: None
    cam3_train set to: None
    debug_volume_tifdir set to: None
    downfac set to: None
    dannce_predict_vol_tifdir set to: None
    n_rand_views set to: 0
    rand_view_replace set to: True
    multi_gpu_train set to: False
    heatmap_reg set to: False
    heatmap_reg_coeff set to: 0.01
    save_pred_targets set to: False
    Using the following *dannce.mat files: .\20210803_142811_Label3D_dannce.mat
    Setting vid_dir_flag to True.
    Setting extension to .mp4.
    Setting chunks to {'Camera0': array([0]), 'Camera1': array([0]), 'Camera2': array([0])}.
    Setting n_channels_in to 3.
    Setting raw_im_h to 1024.
    Setting raw_im_w to 1152.
    Setting expval to True.
    Setting net to finetune_AVG.
    Setting crop_height to [0, 1024].
    Setting crop_width to [0, 1152].
    Setting maxbatch to 100.
    Setting start_batch to 0.
    Setting vmin to -60.0.
    Setting vmax to 60.0.
    Using the following *dannce.mat files: .\20210803_142811_Label3D_dannce.mat
    Using torch predict mode
    Using camnames: ['Camera0', 'Camera1', 'Camera2']
    Experiment 0 using com3d: .\20210803_142811_Label3D_dannce.mat
    Removed 909 samples from the dataset because they either had COM positions over cthresh, or did not have matching sampleIDs in the COM file
    Saving 3D COM to .\DANNCE\predict_results\com3d_used.mat
    None
    2021-08-03 16:45:38.577213: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2021-08-03 16:45:38.583397: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x29558745c20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2021-08-03 16:45:38.583437: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2021-08-03 16:45:38.585008: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library nvcuda.dll
    2021-08-03 16:45:38.608816: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
    pciBusID: 0000:01:00.0 name: Quadro RTX 4000 computeCapability: 7.5
    coreClock: 1.545GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 387.49GiB/s
    2021-08-03 16:45:38.608947: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
    2021-08-03 16:45:38.609489: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
    2021-08-03 16:45:38.609579: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
    2021-08-03 16:45:38.609668: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
    2021-08-03 16:45:38.609769: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
    2021-08-03 16:45:38.609861: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
    2021-08-03 16:45:38.609963: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
    2021-08-03 16:45:38.610081: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
    2021-08-03 16:45:39.020185: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
    2021-08-03 16:45:39.020271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
    2021-08-03 16:45:39.020294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
    2021-08-03 16:45:39.021020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3686 MB memory) -> physical GPU (device: 0, name: Quadro RTX 4000, pci bus id: 0000:01:00.0, compute capability: 7.5)
    2021-08-03 16:45:39.023501: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x29503d12ec0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2021-08-03 16:45:39.023585: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Quadro RTX 4000, Compute Capability 7.5
    Init took 0.44759035110473633 sec.
    Initializing Network...
    Loading model from .\DANNCE\train_results\weights.97-21.50072.hdf5
    Predicting on batch 0
    c:\users\banerjeelab\projects\dannce\dannce\engine\generator.py:1221: UserWarning: Note: ignoring dimension mismatch in 3D labels
      warnings.warn(msg)
    Loading new video: videos\Camera1\0.mp4 for 0_Camera1
    Loading new video: videos\Camera0\0.mp4 for 0_Camera0
    Loading new video: videos\Camera1\0.mp4 for 0_Camera1
    Loading new video: videos\Camera2\0.mp4 for 0_Camera2
    Loading new video: videos\Camera0\0.mp4 for 0_Camera0
    Loading new video: videos\Camera2\0.mp4 for 0_Camera2
    Traceback (most recent call last):
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\Scripts\dannce-predict-script.py", line 33, in <module>
        sys.exit(load_entry_point('dannce', 'console_scripts', 'dannce-predict')())
      File "c:\users\banerjeelab\projects\dannce\dannce\cli.py", line 54, in dannce_predict_cli
        dannce_predict(params)
      File "c:\users\banerjeelab\projects\dannce\dannce\interface.py", line 1596, in dannce_predict
        n_chn,
      File "c:\users\banerjeelab\projects\dannce\dannce\engine\inference.py", line 696, in infer_dannce
        ims = generator.__getitem__(i)
      File "c:\users\banerjeelab\projects\dannce\dannce\engine\generator.py", line 966, in __getitem__
        X, y = self.__data_generation(list_IDs_temp)
      File "c:\users\banerjeelab\projects\dannce\dannce\engine\generator.py", line 1258, in __data_generation
        result = self.threadpool.starmap(self.project_grid, arglist)
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\lib\multiprocessing\pool.py", line 276, in starmap
        return self._map_async(func, iterable, starmapstar, chunksize).get()
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\lib\multiprocessing\pool.py", line 657, in get
        raise self._value
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\lib\multiprocessing\pool.py", line 121, in worker
        result = (True, func(*args, **kwds))
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\lib\multiprocessing\pool.py", line 47, in starmapstar
        return list(itertools.starmap(args[0], args[1]))
      File "c:\users\banerjeelab\projects\dannce\dannce\engine\generator.py", line 1028, in project_grid
        extension=self.extension,
      File "c:\users\banerjeelab\projects\dannce\dannce\engine\video.py", line 231, in load_vid_frame
        self.currvideo[camname].close() if self.predict_flag else \
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\lib\site-packages\imageio\core\format.py", line 259, in close
        self._close()
      File "C:\Users\banerjeelab\anaconda3\envs\dannce\lib\site-packages\imageio\plugins\ffmpeg.py", line 343, in _close
        self._read_gen.close()
    ValueError: generator already executing

Thank you in advance for your help!!

Error in DANNCE Calibration Code

Hello! I am working to calibrate the cameras using the code in acquire_calibration_3cam_mouse_clean.m. However, I receive an error message on the following line: [rotationMatrix{kk},translationVector{kk}] = cameraPoseToExtrinsics(worldOrientation{kk},worldLocation{kk});

The error message states "Expected orientation to be finite." As I was exploring the code, I noticed that the Lframe coordinates are arranged in a 4 x 3 array, while the points generated from labeling the Lframe are in a 4 x 2 array. In the script titled estimateWorldCameraPose.m, the pack function fuses these two arrays together to form a 4 x 5 array. This array then gets stored as the variable allPoints in the msac function, which should be an M-by-2 array.

I think this error is caused by the data not being stored in an M-by-2 array. Could you show me how to fix this issue with regard to the three-dimensional Lframe data and the two-dimensional array generated from labeling the Lframe?

Thank you!

Support for marmoset data. Is a corresponding pretrained_weight needed?

I noticed that when finetuning from the rat MAX pretrained weights (3 cams) to fit a marmoset dataset, val_loss stays high (around 47).

So, will I need marmoset-based pretrained weights, or should I train from scratch?

Since we currently lack hand-labeled data, we can't collect the at least 10k labeled frames needed to train dannce from scratch.

Is there any way to avoid this trouble?

com-train: Mismatch in dimensions

Hi,
For fine-tuning the weights for COM prediction, I receive the following error, which appears to be the result of mismatched dimensions in the model weights. I used the COM weights from the markerless_mouse_1 demo.

2021-07-11 16:24:51.347157: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
io_config not found in io.yaml file, falling back to main config
batch_size not found in io.yaml file, falling back to main config
epochs not found in io.yaml file, falling back to main config
downfac not found in io.yaml file, falling back to main config
lr not found in io.yaml file, falling back to main config
num_validation_per_exp not found in io.yaml file, falling back to main config
max_num_samples not found in io.yaml file, falling back to main config
com_finetune_weights not found in io.yaml file, falling back to main config
crop_height not found in io.yaml file, falling back to main config
crop_width not found in io.yaml file, falling back to main config
mono not found in io.yaml file, falling back to main config
com_train_dir set to: .\COM\train_results
com_predict_dir set to: .\COM\predict_results
dannce_train_dir set to: .\DANNCE\train_results\AVG
dannce_predict_dir set to: .\DANNCE\predict_results
dannce_predict_model set to: .\DANNCE\train_results\AVG\weights.1200-12.77642.hdf5
exp set to: [{'label3d_file': 'E:/DANNCE_test_210608/20210610_091000_Label3D_dannce.mat'}]
io_config set to: io.yaml
batch_size set to: 2
epochs set to: 3
downfac set to: 2
lr set to: 5e-5
num_validation_per_exp set to: 2
max_num_samples set to: max
com_finetune_weights set to: .\COM\weights
crop_height set to: [0, 960]
crop_width set to: [0, 576]
mono set to: True
base_config set to: C:\Users\realtime\dannce\configs\com_mouse_config.yaml
viddir set to: videos
camnames set to: None
n_channels_out set to: 1
sigma set to: 30
verbose set to: 1
net set to: unet2d_fullbn
gpu_id set to: 0
immode set to: vid
mirror set to: False
loss set to: mask_nan_keep_loss
num_train_per_exp set to: None
augment_hue set to: False
augment_brightness set to: False
augment_hue_val set to: 0.05
augment_bright_val set to: 0.05
augment_rotation_val set to: 5
data_split_seed set to: None
valid_exp set to: None
dsmode set to: nn
debug set to: False
augment_shift set to: False
augment_zoom set to: False
augment_shear set to: False
augment_rotation set to: False
augment_shear_val set to: 5
augment_zoom_val set to: 0.05
augment_shift_val set to: 0.05
start_batch set to: 0
n_channels_in set to: None
extension set to: None
n_views set to: 6
vid_dir_flag set to: None
chunks set to: None
lockfirst set to: None
load_valid set to: None
drop_landmark set to: None
raw_im_h set to: None
raw_im_w set to: None
n_instances set to: 1
start_sample set to: 0
write_npy set to: None
use_npy set to: False
com_predict_weights set to: None
com_debug set to: None
com_exp set to: None
Using the following *dannce.mat files: .\20210610_091000_Label3D_dannce.mat
Setting vid_dir_flag to True.
Setting extension to .avi.
Setting chunks to {'Camera1': array([0]), 'Camera2': array([0]), 'Camera3': array([0]), 'Camera4': array([0]), 'Camera5': array([0])}.
Setting n_channels_in to 3.
Setting raw_im_h to 600.
Setting raw_im_w to 960.
Experiment 0 using videos in E:/DANNCE_test_210608\videos
Experiment 0 using camnames: ['Camera1', 'Camera2', 'Camera3', 'Camera4', 'Camera5']
{'0_Camera1': array([], dtype=float64), '0_Camera2': array([], dtype=float64), '0_Camera3': array([], dtype=float64), '0_Camera4': array([], dtype=float64), '0_Camera5': array([], dtype=float64)}
E:/DANNCE_test_210608/20210610_091000_Label3D_dannce.mat
Using nn downsampling
TRAIN EXPTS: [0]
Initializing Network...
2021-07-11 16:24:56.706552: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library nvcuda.dll
2021-07-11 16:24:56.746815: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:65:00.0 name: TITAN RTX computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 24.00GiB deviceMemoryBandwidth: 625.94GiB/s
2021-07-11 16:24:56.753051: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2021-07-11 16:24:56.756786: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2021-07-11 16:24:56.759787: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2021-07-11 16:24:56.762037: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2021-07-11 16:24:56.764458: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2021-07-11 16:24:56.768320: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2021-07-11 16:24:56.770936: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2021-07-11 16:24:56.773126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-07-11 16:24:56.776585: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-11 16:24:56.801838: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x271fe1b4680 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-11 16:24:56.804834: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-07-11 16:24:56.807370: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:65:00.0 name: TITAN RTX computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 24.00GiB deviceMemoryBandwidth: 625.94GiB/s
2021-07-11 16:24:56.813101: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2021-07-11 16:24:56.816043: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2021-07-11 16:24:56.819416: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2021-07-11 16:24:56.823680: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2021-07-11 16:24:56.826244: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2021-07-11 16:24:56.828835: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2021-07-11 16:24:56.833976: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2021-07-11 16:24:56.836278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-07-11 16:24:57.478459: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-11 16:24:57.483145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-07-11 16:24:57.485379: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-07-11 16:24:57.489168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 19144 MB memory) -> physical GPU (device: 0, name: TITAN RTX, pci bus id: 0000:65:00.0, compute capability: 7.5)
2021-07-11 16:24:57.499705: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x271ae9c3330 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-07-11 16:24:57.503910: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): TITAN RTX, Compute Capability 7.5
COMPLETE

Note: model weights could not be loaded due to a mismatch in dimensions. Assuming that this is a fine-tune with a different number of outputs and removing the top of the net accordingly
Traceback (most recent call last):
File "c:\users\realtime\dannce\dannce\interface.py", line 453, in com_train
os.path.join(params["com_finetune_weights"], weights)
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2211, in load_weights
hdf5_format.load_weights_from_hdf5_group(f, self.layers)
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 708, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\backend.py", line 3576, in batch_set_value
x.assign(np.asarray(value, dtype=dtype(x)))
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 858, in assign
self._shape.assert_is_compatible_with(value_tensor.shape)
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 1134, in assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (3, 3, 1, 32) and (32, 3, 3, 3) are incompatible

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2762, in setattr
super(tracking.AutoTrackable, self).setattr(name, value)
AttributeError: can't set attribute

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\realtime\anaconda3\envs\dannce\Scripts\com-train-script.py", line 33, in
sys.exit(load_entry_point('dannce', 'console_scripts', 'com-train')())
File "c:\users\realtime\dannce\dannce\cli.py", line 42, in com_train_cli
com_train(params)
File "c:\users\realtime\dannce\dannce\interface.py", line 461, in com_train
model.layers[-1].name = "top_conv"
File "C:\Users\realtime\anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2767, in setattr
'different name.').format(name))
AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @Property of the object. Please choose a different name.

Could you please suggest what might be going wrong and how I may fix this issue.

Thanks a lot!

ValueError: no field of name val_loss

Epoch 600/600
20/20 [==============================] - ETA: 0s - loss: 179.8598 - euclidean_distance_3D: 11.5716WARNING:tensorflow:Can save best model only with val_loss available, skipping.
Saving predictions on train and validation data, after epoch 599
20/20 [==============================] - 20s 998ms/step - loss: 179.8598 - euclidean_distance_3D: 11.5716
Renaming weights file with best epoch description
Traceback (most recent call last):
File "/home/xuchun/.conda/envs/dannce/bin/dannce-train", line 33, in
sys.exit(load_entry_point('dannce', 'console_scripts', 'dannce-train')())
File "/home/xuchun/dannce/dannce/cli.py", line 66, in dannce_train_cli
dannce_train(params)
File "/home/xuchun/dannce/dannce/interface.py", line 1249, in dannce_train
processing.rename_weights(dannce_train_dir, kkey, mon)
File "/home/xuchun/dannce/dannce/engine/processing.py", line 485, in rename_weights
q = r[mon]
ValueError: no field of name val_loss

This happened when I finished training the 3-cam AVG net of dannce.
How can I solve it?
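The warning at the top of the log ("Can save best model only with val_loss available, skipping") suggests training ran without a validation split, so there is no val_loss field for the weight-renaming step to read. A hedged guess at a workaround, using a parameter that appears in the other configs in these issues, is to hold out some samples so that val_loss exists:

num_validation_per_exp: 4  # assumed fix: a value > 0 creates a validation split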

ValueError: generator already executing.

When I ran dannce-predict, it gave the following message:

Traceback (most recent call last):
File "/home/qiushou/anaconda3/envs/dannce/bin/dannce-predict", line 33, in
sys.exit(load_entry_point('dannce', 'console_scripts', 'dannce-predict')())
File "/home/xulab/Documents/dannce/dannce/cli.py", line 54, in dannce_predict_cli
dannce_predict(params)
File "/home/xulab/Documents/dannce/dannce/interface.py", line 1577, in dannce_predict
n_chn,
File "/home/xulab/Documents/dannce/dannce/engine/inference.py", line 696, in infer_dannce
ims = generator.__getitem__(i)
File "/home/xulab/Documents/dannce/dannce/engine/generator.py", line 966, in __getitem__
X, y = self.__data_generation(list_IDs_temp)
File "/home/xulab/Documents/dannce/dannce/engine/generator.py", line 1258, in __data_generation
result = self.threadpool.starmap(self.project_grid, arglist)
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/multiprocessing/pool.py", line 276, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/multiprocessing/pool.py", line 47, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "/home/xulab/Documents/dannce/dannce/engine/generator.py", line 1028, in project_grid
extension=self.extension,
File "/home/xulab/Documents/dannce/dannce/engine/video.py", line 238, in load_vid_frame
im = vid.get_data(frame_num).astype("uint8") if self.predict_flag
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/site-packages/imageio/core/format.py", line 346, in get_data
im, meta = self._get_data(index, **kwargs)
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/site-packages/imageio/plugins/ffmpeg.py", line 384, in _get_data
result, is_new = self._read_frame()
File "/home/qiushou/anaconda3/envs/dannce/lib/python3.7/site-packages/imageio/plugins/ffmpeg.py", line 486, in _read_frame
s = self._read_gen.__next__()
ValueError: generator already executing
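For background, Python raises "generator already executing" when two threads advance the same generator at the same time; here, two loader threads hit the same imageio frame reader. A generic workaround (a sketch of the general pattern, not DANNCE's official fix) is to serialize access with a lock:

import threading

class ThreadSafeIterator:
    """Wrap an iterator so concurrent next() calls are serialized."""
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self.lock:  # only one thread advances the iterator at a time
            return next(self.it)

# usage (hypothetical): reader = ThreadSafeIterator(frame_generator)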

Visualise pose prediction

Hi, I'm following the 'get started' demo; however, it stops abruptly after generating save_data_AVG0.mat. It'd be great if you could add the missing steps to generate the examples shown in the README.
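In the meantime, the predictions can be inspected by hand. A minimal sketch (assuming the 3D predictions are stored under a key such as 'pred' with shape (n_frames, 3, n_markers); print the keys first to confirm):

import scipy.io as sio
import matplotlib.pyplot as plt

data = sio.loadmat("save_data_AVG0.mat")
print(data.keys())            # confirm the actual variable names
xyz = data["pred"][0]         # hypothetical key; first frame, shape (3, n_markers)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(xyz[0], xyz[1], xyz[2])
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()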

parameters when preprocessing the dannce data for a mouse

Hi Tim and Jesse, could you please help explain the parameters used when preprocessing the dannce data for a mouse?
I am not sure about the units of these measurements or about the corresponding values for a mouse rather than a rat.
Thank you very much for your attention.

preprocessing_parameters.median_filt_length = 3;
preprocessing_parameters.bad_frame_vel_thresh = ??
preprocessing_parameters.bad_frame_surround_flag = 0;
preprocessing_parameters.bad_frame_surround_number = 1;
%% preprocessing_parameters.interpolation_max_length = 5;
preprocessing_parameters.meanvelocity_lowpass = 60;
preprocessing_parameters.fastvelocity_threshold =??
preprocessing_parameters.moving_threshold = ??
preprocessing_parameters.moving_framewindow =??

explore predict_generators for COM and DANNCE

@ksseverson57

It's possible we may see continued speed-ups if we go back to a generator setup for dannce & com prediction. Previously I had concluded that asynchronous workers during video loading were problematic, in terms of moving the video decoder head out of order. Also, in TF2, setting workers > 1 results in some ominous warnings.

rat model weights

Hi, are the weights of the networks trained on rats that are mentioned in the paper available? The provided models seem to be fine-tuned on mice (since the network output has 22 landmarks rather than 20).

How to use "finetune" to train the DANNCE network

We are trying to use the rat.MAX weights to finetune our dannce model. We used either 5 or 14 labels, but can't get dannce-train to work.

In the dannce_config file we tried to use the following settings:
new_n_channels_out: 5
train_mode: finetune
dannce_finetune_weights: C:\dannce-1.2.0\demo\markerless_mouse_1\DANNCE\weights\weights.rat.MAX
net_type: AVG
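One thing worth checking (an assumption on our part, not a confirmed diagnosis): the settings above finetune an AVG-type net from MAX weights. Matching net_type to the architecture of the weights file may avoid the weight-transpose error below:

net_type: MAX  # hedged suggestion: match the architecture of weights.rat.MAX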

This is the error we get:
Traceback (most recent call last):
File "C:\Users\YoramG\Anaconda3\envs\dannce\Scripts\dannce-train-script.py", line 33, in
sys.exit(load_entry_point('dannce', 'console_scripts', 'dannce-train')())
File "c:\dannce-1.2.0\dannce\cli.py", line 66, in dannce_train_cli
dannce_train(params)
File "c:\dannce-1.2.0\dannce\interface.py", line 1129, in dannce_train
gridsize=gridsize,
File "c:\dannce-1.2.0\dannce\engine\nets.py", line 1125, in finetune_AVG
model = renameLayers(model, weightspath)
File "c:\dannce-1.2.0\dannce\engine\nets.py", line 1204, in renameLayers
model.load_weights(weightspath, by_name=True)
File "C:\Users\YoramG\Anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2209, in load_weights
f, self.layers, skip_mismatch=skip_mismatch)
File "C:\Users\YoramG\Anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 759, in load_weights_from_hdf5_group_by_name
layer, weight_values, original_keras_version, original_backend)
File "C:\Users\YoramG\Anaconda3\envs\dannce\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 403, in preprocess_weights_for_loading
weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
File "<array_function internals>", line 6, in transpose
File "C:\Users\YoramG\Anaconda3\envs\dannce\lib\site-packages\numpy\core\fromnumeric.py", line 651, in transpose
return _wrapfunc(a, 'transpose', axes)
File "C:\Users\YoramG\Anaconda3\envs\dannce\lib\site-packages\numpy\core\fromnumeric.py", line 61, in _wrapfunc
return bound(*args, **kwds)
ValueError: axes don't match array

Error cloning the repository

Hello!

Promising tool for behavioral experiments, looks much better than DLC!

However, I guess some parts of the repository are still private, as I get the following error when I try to clone the repository:

(dannce) C:\Users\serce>git clone --recursive https://github.com/spoonsso/dannce
Cloning into 'dannce'...
remote: Enumerating objects: 195, done.
remote: Counting objects: 100% (195/195), done.
remote: Compressing objects: 100% (136/136), done.
remote: Total 2648 (delta 130), reused 112 (delta 59), pack-reused 2453
Receiving objects: 100% (2648/2648), 2.45 GiB | 23.46 MiB/s, done.
Resolving deltas: 100% (1582/1582), done.
Updating files: 100% (137/137), done.
Submodule 'Label3D' (https://github.com/diegoaldarondo/Label3D/) registered for path 'Label3D'
Submodule 'campy' (https://github.com/ksseverson57/campy/) registered for path 'campy'
Cloning into 'C:/Users/serce/dannce/Label3D'...
remote: Enumerating objects: 50, done.
remote: Counting objects: 100% (50/50), done.
remote: Compressing objects: 100% (34/34), done.
remote: Total 892 (delta 23), reused 39 (delta 16), pack-reused 842
Receiving objects: 100% (892/892), 128.81 MiB | 24.83 MiB/s, done.
Resolving deltas: 100% (394/394), done.
Cloning into 'C:/Users/serce/dannce/campy'...
remote: Enumerating objects: 91, done.
remote: Counting objects: 100% (91/91), done.
remote: Compressing objects: 100% (88/88), done.
remote: Total 341 (delta 39), reused 0 (delta 0), pack-reused 250
Receiving objects: 100% (341/341), 102.23 KiB | 376.00 KiB/s, done.
Resolving deltas: 100% (154/154), done.
Submodule path 'Label3D': checked out '83c92cec6787689cf6ab16610c2bfc7d1ba08357'
Submodule 'deps/Animator' (git@github.com:diegoaldarondo/Animator.git) registered for path 'Label3D/deps/Animator'
Cloning into 'C:/Users/serce/dannce/Label3D/deps/Animator'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:diegoaldarondo/Animator.git' into submodule path 'C:/Users/serce/dannce/Label3D/deps/Animator' failed
Failed to clone 'deps/Animator'. Retry scheduled
Cloning into 'C:/Users/serce/dannce/Label3D/deps/Animator'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:diegoaldarondo/Animator.git' into submodule path 'C:/Users/serce/dannce/Label3D/deps/Animator' failed
Failed to clone 'deps/Animator' a second time, aborting
Submodule path 'campy': checked out 'fd120a2df65e4f01a2af9c401585dfceba4d5d52'
Failed to recurse into submodule path 'Label3D'

Could you perhaps provide some assistance with the issue?

Best regards,
Onur
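A common workaround for this failure (a general git technique, not a DANNCE-specific fix): the Animator submodule is registered with an SSH URL, which fails without GitHub SSH keys set up. Rewriting SSH URLs to HTTPS lets the submodules clone anonymously:

git config --global url."https://github.com/".insteadOf "git@github.com:"
git submodule update --init --recursive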

UnboundLocalError: local variable 'batch_outputs' referenced before assignment

Epoch 150/150
42/42 [==============================] - 13s 321ms/step - loss: 5.8465e-05
Traceback (most recent call last):
File "/home/xuchun/.conda/envs/dannce/bin/com-train", line 33, in
sys.exit(load_entry_point('dannce', 'console_scripts', 'com-train')())
File "/home/xuchun/dannce/dannce/cli.py", line 42, in com_train_cli
com_train(params)
File "/home/xuchun/dannce/dannce/interface.py", line 661, in com_train
write_debug(trainData=False)
File "/home/xuchun/dannce/dannce/interface.py", line 624, in write_debug
label_out = model.predict(ims_valid, batch_size=1)
File "/home/xuchun/.conda/envs/dannce/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 130, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/xuchun/.conda/envs/dannce/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1614, in predict
all_outputs = nest.map_structure_up_to(batch_outputs, concat, outputs)
UnboundLocalError: local variable 'batch_outputs' referenced before assignment

This happened when I finished training the COM finder.

How can I fix this?

move load_camera_Params etc. into one of the engine py files?

https://github.com/spoonsso/dannce/blob/54af2cfe666ff38df9de601b2ef97965db4ecde3/dannce/utils/__init__.py

currently "utils" is sort of a hodge podge of data utilities that are used process dannce predictions, manipulate neural network stuff, etc. maybe these new function in __init__.py would be a better fit for engine/processing.py, or perhaps we can create a new engine/io.py file?

@diegoaldarondo it would also be nice to change the name of the "params" cell array inside dannce.mat to 'CamParams' or something, so it is more intuitive if someone loads dannce.mat in MATLAB without consulting any documentation.

Support for tf 2.4 and above (for 30xx GPU)

Dear developers of DANNCE,

The 3090/3080 and other 30-series Nvidia GPUs only work with tf 2.4 and above (CUDA 11 and above).

However, DANNCE requires tf 2.3.

Would it be possible to provide support for tf 2.4, so that people can use the latest GPUs to run DANNCE?

Thanks!

How to Calibrate Using DANNCE

Hello! I am reaching out to ask for a detailed description of how to calibrate the cameras in DANNCE using the script acquire_calibration_3cam_mouse_clean. I have performed many calibration trials in my open field, but I am not sure how the checkerboard and the lframe should be placed for acquire_calibration_3cam_mouse_clean to run successfully, or for the reprojected points to match the detected points in all three cameras.

I have also been getting an error when labeling the points on my lframe in the image from the final camera view in Step 4 of acquire_calibration_3cam_mouse_clean. So I have restructured the script: I place the checkerboard directly in front of each camera while its 100 frames are taken (then move it in front of the next camera before the next set of 100 frames is acquired), and I do the same with the lframe when its first shot is taken. I then place the checkerboard and the lframe in the center of the arena for the subsequent shots. These changes make my reprojected points practically identical to my detected points when an image of the checkerboard appears in the code for Step 3 of acquire_calibration_3cam_mouse_clean. My script omits Steps 5 and 6. My arena has a plexiglass cylinder in it, which, according to the instructions on the main page, should be removed for the calibration.

Could you describe how to successfully calibrate the cameras in an open-field arena lined with a plexiglass cylinder?

Remove dead code

In ops.py, nets.py, processing.py, losses.py

Once everything is working.

DANNCE support for different sized/proportioned animals and vibration canceling methodologies

Hello! We want to migrate from DLC to DANNCE. Before getting started, we wanted to verify a few things.

First, does DANNCE have any issues with animals of different sizes? We are dealing with marmosets and want to capture data from both small juveniles and fully grown adults. Would we have to make separate datasets for the smaller and bigger animals, or can we use the same dataset for both? Since DANNCE learns where the markers sit relative to each other, would this cause a problem for animals of different sizes and proportions?

Secondly, for portability our cameras are attached to the animal setup, so some vibration is transmitted to the cameras. Would it be a problem for DANNCE to also track the corners of the enclosure so that we can remove the vibration noise in post? I assume that if those points aren't attached to the skeleton there shouldn't be a problem, but since DANNCE tracks where points are relative to other points, I don't know if this would cause problems later on.

If there is somewhere else I should ask these questions, please let me know and I will be happy to move them there. Thank you for your help!

Pose Estimation Using DANNCE

In the DANNCE paper, specific poses can be identified. However, I do not see any information regarding poses in the files generated by the DANNCE demo. Could you give me instructions on how to generate and interpret rodent poses using DANNCE?

Wand Calibration

Hi,

I was wondering what the best software would be for wand calibration, and how to put its output in a format that can be used with DANNCE.

Thanks!

Importing Experimental predictions.mat Data into CAPTURE_demo

Hello, I am referring back to Issue #43, predictions.mat File in CAPTURE_demo Code. It was mentioned that the predictions.mat file generated by the demo could not be imported into the CAPTURE_demo code, and I had to download another demo predictions file for the CAPTURE_demo code to run. I have looked at other pose-estimation software, but it cannot accommodate the three-dimensional nature of the predictions from DANNCE. If you could work on or provide the Bash script that executes dannce-predict followed by makeStructuredDataNoMocap.py, so that experimental prediction data from DANNCE can be interpreted, that would be appreciated!
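A minimal sketch of such a script, stitched together from the commands and demo paths that appear elsewhere in these issues (all paths are examples and will need adjusting):

#!/bin/bash
# Predict with DANNCE, then build the structured-data file for CAPTURE
dannce-predict ../../configs/dannce_mouse_config.yaml
python ../../dannce/utils/makeStructuredDataNoMocap.py \
    ./DANNCE/predict_results/save_data_AVG0.mat \
    ../../configs/mouse22_skeleton.mat \
    ./label3d_dannce.mat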

Generating video with skeleton overlaid

Hi!
I was wondering if I could generate a video clip with the skeleton overlaid to get an idea of the network's performance, as in the supplementary video. Is there a tool for this as part of the package, and if not, could you please give some pointers as to how this may be achieved?

Thanks!
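
One way to do this without extra tooling is to project the predicted 3D landmarks back into one camera view using that camera's calibration and draw them with OpenCV. A minimal sketch, assuming the predictions and camera parameters have been loaded from the saved .mat files (variable names are illustrative):

import cv2
import numpy as np

def draw_skeleton(frame, pts3d, K, dist, R, t, edges):
    # Project (n_landmarks, 3) world-coordinate points into the image plane.
    rvec, _ = cv2.Rodrigues(R)
    pts2d, _ = cv2.projectPoints(pts3d.astype(np.float64), rvec, t, K, dist)
    pts2d = [tuple(map(int, p)) for p in pts2d.reshape(-1, 2)]
    for a, b in edges:  # skeleton segments, e.g. taken from a skeleton .mat file
        cv2.line(frame, pts2d[a], pts2d[b], (0, 255, 0), 2)
    for p in pts2d:
        cv2.circle(frame, p, 3, (0, 0, 255), -1)
    return frame

# Loop over frames with cv2.VideoCapture / cv2.VideoWriter, calling
# draw_skeleton(frame, predictions[i], ...) for each frame i, to write the clip.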

Distances between Cameras and Arena Floor/Walls

Hello! I was looking over the methods section of the DANNCE paper, and I was wondering about a couple of distances that I want to take into consideration while recording data. Can you tell me the average distance between a camera and the floor of the arena, as well as the average distance between a camera and the sides of the arena? Thank you!

com_finetune_weights file

Hello Timothy. May I ask you to explain the files used for fine-tuning? Below are my steps for your demo data. Thank you very much for any comments you can give.

  1. com-train ../../configs/com_mouse_config.yaml
    com_finetune_weights: ???
    Q1: Does the COM net use fine-tuned weights for training?
    Q2: If so, where can I find the pre-trained weight files for the mouse COM model?
    Q3: Is it OK if I use your pre-trained mouse weight file to train on our lab's mouse data?

  2. com-predict ../../configs/com_mouse_config.yaml
    Q4: For prediction, does COM use the same pre-trained fine-tune weights as in the first step, or the weights generated by com-train in ../markerless_mouse_2/COM/train_results?

  3. dannce-train ../../configs/dannce_mouse_config.yaml
    dannce_finetune_weights: ???
    Q5: Does DANNCE training use the fine-tune weight file generated by com-train in ../markerless_mouse_2/COM/train_results?

  4. dannce-predict ../../configs/dannce_mouse_config.yaml
    Q6: Which weight file is used by the dannce-predict model?

  5. python ../../dannce/utils/makeStructuredDataNoMocap.py ./DANNCE/predict_results/save_data_AVG0.mat ../../configs/mouse22_skeleton.mat ./label3d_dannce.

predictions.mat File in CAPTURE_demo Code

Thank you for responding so quickly!
I have read through the response you provided, and I have been using the CAPTURE_demo code. However, I am unable to find a file titled predictions.mat in the DANNCE demo code, which is referenced in the preprocess_dannce.m file from the CAPTURE_demo code. I have successfully cloned the master branch of DANNCE and run the DANNCE Quickstart Demo. I did see a file titled predictions.mat at the bottom of the CAPTURE_demo GitHub page, but I am not sure how to generate this file when running DANNCE.

com-train: Videos not detected (IndexError: list index out of range)

Hi,
I am trying to launch COM training but receive the following error; it seems to me the videos are not detected? There are 5 cameras and the videos are in .avi format.

(dannce) E:\DANNCE_test_210608>com-train C:\Users\realtime\dannce\configs\com_mouse_config.yaml
2021-07-08 18:51:47.037025: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
io_config not found in io.yaml file, falling back to main config
batch_size not found in io.yaml file, falling back to main config
epochs not found in io.yaml file, falling back to main config
downfac not found in io.yaml file, falling back to main config
lr not found in io.yaml file, falling back to main config
num_validation_per_exp not found in io.yaml file, falling back to main config
max_num_samples not found in io.yaml file, falling back to main config
com_finetune_weights not found in io.yaml file, falling back to main config
crop_height not found in io.yaml file, falling back to main config
crop_width not found in io.yaml file, falling back to main config
mono not found in io.yaml file, falling back to main config
com_train_dir set to: .\COM\train_results
com_predict_dir set to: .\COM\predict_results
dannce_train_dir set to: .\DANNCE\train_results\AVG
dannce_predict_dir set to: .\DANNCE\predict_results
dannce_predict_model set to: .\DANNCE\train_results\AVG\weights.1200-12.77642.hdf5
exp set to: [{'label3d_file': 'E:/DANNCE_test_210608/20210610_091000_Label3D_dannce.mat'}]
io_config set to: io.yaml
batch_size set to: 2
epochs set to: 3
downfac set to: 2
lr set to: 5e-5
num_validation_per_exp set to: 2
max_num_samples set to: max
com_finetune_weights set to: .\COM\weights
crop_height set to: [0, 960]
crop_width set to: [0, 576]
mono set to: True
base_config set to: C:\Users\realtime\dannce\configs\com_mouse_config.yaml
viddir set to: videos
camnames set to: None
n_channels_out set to: 1
sigma set to: 30
verbose set to: 1
net set to: unet2d_fullbn
gpu_id set to: 0
immode set to: vid
mirror set to: False
loss set to: mask_nan_keep_loss
num_train_per_exp set to: None
augment_hue set to: False
augment_brightness set to: False
augment_hue_val set to: 0.05
augment_bright_val set to: 0.05
augment_rotation_val set to: 5
data_split_seed set to: None
valid_exp set to: None
dsmode set to: nn
debug set to: False
augment_shift set to: False
augment_zoom set to: False
augment_shear set to: False
augment_rotation set to: False
augment_shear_val set to: 5
augment_zoom_val set to: 0.05
augment_shift_val set to: 0.05
start_batch set to: 0
n_channels_in set to: None
extension set to: None
n_views set to: 6
vid_dir_flag set to: None
chunks set to: None
lockfirst set to: None
load_valid set to: None
drop_landmark set to: None
raw_im_h set to: None
raw_im_w set to: None
n_instances set to: 1
start_sample set to: 0
write_npy set to: None
use_npy set to: False
com_predict_weights set to: None
com_debug set to: None
com_exp set to: None
Using the following *dannce.mat files: .\20210610_091000_Label3D_dannce.mat
Setting vid_dir_flag to True.
Setting extension to .avi.
Setting chunks to {'Camera1': array([], dtype=float64), 'Camera2': array([], dtype=float64), 'Camera3': array([], dtype=float64), 'Camera4': array([], dtype=float64), 'Camera5': array([], dtype=float64)}.
Traceback (most recent call last):
  File "C:\Users\realtime\anaconda3\envs\dannce\Scripts\com-train-script.py", line 33, in <module>
    sys.exit(load_entry_point('dannce', 'console_scripts', 'com-train')())
  File "c:\users\realtime\dannce\dannce\cli.py", line 41, in com_train_cli
    params = build_clarg_params(args, dannce_net=False, prediction=False)
  File "c:\users\realtime\dannce\dannce\cli.py", line 87, in build_clarg_params
    params = infer_params(params, dannce_net, prediction)
  File "c:\users\realtime\dannce\dannce\engine\processing.py", line 126, in infer_params
    camf = os.path.join(viddir, video_files[0])
IndexError: list index out of range

I would appreciate help getting started. The labeling was done only for the whole skeleton. As per my understanding, I did not need to manually annotate the COM in Label3D; I just want to confirm that.

Thanks!
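
For reference, the traceback ends in infer_params at camf = os.path.join(viddir, video_files[0]), i.e. the list of video files found for the first camera came back empty. A minimal sanity check, assuming the demo-style layout of one subfolder per camera under videos (e.g. videos/Camera1/0.avi; the folder names here are assumptions):

import os

viddir = "videos"
camnames = ["Camera1", "Camera2", "Camera3", "Camera4", "Camera5"]

for cam in camnames:
    camdir = os.path.join(viddir, cam)
    # infer_params indexes video_files[0], so each folder must hold >= 1 .avi file.
    vids = [f for f in os.listdir(camdir) if f.endswith(".avi")]
    print(cam, "->", vids if vids else "NO VIDEOS FOUND")

If the camera folders are named or placed differently from what DANNCE expects, that would explain the empty list and the resulting IndexError.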
