Comments (15)
For our marmoset analyses we finetune the rat MAX weights. You can try finetuning from our marmoset weights (link here), but in our experience with mice, further finetuning an already-finetuned network doesn't always work as well as going from the rat pre-train.
You say you lack hand-labeled data, but you must have some you are using for finetuning, right? How many frames are you using in total? Can you send me your config file so that I can take a look at the other settings you are using?
Are you confident that your COM traces are clean? Plotting the COM is a start, but to really test whether it is clean enough to correctly capture the animal inside the 3D volume, create a new directory (`my_new_dir`) somewhere, set `debug_volume_tifdir: path_to_my_new_dir` inside the `io.yaml` file, and then run `dannce-train`. Rather than training, it will instead save all of your training input volumes as .tif files that you can open in ImageJ to make sure the animal is completely captured inside.
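A minimal sketch of the `io.yaml` addition (the path is a placeholder; everything else in your `io.yaml` stays as it is):

```yaml
# io.yaml -- add this one key to dump debug volumes; remove it to train normally.
# path_to_my_new_dir is a placeholder for the directory you created.
debug_volume_tifdir: path_to_my_new_dir
```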
Another thing to keep an eye out for is errors in calibration and frame synchronization across cameras (we still run into this problem when setting up new systems). If you use Label3D, you can check for these issues by labeling a few body parts and making sure the triangulated+reprojected points (press 't' in Label3D after labeling the points) don't deviate from where you labeled them.
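To make that check concrete, here is a hedged sketch (not Label3D's actual code) of what triangulate-plus-reproject amounts to, using synthetic pinhole cameras: with consistent calibration the reprojected points land back on the labels, while calibration or sync errors show up as large pixel deviations.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reproject(P, X):
    """Project a 3D point into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras (placeholder intrinsics/extrinsics, not a real rig).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), [[0], [0], [1000]]])
R2 = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])  # rotated 90 deg about y
P2 = K @ np.hstack([R2, [[0], [0], [1000]]])

X_true = np.array([50.0, -30.0, 20.0])          # a "labeled" body part
x1, x2 = reproject(P1, X_true), reproject(P2, X_true)
X_hat = triangulate(P1, P2, x1, x2)
err = max(np.linalg.norm(reproject(P1, X_hat) - x1),
          np.linalg.norm(reproject(P2, X_hat) - x2))
# With a consistent calibration the reprojection error is ~0 px; a large
# error here would point to calibration or synchronization problems.
print(err)
```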
from dannce.
Actually, I'm a student doing research in Dr. Cirong Liu's lab at the Institute of Neuroscience, Shanghai. The config files were attached to an email he sent yesterday evening, at about 7:46.
Thanks for your advice! We will debug the COM traces ASAP.
The calibrations are not extremely precise, but generally good, with a maximum error of about 1.5 cm.
The frames are synchronized with a maximum error of 20 ms. Since the acquisition rate is 30 fps, I consider this acceptable.
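For reference, the arithmetic behind that judgment (values from the comment above; whether 0.6 of a frame is acceptable depends on how fast the animal moves between frames):

```python
# Sanity arithmetic on the synchronization tolerance (assumption: the
# 20 ms figure is the worst-case offset between any camera pair).
fps = 30
frame_period_ms = 1000 / fps          # ~33.3 ms between frames
sync_error_ms = 20

# The offset is under one frame period, so paired frames are at most one
# frame apart -- but it is more than half a frame, so a fast-moving
# animal could still shift noticeably between views.
fraction_of_frame = sync_error_ms / frame_period_ms   # 0.6 of a frame
print(frame_period_ms, fraction_of_frame)
```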
As for the 'new_n_channels_out' problem, I'll explain it further in an email.
Hello, we tried debugging the COM traces, and the debug volumes look strange. One of the three cameras outputs normal pictures, while the other two give pictures that look stretched, like the ones below.
As I change the vol_size, the amount of stretching changes, while the third camera always outputs a fine picture.
Can you help explain this? Could it be caused by a wrong calibration?
By the way, when I use View3D to view the COM labels, they all look fine. Does this mean the COMs' 3D coordinates are correct?
The volumes can look strange due to normal effects of lens distortion and the angle of view and the place in the image you are sampling from. But these do look a tad weird. If you scroll through the different slices in the stretched views, do you ever see something that looks like the animal? Also, for camera 3 -- are those corners the top of the arena and is the brown rectangle the arena floor? If so, how big is the arena? (edit: you could actually see the top & bottom of the arena in the volume if using a top-down view)
re: View3D. By com labels, do you mean the predicted COMs (the output from com-predict)? If so, then yes, they would be correct.
For the COM labels, I mean the predicted ones.
Sorry I didn't note that earlier: I increased the volume to the 600 mm box because at a smaller size I only saw the tail of the marmoset. However, if the COMs are right, there is no way the 600 mm box wouldn't cover the whole animal, so I still find it strange.
The first two cameras still give incorrect pictures.
https://drive.google.com/file/d/11FM-ovqG5ksgY6Rkmuy5SdVIQSVsSEpt/view?usp=sharing
The volumes are available from this link.
And this is what I see when I view my labels.
Is it possible that Label3D reprojects the 3D points correctly, while dannce-train produces a wrong reprojection? Do they use different conventions?
We checked the calibrations and confirmed that the extrinsics are correct, using mm as the unit. However, we can't confirm the intrinsics. I tried using MATLAB's calibration app to produce intrinsics; it gives values that differ a little from those produced by multi-camera-calibration (developed by your team), but I consider them generally OK.
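One hedged guess at where small intrinsics differences can come from when comparing tools: MATLAB's `cameraParameters` stores the intrinsic matrix transposed (row-vector convention, `x = X*K`) and uses 1-based pixel coordinates, whereas many other pipelines use column vectors and 0-based pixels. A sketch of the conversion, with made-up numbers:

```python
# Illustration only -- these numbers are invented, not from any real rig.
import numpy as np

# MATLAB-style layout: [fx 0 0; s fy 0; cx cy 1], principal point 1-based.
K_matlab = np.array([[1400.0,    0.0, 0.0],
                     [   5.0, 1398.0, 0.0],   # skew sits on row 2 in this layout
                     [ 641.0,  361.0, 1.0]])  # principal point on row 3

# Convert to the column-vector, 0-based convention before comparing tools:
K = K_matlab.T.copy()
K[0, 2] -= 1.0   # 1-based -> 0-based principal point
K[1, 2] -= 1.0
print(K)
```

If the two tools' matrices agree after accounting for these conventions, the intrinsics themselves are probably consistent.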
I'll summarize my questions here:
Q1: Does Label3D show undistorted frames (using the camera intrinsics), or the original frames from the videos?
Q2: How does dannce-train crop frames according to the volumes and com3d? Does it reproject the 3D labels into the 2D images using the camera parameters, and then crop a quadrilateral region of the image centered on the label?
If so, are there any other problems that could cause the frames to be wrongly cropped?
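My reading of Q2, sketched as code (an illustration with a synthetic pinhole camera, not DANNCE's actual generator): build the cube of side `vol_size` around the 3D COM, project its corners into a view, and take the 2D region that view must sample from.

```python
import itertools
import numpy as np

def view_bbox(P, com3d, vol_size):
    """2D bounding box of the projected vol_size cube centered on com3d."""
    half = vol_size / 2.0
    corners = np.array([com3d + half * np.array(s)
                        for s in itertools.product([-1, 1], repeat=3)])
    pts = (P @ np.hstack([corners, np.ones((8, 1))]).T).T
    pts2d = pts[:, :2] / pts[:, 2:3]          # perspective divide
    return pts2d.min(axis=0), pts2d.max(axis=0)

# Placeholder camera, 1.5 m from the COM, no lens distortion.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), [[0], [0], [1500]]])

lo, hi = view_bbox(P, com3d=np.zeros(3), vol_size=600.0)
print(lo, hi)   # pixel region this view contributes to the volume
```

With lens distortion and oblique camera angles this region becomes a warped quadrilateral rather than a rectangle, which is consistent with some views looking "stretched".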
```
/home/xuchun/dannce/dannce/engine/generator.py:487: UserWarning: Note: ignoring dimension mismatch in 3D labels
  warnings.warn(msg)
```
Also, does this warning indicate a critical error?
Q3: I notice that when I reduce the vol_size, the results look better.
When I set vol_size to 100 (of course the marmoset isn't fully covered), the output images are all centered on the animal's COM, and the main body fills most of the picture.
When I increase it to 300, the center of the picture begins to shift, and at 600 the pictures look like the ones in the file I sent.
How can this happen? If the COM label were wrong, why would decreasing the vol_size bring the marmoset into view rather than move it completely out of view?
Thanks for helping!
The debug volumes for vol_size: 100 are as follows.
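One hedged explanation consistent with Q3: if a camera's calibration is slightly off, the COM (at the volume center) still projects close to the right place, but the projection error of voxels grows with their distance from the COM, so small volumes look fine while large ones look stretched. An illustrative sketch with a simplified fronto-parallel pinhole model (all numbers made up):

```python
def project(f, x, cam_dist=1500.0):
    """Pixel offset from the principal point of a point at lateral offset
    x (mm), in a plane cam_dist (mm) from the camera."""
    return f * x / cam_dist

f_true, f_wrong = 1000.0, 1050.0   # assume a 5% focal-length error

# Half-widths of vol_size 100, 300, 600 volumes; the COM sits at x = 0.
errs = [abs(project(f_wrong, h) - project(f_true, h))
        for h in (50.0, 150.0, 300.0)]
com_err = abs(project(f_wrong, 0.0) - project(f_true, 0.0))
print(com_err, errs)   # COM error is 0 px; edge error grows with vol_size
```

Under this toy model, the volume-edge error scales linearly with vol_size while the COM itself stays put, matching the observation that vol_size 100 looks centered but 600 does not.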
I just checked one set of your volumes with ImageJ's 3D viewer, and there is reasonable convergence of matching body features in 3D space, so I think they are probably fine.
I suggest trying a few things:
- based on the config files you sent me, it looks like you are finetuning in "AVG" mode. For our marmoset analyses, we finetuned in "MAX" mode. Also, for our marmoset analyses, we finetuned from a MAX network pretrained on Rat 7M -- are you using the pretrained 3 cam MAX weights that I linked to?
- As I explained in my e-mail, we have never tried training DANNCE using just a single landmark, and we believe using multiple landmarks is an important source of information during training. If you don't want to label the full pose right now, why don't you try combining your single head labels with our labeled marmoset data that I sent you?
Also, I do agree the volume is a bit big for your animal, especially if you are just trying to get the head (and not also the tip of the tail). Another thing to try would be reducing your vol_size to 400 mm.
> based on the config files you sent me, it looks like you are finetuning in "AVG" mode. For our marmoset analyses, we finetuned in "MAX" mode. Also, for our marmoset analyses, we finetuned from a MAX network pretrained on Rat 7M -- are you using the pretrained 3 cam MAX weights that I linked to?
A1: Yes, we use AVG mode, with the pretrained 3-cam MAX weights. We have actually tried MAX mode: the loss is really small, but prediction still gives an unacceptable result. So I suspect the main problem is single-landmark tracking.
> As I explained in my e-mail, we have never tried training DANNCE using just a single landmark, and we believe using multiple landmarks is an important source of information during training. If you don't want to label the full pose right now, why don't you try combining your single head labels with our labeled marmoset data that I sent you?
A2: Thanks for the advice; we'll test multiple labels soon, and I'll let you know whether it works.
Thank you for helping us these days!
No problem, please keep me posted. Happy to help.