
Comments (15)

spoonsso commented on August 16, 2024

For our marmoset analyses we finetune the rat MAX weights. You can try finetuning from our marmoset weights (link here), but in our experience with mice, further finetuning an already-finetuned network doesn't always work as well as going from the rat pre-train.

You say you lack hand-labeled data, but you must have some you are using for finetuning, right? How many frames are you using in total? Can you send me your config file so that I can take a look at the other settings you are using?

Are you confident that your COM traces are clean? Plotting the COM is a start, but to really test whether it is clean enough to capture the animal inside the 3D volume, create a new directory (my_new_dir) somewhere and set debug_volume_tifdir: path_to_my_new_dir inside the io.yaml file. Then run dannce-train. Rather than training, it will save all of your training input volumes as .tif files that you can open in ImageJ to check that the animal is completely captured inside.
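For reference, that amounts to one line in io.yaml plus the usual training command (the directory path below is just an example):

    # io.yaml -- the directory path is an example
    debug_volume_tifdir: /path/to/my_new_dir

    # then run training as usual; instead of training, the input volumes are
    # written to that directory as .tif files you can open in ImageJ
    dannce-train your_dannce_config.yaml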

Another thing to keep an eye out for is errors in calibration and frame synchronization across cameras (we still run into this problem when setting up new systems). If you use Label3D, you can check for these issues by labeling a few body parts and making sure the triangulated+reprojected points (press 't' in Label3D after labeling the points) don't deviate from where you labeled them.


Spartan859 commented on August 16, 2024

Actually I'm a student doing research in Dr. Cirong LIU's lab at the Institute of Neuroscience, Shanghai. The config files were attached to the email he sent yesterday evening, at about 7:46.

Thanks for your advice! We will debug the COM traces ASAP.

The calibrations are not that precise, but generally good, with a maximum error of about 1.5 cm.

The frames are synchronized with a maximum error of 20 ms. Since the acquisition rate is 30 fps (about 33 ms per frame), I consider this acceptable.

As for the 'new_n_channels_out' problem, I'll explain it further in an email.


Spartan859 commented on August 16, 2024

Hello, we tried debugging the COM traces with the volume dump, and the debug volumes look strange. One of the three cameras gives normal pictures, while the other two give pictures that look as if they have been stretched, like the following ones.
[Attached debug volume images: 0_15667_cam0, 0_15667_cam1, 0_15667_cam2]

And as I change the vol_size, the amount of stretching changes, while the third camera always outputs a fine picture.

Can you help explain this? Or is it somehow caused by a wrong calibration?

By the way, when I use View3D to view the COM labels, they all seem fine. Does this mean that the COM's 3D coordinates are correct?


spoonsso commented on August 16, 2024

The volumes can look strange due to the normal effects of lens distortion, the angle of view, and the place in the image you are sampling from. But these do look a tad weird. If you scroll through the different slices in the stretched views, do you ever see something that looks like the animal? Also, for camera 3 -- are those corners the top of the arena, and is the brown rectangle the arena floor? If so, how big is the arena? (edit: you could actually see the top & bottom of the arena in the volume if using a top-down view)
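One quick way to see which part of each image a volume is sampling from is to project the eight corners of the COM-centered cube into that camera. This is only a minimal numpy sketch under a plain pinhole model (distortion ignored); R, t, K stand for your per-camera extrinsics and intrinsics:

    import numpy as np

    def project_points(X_world, R, t, K):
        """Pinhole projection of Nx3 world points (mm) to pixel coordinates.
        Assumes camera coordinates are X_cam = R @ X_world + t; no distortion."""
        X_cam = X_world @ R.T + t
        uv = X_cam @ K.T
        return uv[:, :2] / uv[:, 2:3]

    def cube_footprint(com3d, vol_size, R, t, K):
        """Project the 8 corners of the vol_size cube centered on the COM; the
        bounding box of the result is roughly the image region being sampled."""
        h = vol_size / 2.0
        corners = com3d + np.array([(sx, sy, sz) for sx in (-h, h)
                                    for sy in (-h, h) for sz in (-h, h)])
        px = project_points(corners, R, t, K)
        return px.min(axis=0), px.max(axis=0)

If that footprint covers the animal in every camera view, the volume itself should too.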

re: View3D. By com labels, do you mean the predicted COMs (the output from com-predict)? If so, then yes, they would be correct.


Spartan859 commented on August 16, 2024

For the com labels, I mean the predicted ones.

Sorry I didn't mention that: I increased the box because with a 600 mm box I only see the tail of the marmoset. However, if the COMs are right, there's no way a 600 mm box wouldn't cover the whole animal, so I still find it strange.

[Attached debug volume images: 0_29001_cam0, 0_29001_cam1, 0_29001_cam2]

The first two cameras still give incorrect pictures.


Spartan859 commented on August 16, 2024

https://drive.google.com/file/d/11FM-ovqG5ksgY6Rkmuy5SdVIQSVsSEpt/view?usp=sharing


Spartan859 commented on August 16, 2024

The volumes are available from this link.


Spartan859 commented on August 16, 2024

[Screenshot attached] And this is what I see when I view my labels.

Is it possible that Label3D reprojects the 3D points correctly while dannce-train gives a wrong reprojection? Do they use different conventions?
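As a generic illustration only (not a claim about what either tool actually does), one way two reprojections of the same 3D point can disagree is if only one of them applies the lens-distortion model:

    import numpy as np

    def distort(x, y, k1, k2, p1, p2):
        """Standard radial/tangential (Brown-Conrady) distortion applied to
        normalized image coordinates. Coefficients here are illustrative only."""
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return np.array([x_d, y_d])

    # The same normalized point, with vs. without distortion, scaled by a
    # placeholder focal length of 1400 px:
    pt = np.array([0.3, -0.2])
    offset_px = 1400.0 * (distort(pt[0], pt[1], k1=-0.2, k2=0.05, p1=0.001, p2=0.001) - pt)
    print("pixel offset from distortion alone:", offset_px)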


Spartan859 commented on August 16, 2024

We checked the calibrations and confirmed that the extrinsics are correct, using mm as the unit. However, we can't confirm the intrinsics. I tried using MATLAB's camera calibration app to produce intrinsics; it gives values that differ a little from the ones produced by multi-camera-calibration (developed by your team), but I consider them generally OK.
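To put a number on "differs a little", one quick check is to project a few 3D points with each set of intrinsics and compare the resulting pixel coordinates. A minimal numpy sketch; the matrices and points below are placeholders, not our actual calibration:

    import numpy as np

    # Placeholder intrinsics -- substitute the MATLAB and
    # multi-camera-calibration results here.
    K_matlab = np.array([[1400.0, 0.0, 640.0],
                         [0.0, 1400.0, 512.0],
                         [0.0, 0.0, 1.0]])
    K_mcc = np.array([[1385.0, 0.0, 648.0],
                      [0.0, 1390.0, 505.0],
                      [0.0, 0.0, 1.0]])

    # A few 3D points in the camera frame (mm), roughly where the animal sits.
    pts_cam = np.array([[0.0, 0.0, 1000.0],
                        [150.0, -100.0, 1200.0],
                        [-200.0, 80.0, 900.0]])

    def project(K, pts):
        """Pinhole projection (no distortion): u = fx*X/Z + cx, v = fy*Y/Z + cy."""
        uv = (K @ pts.T).T
        return uv[:, :2] / uv[:, 2:3]

    diff = np.linalg.norm(project(K_matlab, pts_cam) - project(K_mcc, pts_cam), axis=1)
    print("per-point pixel difference:", diff)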

To summarize my questions:
Q1: Does Label3D show undistorted frames (using the camera intrinsics), or does it display the original frames from the videos?

Q2: How does dannce-train crop frames according to the volumes and com3d? Does it reproject the 3D labels to the 2D images using the camera parameters and then crop a quadrilateral region of the image centered on the label?
If so, are there any other problems that could cause the frames to be wrongly cropped?
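For what it's worth, the general recipe for sampling an image-aligned volume around a COM looks roughly like the following. This is only a conceptual numpy sketch, not DANNCE's actual implementation; it ignores lens distortion and uses nearest-neighbor sampling:

    import numpy as np

    def sample_volume(image, com3d, vol_size, nvox, R, t, K):
        """Conceptual sketch: build an (nvox, nvox, nvox) grid of world points
        centered on com3d, project each point into the camera, and sample the
        image there. Not DANNCE's actual code."""
        half = vol_size / 2.0
        lin = np.linspace(-half, half, nvox)
        gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
        grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + com3d

        # world -> camera -> pixels (pinhole, ignoring lens distortion)
        X_cam = grid @ R.T + t
        uv = X_cam @ K.T
        uv = uv[:, :2] / uv[:, 2:3]

        # nearest-neighbor sampling, clamped to the image bounds
        h, w = image.shape[:2]
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
        return image[v, u].reshape(nvox, nvox, nvox, -1)

The larger vol_size is, the farther from the projected COM the sampled pixels reach in each image, so the sampled region covers a bigger part of the frame.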


Spartan859 commented on August 16, 2024

/home/xuchun/dannce/dannce/engine/generator.py:487: UserWarning: Note: ignoring dimension mismatch in 3D labels
warnings.warn(msg)

Also, does this warning indicate a critical error?


Spartan859 commented on August 16, 2024

Q3: I notice that when I reduce the vol_size, the results look better.
When I set vol_size to 100 (of course the marmoset isn't fully covered), the output images are all centered on the animal's COM, and the main body fills most of the picture.
When I increase it to 300, the center of the picture begins to shift. And when it is 600, the pictures look like the ones in the file I sent.
How could this happen? If the COM label were wrong, why would decreasing vol_size bring the marmoset into view rather than pushing it completely out?
Thanks for helping!

The debug vols of vol_size:100 are as follows.

[Attached debug volume images: 0_16301_cam0, 0_16301_cam1, 0_16301_cam2]


spoonsso commented on August 16, 2024

I just checked one set of your volumes with ImageJ's 3D viewer, and there is reasonable convergence of matching body features in 3D space, so I think they are probably fine.

I suggest trying a few things:

  1. Based on the config files you sent me, it looks like you are finetuning in "AVG" mode. For our marmoset analyses, we finetuned in "MAX" mode. Also, for our marmoset analyses, we finetuned from a MAX network pretrained on Rat 7M -- are you using the pretrained 3 cam MAX weights that I linked to? (See the config sketch after this list.)
  2. As I explained in my e-mail, we have never tried training DANNCE using just a single landmark, and we believe using multiple landmarks is an important source of information during training. If you don't want to label the full pose right now, why don't you try combining your single head labels with our labeled marmoset data that I sent you?
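For point 1, a minimal sketch of the relevant dannce config settings. The key names here are an assumption written from memory of the demo configs, so please double-check them against your own files:

    # dannce config sketch -- key names assumed, please verify
    net_type: MAX                                        # instead of AVG
    train_mode: finetune
    dannce_finetune_weights: /path/to/rat7m_3cam_MAX_weights/
    new_n_channels_out: 1                                # single head landmark; increase if you
                                                         # add the labeled marmoset data from point 2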


spoonsso commented on August 16, 2024

Also, I do agree the volume is a bit big for your animal, especially if you are just trying to get the head (and not also the tip of the tail). Another thing to try would be reducing your vol_size to 400 mm.
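For a rough sense of scale, under a pinhole approximation the projected width of the sampling cube grows like focal_length * vol_size / distance. The numbers below are made up; only the proportionality matters:

    # Toy numbers: focal length in pixels and camera-to-animal distance in mm
    f_px = 1400.0
    Z_mm = 1200.0
    for vol_size in (100, 300, 400, 600):
        width_px = f_px * vol_size / Z_mm
        print(f"vol_size={vol_size} mm -> ~{width_px:.0f} px across in the image")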


Spartan859 commented on August 16, 2024

based on the config files you sent me, it looks like you are finetuning in "AVG" mode. For our marmoset analyses, we finetuned in "MAX" mode. Also, for our marmoset analyses, we finetuned from a MAX network pretrained on Rat 7M -- are you using the pretrained 3 cam MAX weights that I linked to?

A1: Yes, we use AVG mode, and we use the pretrained 3 cam MAX weights. We have actually tried MAX mode as well: the loss is really small, but the predictions are still unacceptable. So I suspect the main problem may be single-landmark tracking.

As I explained in my e-mail, we have never tried training DANNCE using just a single landmark, and we believe using multiple landmarks is an important source of information during training. If you don't want to label the full pose right now, why don't you try combining your single head labels with our labeled marmoset data that I sent you?

A2: Thanks for the advice; we'll test multiple labels soon and let you know whether it works.

Thank you for helping us these days!


spoonsso commented on August 16, 2024

No problem, please keep me posted. Happy to help.

