
Comments (4)

zpbao avatar zpbao commented on June 27, 2024

Hi Matthias,

For the test videos, they are in a separate folder (https://drive.google.com/drive/folders/19NNo-EiTEXwFMFifugRakCGCxP6WcqB4?usp=drive_link) from the training data.

For the ignored videos, yes, we ignore them during training as they contain some extreme lighting or weather conditions (see our supplementary material). The whole folder should contain 200 * 6 videos, and the 924 videos are the ones we used. As for the evaluation video, we only used it for visualization and for tracking scores on wandb; it was not used for testing.

Let me know if there are any other questions.

Best,
Zhipeng

from discovery_obj_move.

mtangemann avatar mtangemann commented on June 27, 2024

Hi Zhipeng,

Ah, I missed the separate folder. Thanks a lot for your fast reply.

As far as I can see, the test videos only come with RGB and masks, but not forward/backward optical flow or depth maps (which some of the models we would like to test need as input). Are optical flow and depth available, and could you share them without much effort?

Thanks,
Matthias


zpbao avatar zpbao commented on June 27, 2024

Hi Matthias,

I checked just now. Sadly, for the test videos we only have ground-truth depth but lack GT flow... We have found that self-supervised flow methods, such as SMURF, can be an alternative if flow is required as input. We also have the camera matrix for each frame, and in principle the flow can be derived from the camera matrices and depth (though it may be hard).
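For reference, here is a rough sketch of how such ego-motion flow could be derived, assuming a pinhole camera model with known intrinsics and a relative pose between consecutive frames. The function name and the pose convention (frame-t camera coordinates mapped to frame-(t+1) camera coordinates) are my own choices, not from the dataset; and note the key caveat, which is likely the "may be hard" part: this only recovers flow induced by camera motion, so independently moving objects would need their own motion annotations.

```python
import numpy as np

def flow_from_depth(depth, K, T_rel):
    """Ego-motion optical flow from depth and camera matrices.

    depth: (H, W) ground-truth depth map for frame t
    K:     (3, 3) camera intrinsics
    T_rel: (4, 4) relative pose mapping frame-t camera coordinates
           to frame-(t+1) camera coordinates

    Caveat: valid only for the static parts of the scene; points on
    independently moving objects violate this derivation.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    # Back-project each pixel to a 3D point in the frame-t camera.
    pts = (pix @ np.linalg.inv(K).T) * depth[..., None]

    # Transform the points into the frame-(t+1) camera.
    pts_h = np.concatenate([pts, np.ones((H, W, 1))], axis=-1)
    pts2 = (pts_h @ T_rel.T)[..., :3]

    # Re-project and subtract the original pixel coordinates.
    proj = pts2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]
    return uv2 - pix[..., :2]  # (H, W, 2) flow in pixels
```

With an identity pose the flow is zero everywhere; a pure sideways camera translation over a constant-depth plane yields a uniform horizontal flow of `fx * tx / depth` pixels.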

Do you still want me to share the depth with you? Some other annotations, including camera matrices and 2D/3D bounding boxes, are also available. Just let me know.

Best,
Zhipeng


mtangemann avatar mtangemann commented on June 27, 2024

Hi Zhipeng,

Thanks a lot for looking into it. I think the easiest option in my case is to split the training set for ablations. The dataset should be large enough for that.

I don't necessarily need the depth for the test videos then, but thanks for your offer to share it. If you have the data anyway and it's easy for you to upload, you might just add it, though. I can imagine that this data could be interesting for researchers working on other tasks (monocular depth estimation etc.).

Thank you,
Matthias

