Hi Matthias,
The test videos are in a separate folder (https://drive.google.com/drive/folders/19NNo-EiTEXwFMFifugRakCGCxP6WcqB4?usp=drive_link) from the training data.
For the ignored videos: yes, we ignore them during training as they contain some extreme lighting or weather conditions (see our supplementary material). The whole folder contains 200 * 6 videos, and the 924 videos are the ones we used. As for the evaluation video, we only used it for visualization and tracking scores on wandb; we did not test on it.
Let me know if there are any other questions.
Best,
Zhipeng
from discovery_obj_move.
Hi Zhipeng,
Ah I missed the separate folder, thanks a lot for your fast reply.
As far as I can see, the test videos only come with RGB and masks, but not forward/backward optical flow or depth maps (which some of the models we would like to test need as input). Are optical flow and depth available, and can you share them without much effort?
Thanks,
Matthias
Hi Matthias,
I checked just now. Sadly, for the test videos we only have ground-truth depth but no GT flow... We have verified that self-supervised flow methods, such as SMURF, can serve as an alternative when flow is required as input. We also have the camera matrix for each frame, and in principle the flow can be derived from the camera matrix and depth (though it may be hard).
Do you still want me to share the depth with you? Some other annotations, including the camera matrices and 2D/3D bounding boxes, are also available. Just let me know.
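For reference, the derivation mentioned above (flow from depth plus per-frame camera matrices) can be written down for the rigid parts of the scene. This is a minimal sketch, not the authors' code: the pinhole intrinsics `K` and the relative pose `T_rel` between frames are assumptions, since the dataset's camera-matrix format is not specified in this thread, and independently moving objects violate the static-scene assumption and would get incorrect flow.

```python
import numpy as np

def flow_from_depth(depth, K, T_rel):
    """Rigid-scene forward optical flow from depth and camera motion.

    depth: (H, W) per-pixel depth in frame t (assumed metric)
    K:     (3, 3) pinhole intrinsics (assumed shared by both frames)
    T_rel: (4, 4) hypothetical relative pose mapping frame t's camera
           coordinates into frame t+1's camera coordinates
    Returns (H, W, 2) flow in pixels. Only valid for static scene points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    # Back-project each pixel to a 3D point in frame t's camera frame.
    pts = (pix @ np.linalg.inv(K).T) * depth[..., None]

    # Transform the points into frame t+1's camera frame.
    pts_h = np.concatenate([pts, np.ones((H, W, 1))], axis=-1)
    pts2 = (pts_h @ T_rel.T)[..., :3]

    # Re-project into frame t+1 and take the pixel displacement.
    proj = pts2 @ K.T
    proj = proj[..., :2] / proj[..., 2:3]
    return proj - pix[..., :2]
```

For a purely translating camera and constant depth, the flow is simply `fx * tx / depth` pixels, which makes the sketch easy to sanity-check; in practice the hard part Zhipeng alludes to is that moving objects need their own per-object motion on top of the camera motion.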
Best,
Zhipeng
Hi Zhipeng,
Thanks a lot for looking into it. I think the easiest option in my case is to split the training set for doing ablations; the dataset should be large enough for that.
I don't necessarily need the depth for the test videos then, but thanks for your offer to share it. If you have the data anyway and it's easy for you to upload, you might just add it, though; I can imagine this data would be interesting for researchers working on other tasks (monocular depth estimation, etc.).
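Splitting the training videos for ablations, as suggested above, is straightforward; a deterministic sketch (the video-ID list and 10% validation fraction are illustrative assumptions, not part of the dataset's actual layout) might look like:

```python
import random

def split_videos(video_ids, val_frac=0.1, seed=0):
    """Deterministic train/val split of training video IDs for ablations.

    Sorting before shuffling with a fixed seed makes the split
    reproducible regardless of the order the IDs were collected in.
    """
    ids = sorted(video_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_val = max(1, int(len(ids) * val_frac))
    return ids[n_val:], ids[:n_val]
```

Splitting at the video level (rather than the frame level) avoids leaking near-duplicate frames from the same clip across the split.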
Thank you,
Matthias