
ovir-3d's People

Contributors

changhaonan, shiyoung77


ovir-3d's Issues

Evaluation code for ycb_video dataset

Hi, thank you so much for releasing your evaluation code for ScanNet200! Could you also release your evaluation code for ycb_video? The ScanNet200 dataset takes too much time to download.

Evaluation code

Hi, thank you so much for releasing your evaluation code!

I am running your evaluation code, but I seem to be getting NaN values. Any idea why this might be happening?

[screenshot: NaN values in the evaluation output]

I am using a subset of the entire validation set, so perhaps that is causing the issue?

Instance GT for Scannet200

Hi, could you provide some guidance on how you generated the ground-truth instance labels for each point in the point cloud for your ScanNet200 dataset? I am trying to do the same for the training split.

I notice that there are labels in the aggregation.json file that comes with every scanned scene in ScanNet200. Can I simply take these as the GT instance labels?
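For reference, the mapping from ScanNet's aggregation files to per-point instance labels is often done roughly as sketched below. This is a hedged sketch, not the authors' code: `per_point_instance_labels` is a hypothetical helper, and the JSON keys (`segGroups`, `segments`, `objectId`, `segIndices`) should be checked against the actual `*.aggregation.json` and `*.segs.json` files shipped with each scene.

```python
import json
import numpy as np

def per_point_instance_labels(agg_path, segs_path):
    """Map each mesh vertex to an instance id using ScanNet's aggregation files.

    aggregation.json groups over-segmentation segments into object instances;
    segs.json assigns every vertex to one of those segments. Vertices not
    covered by any instance get the label -1.
    """
    with open(agg_path) as f:
        seg_groups = json.load(f)["segGroups"]
    with open(segs_path) as f:
        seg_indices = np.asarray(json.load(f)["segIndices"])

    labels = np.full(len(seg_indices), -1, dtype=np.int64)
    for group in seg_groups:
        # select all vertices whose segment belongs to this instance
        mask = np.isin(seg_indices, group["segments"])
        labels[mask] = group["objectId"]
    return labels
```

Note that the aggregation.json labels alone are not per-point: they only group over-segmentation segments, so the segs.json vertex-to-segment mapping is needed to push them down to individual points.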

Thank you so much ;))

Files for scannet dataset evaluation

Great project! I am trying to run your codebase on the ScanNet200 dataset. I notice that the code expects a config.json file, but one does not come with the ScanNet dataset. How should I generate this config.json for ScanNet200?

Additionally, some files required for evaluating on ScanNet200 that are mentioned in the README seem to be missing. May I know when they will be released, or how I can get access to them?

Thank you so much ;))

Empty mask error

Hi, I notice that the current algorithm throws an error when it receives a frame for which Detic predicts no masks at all.

Traceback (most recent call last):
  File "/mnt/src/proposed_fusion.py", line 633, in <module>
    main()
  File "/mnt/src/proposed_fusion.py", line 599, in main
    instance_pt_count, instance_features, instance_detections = instance_fusion(
  File "/mnt/src/proposed_fusion.py", line 283, in instance_fusion
    pred_masks = resolve_overlapping_masks(pred_masks, pred_scores, device=device)
  File "/mnt/src/proposed_fusion.py", line 233, in resolve_overlapping_masks
    indices = ((scores == torch.max(scores, dim=0, keepdim=True).values) & pred_masks).nonzero()
IndexError: max(): Expected reduction dim 0 to have non-zero size.

How should I resolve this bug? I was thinking of simply skipping the iteration when the mask dimension is 0, but I am not sure whether that would affect the rest of the algorithm. Thanks!
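For illustration, the proposed guard could look like the sketch below. It is written with NumPy so it runs standalone; the repository uses PyTorch, and both `safe_resolve` and this simplified `resolve_overlapping_masks` are hypothetical stand-ins for the real functions, not the actual implementation.

```python
import numpy as np

def resolve_overlapping_masks(pred_masks, pred_scores):
    """For each pixel covered by several masks, keep only the
    highest-scoring one (the same max-reduction that raises in the
    traceback when there are zero masks)."""
    scores = pred_scores[:, None, None] * pred_masks          # (N, H, W)
    best = scores == scores.max(axis=0, keepdims=True)
    return best & pred_masks

def safe_resolve(pred_masks, pred_scores):
    # Guard: an empty Detic frame gives pred_masks.shape[0] == 0, and a
    # max-reduction over that axis has nothing to reduce, so skip the
    # overlap resolution entirely and return the empty mask stack.
    if pred_masks.shape[0] == 0:
        return pred_masks
    return resolve_overlapping_masks(pred_masks, pred_scores)
```

Since an empty frame contributes no detections to fuse, skipping it should leave the already-fused instances untouched, but that assumption is worth confirming against the rest of the pipeline.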

Question about the qualitative results on ScanNet200 dataset

Hi @shiyoung77,

Thank you for your amazing work and the published code. I've encountered an issue while using your method on ScanNet200. Specifically, when visualizing all the generated instances in a scene (scene0011_00 in the validation split), I see the following:
[screenshot: fused instance visualization of scene0011_00]
I've observed that many objects are fragmented into multiple parts, which is considerably worse than the results in your paper. I'm wondering if something might be missing in the implementation.

Thank you and looking forward to your reply.

Instance Representation

Hi, just wanted to confirm my understanding: it seems like the 3D instance segmentations can have overlapping points? Is this intended, i.e. can each point in the point cloud be assigned to multiple instance segmentations?
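For illustration, overlap can be checked directly by counting per-point assignments. This is a sketch that assumes instances are stored as arrays of point indices into the scene point cloud; `check_overlap` is a hypothetical helper, not part of the repository.

```python
import numpy as np

def check_overlap(instances, num_points):
    """Count how many instances each point is assigned to, given each
    instance as an array of point indices into the scene point cloud."""
    counts = np.zeros(num_points, dtype=np.int64)
    for pt_indices in instances:
        counts[pt_indices] += 1
    return counts  # counts > 1 marks points shared by several instances

# point 2 is claimed by both instances below
counts = check_overlap([np.array([0, 1, 2]), np.array([2, 3])], num_points=5)
```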

Understanding Filter by Instance Size

Hi, thank you so much for updating your codebase and fixing the issues!

I wanted to ask about the recent change that led to the drastic improvement. Besides changing the depth threshold for occlusion detection, the filtering by size was changed from a fixed threshold to the median instance size. Does this mean that roughly half the instances are removed in each filtering round?
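For illustration, a median-size filter of the kind described might look like the sketch below (`filter_by_median_size` is a hypothetical helper, not the repository's code). Note that because instances at or above the median are kept, including ties, such a filter removes at most half of the instances rather than exactly half.

```python
import numpy as np

def filter_by_median_size(instance_sizes):
    """Return the indices of instances whose point count is at least the
    median instance size; smaller instances are dropped."""
    sizes = np.asarray(instance_sizes)
    threshold = np.median(sizes)
    return np.flatnonzero(sizes >= threshold)
```

For example, with sizes [10, 200, 50, 300, 50] the median is 50, so only the first instance is removed and four of five survive.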
