Comments (13)

xksteven commented on August 23, 2024

Hello

We did indeed train two different models, one for each training set. We offer the pretrained weights for the BDD dataset via the Google Drive link in the README.

from anomaly-seg.

matteosodano commented on August 23, 2024

Ah, perfect. Thanks!

One further question: which model corresponds to the pretrained weights you shared? Is the code available in the semantic-segmentation submodule?

xksteven commented on August 23, 2024

The shared model was for the BDD dataset.
Yup, the code we used is pinned to the linked commit of the semantic-segmentation submodule. We updated our codebase to run with a newer version of the submodule, but we no longer have time to keep it in sync with further changes made to the submodule.

matteosodano commented on August 23, 2024

Yep, I meant: which encoder and which decoder did you use for those weights? I saw there are many available in the submodule, and the .pth file names don't point to any one in particular.

xksteven commented on August 23, 2024

We do list the architecture we used in the paper: ResNet101dilated + PPM_deepsup.

matteosodano commented on August 23, 2024

Right. Thanks again!

matteosodano commented on August 23, 2024

Hi, sorry, but I have another question that I can't seem to figure out.

In the paper, you write:
We source BDD-Anomaly from BDD100K. [...] There are 18 original classes. We choose motorcycle, train, and bicycle as anomalous objects classes and remove all images with these objects from the training and validation sets.

The original classes should be 19. But in any case, this should lead to a network that segments 15 classes (or 16, if we count from 19 original ones). However, the pretrained weights you shared in the GDrive folder are for a ResNet101dilated + PPM_deepsup that segments only 13 classes:

RuntimeError: Error(s) in loading state_dict for PPMDeepsup:
	size mismatch for conv_last.4.weight: copying a param with shape torch.Size([13, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 512, 1, 1]).
	size mismatch for conv_last.4.bias: copying a param with shape torch.Size([13]) from checkpoint, the shape in current model is torch.Size([16]).
	size mismatch for conv_last_deepsup.weight: copying a param with shape torch.Size([13, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 512, 1, 1]).
	size mismatch for conv_last_deepsup.bias: copying a param with shape torch.Size([13]) from checkpoint, the shape in current model is torch.Size([16]).

Did I get something wrong? Which classes are you using?
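
(Side note for anyone hitting the same mismatch: the class count a checkpoint was trained for can be read straight off the final classifier weight, since its shape is [num_classes, channels, 1, 1]. A minimal sketch, assuming a plain PyTorch state_dict and using the key name from the error above; the file name in the usage comment is only illustrative:)

```python
def checkpoint_num_classes(state_dict, key="conv_last.4.weight"):
    """Read the number of output classes from the final classifier weight.

    The last conv layer's weight has shape [num_classes, channels, 1, 1],
    so its first dimension is the class count. The key name matches the
    error above; other decoders may name the layer differently.
    """
    return state_dict[key].shape[0]

# Usage with a real checkpoint (requires PyTorch; file name illustrative):
# ckpt = torch.load("decoder_checkpoint.pth", map_location="cpu")
# print(checkpoint_num_classes(ckpt))
```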

Thanks again for the prompt support!

xksteven commented on August 23, 2024

I'll double-check the model weights, as we may have uploaded the weights for the CAOS model. If so, I'll check whether we still have the pretrained weights for BDD.

The code is currently set up with the defaults for the CAOS dataset, so you might have to change a few things in the defaults.py file.

Also, you're correct about the typo: it's 19 classes, or 20 if you include the unlabeled pixels.

xksteven commented on August 23, 2024

Okay, I just confirmed: I was previously incorrect, and the uploaded model was for the CAOS dataset, which is why it has 13 output classes.

xksteven commented on August 23, 2024

I can upload the BDD model weights too and reupload the model_weights folder to make it clearer which weights belong to which model.

Unfortunately, I can't confirm whether the old BDD weights I have saved are from a ResNet-50 or a ResNet-101, so if I upload them, could you test them for me?

matteosodano commented on August 23, 2024

Sure, I can take a look. Let me know when you've uploaded them.
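
For a quick check before wiring up either model, the depth can usually be inferred from the checkpoint keys alone: ResNet-50 has 6 bottleneck blocks in layer3, while ResNet-101 has 23. A minimal sketch, assuming torchvision-style key names ("layer3.<idx>…"); this codebase may prefix the keys differently, so treat it as a heuristic:

```python
import re

def resnet_depth_from_keys(keys):
    """Guess ResNet depth by counting the distinct block indices in layer3.

    ResNet-50 has 6 bottleneck blocks in layer3, ResNet-101 has 23.
    Assumes torchvision-style names like "layer3.<idx>.conv1.weight";
    other codebases may prefix the keys differently (heuristic only).
    """
    blocks = set()
    for k in keys:
        m = re.search(r"layer3\.(\d+)\.", k)
        if m:
            blocks.add(int(m.group(1)))
    return {6: "resnet50", 23: "resnet101"}.get(len(blocks), "unknown")
```

With a real checkpoint this would be `resnet_depth_from_keys(torch.load(path, map_location="cpu").keys())`.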

xksteven commented on August 23, 2024

Updated the model weights; the link is posted here as well: https://drive.google.com/file/d/1HIQAhX8WIokZpymslUPDmpbUWdYEKEQ3/view?usp=share_link

matteosodano commented on August 23, 2024

The BDD model weights also correspond to the ResNet101dilated encoder with the PPM deepsup decoder!
Thanks for sharing them.
