pokaxpoka / deep_Mahalanobis_detector
Code for the paper "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks".
In the paper you mention that you validate the hyperparameters for the input processing (the FGSM magnitude) and the feature ensemble using adversarial samples (the right part of Table 2 in the paper). I think this validation makes more sense than validation using OOD samples, since, as you say, those samples are often inaccessible a priori.
I cannot seem to find the part of the code for this validation, and was wondering specifically how you validate the FGSM magnitude when you use adversarial samples: the in-distribution samples will also be preprocessed with FGSM in the same way as the adversarial samples, correct? Then I guess the only difference between the in-distribution and adversarial samples is that the adversarial samples go through one extra FGSM optimization step?
If you could clarify or point me to the code section, that would be great.
BTW nice work!
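For readers of this thread: the "FGSM magnitude" above refers to the input pre-processing step from the paper, where each test input is nudged in the direction that increases its Mahalanobis confidence score. Below is a minimal sketch of that step, not the repository's exact code; `mahalanobis_score` is a hypothetical callable returning that score per sample.

```python
# Hedged sketch of the FGSM-style input pre-processing whose magnitude is tuned:
# perturb the input so that the Mahalanobis confidence score of the closest
# class increases. `mahalanobis_score` is a hypothetical callable.
import torch

def preprocess_input(x, mahalanobis_score, magnitude):
    x = x.clone().detach().requires_grad_(True)
    score = mahalanobis_score(x).mean()   # confidence score of the closest class
    score.backward()                      # gradient of the score w.r.t. the input
    # move every pixel by `magnitude` in the direction that raises the score
    return (x + magnitude * x.grad.sign()).detach()
```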
While reading the paper I struggle to understand the following:
how do I compute an AUROC score using the M(x) distance score? If the ground truth is 1 for in-distribution and 0 for out-of-distribution, how do I compute an AUROC when M(x) is, e.g., -639.2 (i.e., not a probability)?
Thanks for your help!
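For anyone with the same question: AUROC only needs a real-valued score that ranks in-distribution samples above OOD samples; it does not need probabilities. A minimal sketch with hypothetical score values:

```python
# AUROC from raw Mahalanobis scores M(x): roc_auc_score accepts any real-valued
# score whose larger values mean "more in-distribution". Toy values below.
import numpy as np
from sklearn.metrics import roc_auc_score

scores_in = np.array([-639.2, -512.7, -480.1])     # M(x) on in-distribution inputs (label 1)
scores_out = np.array([-1250.4, -980.3, -1103.9])  # M(x) on OOD inputs (label 0)

labels = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
scores = np.concatenate([scores_in, scores_out])
print(roc_auc_score(labels, scores))  # 1.0 for this toy example
```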
Would you consider adding the scripts you used to train the ResNet and DenseNet models?
Can you explain how we can implement the Mahalanobis score in semantic segmentation for OOD detection?
Could you please update them?
You merged the in-distribution and out-of-distribution test sets and split out new train/val/test sets for the LR based on the Mahalanobis score. However, you don't do it the same way for ODIN and temperature scaling. Is that fair? At the least, I suppose you could use the same subset to report and compare AUC.
I'm trying to use this repository to check which adversarial examples are detected. Is this possible? I'm struggling with it.
Dear author,
May I know how you select random_noise_size for different network architectures and datasets?
I'm going to use another dataset; how should I set random_noise_size, min_pixel, and max_pixel?
Thank you very much!
YH
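A hedged note on part of this question: min_pixel and max_pixel appear to be the smallest and largest values a pixel can take after the dataset's normalization, so for a new dataset they can be derived from its mean/std. The statistics below are common CIFAR-style values used only as an example; the choice of random_noise_size is not covered here.

```python
# One possible way to derive min_pixel / max_pixel for a new dataset, assuming
# they bound the pixel range after Normalize(mean, std). Example statistics only.
import numpy as np

mean = np.array([0.4914, 0.4822, 0.4465])
std = np.array([0.2470, 0.2435, 0.2616])

min_pixel = float(((0.0 - mean) / std).min())  # most negative value after normalization
max_pixel = float(((1.0 - mean) / std).max())  # largest value after normalization
print(min_pixel, max_pixel)
```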
Should we assume that the copyright for this repository is held by the authors, or have the authors kindly released it under some open-source license such as MIT?
best regards
Hi, many thanks for your work.
We would like to compare our proposed method with this work.
In our setting, the model has no access to the OOD data;
can we therefore train a one-class logistic regressor with your code?
Looking forward to your reply, and many thanks for your creative work.
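A hedged sketch of what such a setup could look like (this is not part of the repository, and a standard logistic regressor cannot be fit on a single class): one option is to fit a one-class model such as sklearn's OneClassSVM on the per-layer Mahalanobis scores of in-distribution data only. All array names and values below are hypothetical placeholders.

```python
# Hypothetical one-class alternative to the OOD-supervised logistic regressor:
# fit OneClassSVM on layer-wise Mahalanobis scores of in-distribution data only.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
scores_in = rng.normal(-600, 50, size=(1000, 5))    # placeholder in-distribution scores (N, layers)
detector = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(scores_in)

test_scores = rng.normal(-900, 50, size=(10, 5))    # placeholder test-time scores
print(detector.decision_function(test_scores))      # real-valued score, usable for AUROC
```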
Hi, the links to the pretrained models you posted are invalid ([ResNet on CIFAR-10] / [ResNet on CIFAR-100] / [ResNet on SVHN]).
Could I trouble you to update them?
Could you please add information to the readme about how to interpret the output files? It's not clear what the values in the numpy arrays refer to.
Wrong repository
As per the formula given in the paper,
$\hat{\Sigma} = \frac{1}{N} \sum_{c} \sum_{i:\, y_i = c} \left(f(x_i) - \hat{\mu}_c\right)\left(f(x_i) - \hat{\mu}_c\right)^{\top}$,
which is equivalent to computing the covariance matrix for each class and then taking the sample-count-weighted average to get the tied covariance matrix. But in the code,
deep_Mahalanobis_detector/lib_generation.py, lines 107 to 120 (commit 90c2105),
you are using sklearn.covariance.EmpiricalCovariance
for all of the data (see line 117), whereas as per the formula you compute the covariance for each class and then take the weighted average. So I feel that we should apply sklearn.covariance.EmpiricalCovariance
per class and then take the weighted average.
Thanks,
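For context, here is a minimal sketch (not the repository's code) of the two estimators being compared; `features` and `labels` are hypothetical array names. With sklearn's maximum-likelihood estimator, the per-class, count-weighted average and a single fit on class-centered, pooled features produce the same tied covariance matrix.

```python
# Two equivalent ways to estimate the tied covariance discussed above.
# `features`: hypothetical (N, D) array of penultimate-layer features.
# `labels`:   hypothetical (N,) array of class indices.
import numpy as np
from sklearn.covariance import EmpiricalCovariance

def tied_covariance_per_class(features, labels):
    """Per-class covariance, then sample-count-weighted average (the paper's formula)."""
    cov = np.zeros((features.shape[1], features.shape[1]))
    for c in np.unique(labels):
        group = features[labels == c]
        est = EmpiricalCovariance().fit(group)              # MLE: divides by len(group)
        cov += (len(group) / len(features)) * est.covariance_
    return cov

def tied_covariance_pooled(features, labels):
    """Center each sample by its class mean, then fit one estimator on the pool."""
    centered = np.concatenate(
        [features[labels == c] - features[labels == c].mean(axis=0)
         for c in np.unique(labels)]
    )
    est = EmpiricalCovariance(assume_centered=True).fit(centered)  # divides by N
    return est.covariance_
```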
Hi, thanks for your amazing work.
In the source code, I see that this work uses the OOD data to train a logistic regression model.
I wonder whether this is fair for comparison with other methods such as the softmax baseline and ODIN.
Looking forward to your reply! Many thanks.
Dear author,
may I ask what the torch index_copy method in lib_generation.py, lines 171 to 179, is used for?
Thank you in advance.
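For other readers, here is a minimal, self-contained illustration of torch.Tensor.index_copy_, not the repository's exact usage; all tensors below are made-up examples.

```python
# index_copy_(dim, index, source): rows of `src` are written into `dest` at the
# row positions given by `idx`.
import torch

dest = torch.zeros(5, 3)                               # preallocated buffer
idx = torch.tensor([0, 4, 2])                          # target row indices
src = torch.arange(9, dtype=torch.float).view(3, 3)    # rows to copy

dest.index_copy_(0, idx, src)   # dest[0] <- src[0], dest[4] <- src[1], dest[2] <- src[2]
print(dest)
```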
Thank you for open-sourcing this great project. The way you saved the CIFAR-10/100 checkpoints makes it hard to reuse the pre-trained DenseNets in another project. Could you please release checkpoints that can be loaded like
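(The original request presumably continued with a code snippet; a hedged sketch of the standard state-dict loading pattern it is likely asking for follows. torchvision's densenet121 stands in for the repository's DenseNet, and the checkpoint path is hypothetical.)

```python
# Hypothetical example of a "portable" checkpoint: weights saved as a plain
# state_dict can be restored into a freshly built model in any project.
import torch
from torchvision.models import densenet121

model = densenet121(num_classes=10)                              # rebuild the architecture
state = torch.load("densenet_cifar10.pth", map_location="cpu")   # hypothetical weights-only file
model.load_state_dict(state)
model.eval()
```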