p2333 / adaptive-diversity-promoting
Adversarial Defense for Ensemble Models (ICML 2019)
License: Apache License 2.0
Hi, thanks for making the code public. I'm trying to reproduce your experiments with PyTorch. However, I find a mismatch between the hyperparameters specified in your code and those reported in the paper. For example, the paper says training on CIFAR-10 took 180 epochs, while the code trains for 200 epochs, with the learning-rate schedule set accordingly (milestones at 100 and 150). I just want to get a confirmation from you on this. Thanks!
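For reference, a minimal PyTorch sketch of the schedule as it appears in the released code (200 epochs, drops at 100 and 150). The initial learning rate, decay factor, and the `model` / `train_one_epoch` names are assumptions for illustration, not confirmed values:

```python
import torch

# Sketch of the 200-epoch schedule with learning-rate drops at epochs 100 and 150.
# lr=0.01 and gamma=0.1 are assumptions; `model` and `train_one_epoch` are hypothetical.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(200):
    train_one_epoch(model, optimizer)  # hypothetical per-epoch training loop
    scheduler.step()
```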
Hi, thanks for releasing the code. I have read your paper and have some questions about the selection of the hyperparameter alpha. Section 4.2 says alpha=2 is chosen according to Eq. (7). However, by my calculation with K=3 and F_y=0.9, alpha should be around 0.76 when L=10 and around 0.49 when L=100. The paper also does not seem to explain why beta is set to 0.5. Could you please explain how the hyperparameters alpha and beta were selected in your experiments? Thanks.
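For context, here is a rough PyTorch sketch of the ADP training objective as I understand it from the paper: the sum of the members' cross-entropy losses, minus alpha times the Shannon entropy of the ensemble prediction, minus beta times the log ensemble diversity (the determinant of the Gram matrix of normalized non-maximal predictions). The alpha=2, beta=0.5 defaults are just the values quoted above, and the numerical details are assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def adp_loss(logits_list, y, alpha=2.0, beta=0.5, eps=1e-20):
    """Sketch of the ADP objective: sum of member CE losses
    - alpha * H(ensemble prediction) - beta * log(ensemble diversity)."""
    probs = [F.softmax(z, dim=1) for z in logits_list]   # per-member predictions F^k
    ens = torch.stack(probs, dim=0).mean(dim=0)          # ensemble prediction F
    batch, num_classes = ens.shape

    # Ensemble cross-entropy: sum of the individual members' CE losses.
    ece = sum(F.cross_entropy(z, y) for z in logits_list)

    # Shannon entropy of the ensemble prediction.
    ent = -(ens * torch.log(ens + eps)).sum(dim=1).mean()

    # Ensemble diversity: det(M^T M), where the columns of M are each member's
    # L2-normalized prediction with the true-label dimension removed.
    keep = torch.ones_like(ens).scatter_(1, y.unsqueeze(1), 0.0).bool()
    cols = [F.normalize(p[keep].view(batch, num_classes - 1), dim=1) for p in probs]
    M = torch.stack(cols, dim=2)                          # shape (batch, L-1, K)
    log_ed = torch.log(torch.det(M.transpose(1, 2) @ M) + eps).mean()

    return ece - alpha * ent - beta * log_ed
```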
Could anyone tell me how to run the code with recent versions (Python 3.6, TensorFlow 2, and CleverHans v3.0.1)?
I have tried running it with these versions, but ran into many issues.
What is the advice/best practice for reproducing this experiment?
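One workaround that sometimes helps, not an official fix: as far as I know CleverHans 3.0.1 still targets the TF1 graph API, so running the repo's scripts under TensorFlow 2's v1 compatibility layer may get further than a direct port:

```python
# Run the TF1-style code under TensorFlow 2's compatibility layer
# (disable eager execution / v2 behaviors before any graphs are built).
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```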
Hi, may I know the arguments of the t-SNE function (perplexity, early_exaggeration, learning_rate, n_iter, init, ...) used for Figure 2 in the paper?
I tried quite a few setups, but none of them produced clean clustering.
Thank you!
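In case it helps, a minimal scikit-learn sketch with common starting values; these are not the authors' settings (that is exactly what the question asks for), and `features` is a hypothetical (N, d) array of final hidden-layer activations:

```python
from sklearn.manifold import TSNE

# Hypothetical settings -- the paper does not state the t-SNE hyperparameters.
embedding = TSNE(
    n_components=2,
    perplexity=30,
    early_exaggeration=12.0,
    learning_rate=200.0,
    n_iter=1000,
    init="pca",
    random_state=0,
).fit_transform(features)  # `features`: (N, d) hidden features, hypothetical name
```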
Thank you for publishing your code.
According to Section 4.3 and the given code, the Carlini & Wagner (C&W) attack was carried out with 1000 iterations, a learning rate of 0.01, binary_search_steps of 1, and various confidence values.
The code does not include a C&W attack for MNIST. When I run C&W on MNIST against a vanilla ResNet-56, the accuracy on adversarial examples is much higher than reported. Were these parameters also used for the MNIST C&W attack? If not, can you share the C&W attack settings for MNIST?
Thanks
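For reference, a sketch of the attack with the CIFAR-10 settings quoted above, using the CleverHans v3 graph API; whether the same settings apply to MNIST is exactly the open question here, and `model`, `sess`, and the placeholders are hypothetical names:

```python
from cleverhans.attacks import CarliniWagnerL2
from cleverhans.utils_keras import KerasModelWrapper

# Settings quoted from Section 4.3 / the released CIFAR-10 code; whether they
# were also used for MNIST is what this issue asks.
wrap = KerasModelWrapper(model)        # `model`: hypothetical Keras classifier
cw = CarliniWagnerL2(wrap, sess=sess)  # `sess`: hypothetical tf.Session
adv_x = cw.generate(x_ph, y=y_ph,      # hypothetical input/label placeholders
                    max_iterations=1000,
                    learning_rate=0.01,
                    binary_search_steps=1,
                    confidence=0.0,
                    clip_min=0.0,
                    clip_max=1.0)
```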
In the code given by the authors (advtrain_cifar.py), label_smooth=FLAGS.label_smooth is set at line 140, but I don't know what value to use. Can you tell me the value?
But in the paper the error rate is just 0.28%, which means I should get about 99.7% accuracy.
I didn't find the default value of label_smooth used in the function '_Loss_withEE_DPP'. Also, why is this smoothing needed for the cross-entropy loss? Thanks!
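On the second question: label smoothing replaces the one-hot target with a slightly softened distribution, so the cross-entropy does not push the predicted probabilities all the way to 0/1. A minimal sketch of what `label_smooth` typically does; the 0.1 default below is only a placeholder, since the repo's actual default is what this issue asks for:

```python
import numpy as np

def smooth_labels(y_onehot, label_smooth=0.1):
    """Keep (1 - label_smooth) mass on the true class and spread label_smooth
    uniformly over all classes. The 0.1 value is a placeholder, not the repo's default."""
    num_classes = y_onehot.shape[-1]
    return y_onehot * (1.0 - label_smooth) + label_smooth / num_classes
```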