
CAA

This is the PyTorch implementation of the AAAI 2021 paper Composite Adversarial Attacks.

Pre-requisites

  • torch >= 1.3.0

  • torchvision

  • advertorch

  • tqdm

  • pillow

  • imagenet_c (Only for unrestricted adversarial attacks)
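
These can typically be installed with pip; the package names below are assumed from the list above (imagenet_c is the common-corruptions package from the ImageNet-C benchmark):

pip install torch torchvision advertorch tqdm pillow imagenet_c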

Usage

Linf Attack

Collect adversarially trained models and place them in checkpoints. Then run

python test_attacker.py --batch_size 512 --dataset cifar10 --net_type madry_adv_resnet50 --norm linf

Unrestricted Attack

The code is written for the bird_or_bicycle dataset, but you can adapt it to your own datasets and tasks. To run on bird_or_bicycle, first install it:

git clone https://github.com/google/unrestricted-adversarial-examples
pip install -e unrestricted-adversarial-examples/bird-or-bicycle

bird-or-bicycle-download

Then collect unrestricted defense models from the Unrestricted Adversarial Examples Challenge, such as TRADESv2, and place them in checkpoints. Finally, run

python test_attacker.py --batch_size 12 --dataset bird_or_bicycle --net_type ResNet50Pre --norm unrestricted

L2 Attack

Collect adversarially trained models and place them in checkpoints. Then run

python test_attacker.py --batch_size 384 --dataset cifar10 --net_type madry_adv_resnet50_l2 --norm l2 --max_epsilon 0.5

The L2 attack has not been fully tested in this implementation because of its long running time, so there may be bugs in the L2 attack case.

Define a custom attack

You can define arbitrary attack policies by providing a list of attacker dicts.

For example, to compose MultiTargetedAttack, MultiTargetedAttack, and CWLinf_Attack_adaptive_stepsize, define the list as follows:

[{'attacker': 'MultiTargetedAttack', 'magnitude': 8/255, 'step': 50},
 {'attacker': 'MultiTargetedAttack', 'magnitude': 8/255, 'step': 25},
 {'attacker': 'CWLinf_Attack_adaptive_stepsize', 'magnitude': 8/255, 'step': 125}]

CAA also supports a single attacker without any composition: a list like [{'attacker': 'ODI_Step_stepsize', 'magnitude': 8/255, 'step': 150}] runs a single ODI-PGD attack.
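
To make the serial composition concrete, here is a minimal sketch of how such a policy might be applied, loosely following the apply_attacker interface quoted in the issues below; the loop and argument names are illustrative, not the repo's exact API:

# Illustrative only: serially apply an attack policy, feeding each
# attacker the previous attacker's output and carried perturbation.
def run_policy(policy, images, labels, model, apply_attacker):
    adv = images
    previous_p = None  # perturbation state carried between attackers
    for stage in policy:
        adv, previous_p = apply_attacker(
            adv, stage['attacker'], labels, model,
            stage['magnitude'], previous_p, stage['step'])
    return adv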

The code currently supports the following attackers:

  • GradientSignAttack
  • PGD_Attack_adaptive_stepsize
  • MI_Attack_adaptive_stepsize
  • CWLinf_Attack_adaptive_stepsize
  • MultiTargetedAttack
  • ODI_Cos_stepsize
  • ODI_Cyclical_stepsize
  • ODI_Step_stepsize
  • CWL2Attack
  • DDNL2Attack
  • GaussianBlurAttack
  • GaussianNoiseAttack
  • ContrastAttack
  • SaturateAttack
  • ElasticTransformAttack
  • JpegCompressionAttack
  • ShotNoiseAttack
  • ImpulseNoiseAttack
  • DefocusBlurAttack
  • GlassBlurAttack
  • MotionBlurAttack
  • ZoomBlurAttack
  • FogAttack
  • BrightnessAttack
  • PixelateAttack
  • SpeckleNoiseAttack
  • SpatterAttack
  • SPSAAttack
  • SpatialAttack

Warning

The code has not been carefully organized and is still messy. If you run into any bugs, feel free to report them in the issues.

Citations

@article{brown2018unrestricted,
  title={Unrestricted adversarial examples},
  author={Brown, Tom B and Carlini, Nicholas and Zhang, Chiyuan and Olsson, Catherine and Christiano, Paul and Goodfellow, Ian},
  journal={arXiv preprint arXiv:1809.08352},
  year={2018}
}
@article{ding2019advertorch,
  title={AdverTorch v0.1: An adversarial robustness toolbox based on PyTorch},
  author={Ding, Gavin Weiguang and Wang, Luyu and Jin, Xiaomeng},
  journal={arXiv preprint arXiv:1902.07623},
  year={2019}
}
@article{rauber2017foolbox,
  title={Foolbox: A python toolbox to benchmark the robustness of machine learning models},
  author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
  journal={arXiv preprint arXiv:1707.04131},
  year={2017}
}
@article{papernot2016technical,
  title={Technical report on the CleverHans v2.1.0 adversarial examples library},
  author={Papernot, Nicolas and Faghri, Fartash and Carlini, Nicholas and Goodfellow, Ian and Feinman, Reuben and Kurakin, Alexey and Xie, Cihang and Sharma, Yash and Brown, Tom and Roy, Aurko and others},
  journal={arXiv preprint arXiv:1610.00768},
  year={2016}
}
@article{croce2020reliable,
  title={Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks},
  author={Croce, Francesco and Hein, Matthias},
  journal={arXiv preprint arXiv:2003.01690},
  year={2020}
}
@article{tashiro2020output,
  title={Output Diversified Initialization for Adversarial Attacks},
  author={Tashiro, Yusuke and Song, Yang and Ermon, Stefano},
  journal={arXiv preprint arXiv:2003.06878},
  year={2020}
}


Issues

Could you please tell me the GPU model used in your CAA search?

Hi, I found that the NSGA-II search code has not been released, so I reimplemented it myself. I followed the description in your paper, used pymoo, set the search parameters as in the paper, and ran on three 2080Ti GPUs. But my search on CIFAR-10 takes much longer (more than 10 days on 3 GPUs) than the search time reported in your paper (3GPU/d). Did you mean by 3GPU/d that you searched on 3 GPUs and finished in 1 day? Is that because your GPUs are much faster than mine? Could you please tell me the GPU model you used in your CAA search?

Suspected bug in returning the best perturbation

In the paper, the authors point out that they find the best perturbation and carry it into subsequent iterations. In the implementation, they reuse part of the Auto-Attack code. But the key step of finding the best perturbation seems to be missing from the implementation.

Take MultiTargetedAttack as an example. At line 1365 of attack_ops.py, the function run_once only returns x_best_adv, instead of also returning x_best as Auto-Attack does. After tracing the assignments to x_best_adv, it is not hard to see that this causes only random noise to be returned for examples where the attack fails. The same kind of error also occurs in the return of now_p.

Given the large amount of code, I cannot be sure this is a bug, or whether other techniques are used elsewhere to handle this. If my understanding is wrong, please point it out. Thanks.
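
For reference, Auto-Attack-style loops usually maintain two tensors: x_best, the iterate with the highest loss (so a later stage can resume from the strongest perturbation even when the attack fails), and x_best_adv, which only records iterates that already flip the label. Below is a minimal hypothetical sketch of that bookkeeping; the names pgd_with_best_tracking and step_fn are illustrative, not the actual attack_ops.py code:

import torch

def pgd_with_best_tracking(model, x, y, step_fn, n_iter):
    # Hypothetical Auto-Attack-style bookkeeping, not the repo's code.
    x_adv = x.clone()
    x_best = x_adv.clone()       # iterate with the highest loss so far
    x_best_adv = x.clone()       # last iterate that was misclassified
    loss_fn = torch.nn.CrossEntropyLoss(reduction='none')
    best_loss = loss_fn(model(x_adv), y)
    for _ in range(n_iter):
        x_adv = step_fn(x_adv)                 # one attack update
        logits = model(x_adv)
        loss = loss_fn(logits, y)
        improved = loss > best_loss
        x_best[improved] = x_adv[improved]     # keep the strongest perturbation
        best_loss[improved] = loss[improved]
        fooled = logits.argmax(dim=1) != y
        x_best_adv[fooled] = x_adv[fooled]     # keep successful examples
    # Returning only x_best_adv leaves failed examples at their initialization;
    # returning x_best as well lets the next attacker resume from the strongest
    # point found, which is what the issue suggests is missing.
    return x_best, x_best_adv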

Questions about complexity counting

In the paper, the authors mention that the complexity of CAA is 800 when using [('MT-LinfAttack', ε=8/255, t=50), ('MT-LinfAttack', ε=8/255, t=25), ('CWLinfAttack', ε=8/255, t=125)].

I think the calculation formula is: 50 * 9 + 25 * 9 + 125 = 800

But in the code, when the attack index idx is not zero, you use:

ori_adv_images, _ = apply_attacker(test_images, attack_name, test_labels, model, attack_eps, None, int(attack_steps), args.max_epsilon, _type=args.norm, gpu_idx=0, target=target_label)
adv_adv_images, p = apply_attacker(subpolicy_out_dict[idx-1], attack_name, test_labels, model, attack_eps, previous_p, int(attack_steps), args.max_epsilon, _type=args.norm, gpu_idx=0, target=target_label)

which means that in subsequent stages the same attacker is executed twice. So shouldn't the complexity be
50 * 9 + 25 * 9 * 2 + 125 * 2 = 1150?
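
As a quick check of the two counts (assuming, as the factor of 9 above implies, that MT-LinfAttack evaluates 9 target classes per step):

paper_count = 50 * 9 + 25 * 9 + 125            # 800: each attacker runs once
doubled_count = 50 * 9 + 25 * 9 * 2 + 125 * 2  # 1150: later attackers run twice
print(paper_count, doubled_count)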

Question about the CAA code

Hello, I am very interested in your work. May I ask a question about test_attacker.py? When idx is not zero, why do the original test images also need to be fed to the subsequent attack?

ori_adv_images, _ = apply_attacker(test_images, attack_name, test_labels, model, attack_eps, None, int(attack_steps), args.max_epsilon, _type=args.norm, gpu_idx=0, target=target_label)
adv_adv_images, p = apply_attacker(subpolicy_out_dict[idx-1], attack_name, test_labels, model, attack_eps, previous_p, int(attack_steps), args.max_epsilon, _type=args.norm, gpu_idx=0, target=target_label)
