
metaattack_iclr2020's Introduction

Query-efficient Meta Attack to Deep Neural Networks

This repository contains the code for reproducing the experimental results of attacking MNIST, CIFAR-10, and Tiny-ImageNet models from our submission Query-efficient Meta Attack to Deep Neural Networks (openreview). The paper can be cited as follows:

@inproceedings{
Du2020Query-efficient,
title={Query-efficient Meta Attack to Deep Neural Networks},
author={Jiawei Du and Hu Zhang and Joey Tianyi Zhou and Yi Yang and Jiashi Feng},
booktitle={International Conference on Learning Representations},
year={2020},
url={https://openreview.net/forum?id=Skxd6gSYDS}
}

Reproducing the results (taking CIFAR-10 as an example)

Requirements

  • PyTorch (torch, torchvision) packages

For generating gradients for meta model training

cd gen_grad/cifar_gen_grad_for_meta

python cifar_main.py

For training meta model to attack

cd meta_training/cifar_meta_training

python cifar_train.py

For query-efficient attack

The results can be reproduced (with the default hyperparameters) with the following command:

cd meta_attack/meta_attack_cifar

python test_all.py

The models we used for meta attacker training and final attack can be found here.

metaattack_iclr2020's People

Contributors

angusdu, dydjw9, xiaofanustc


metaattack_iclr2020's Issues

How to set a maximum query threshold for the attack?

Hello, can you help me set a maximum query threshold for the meta attack?
I am writing a paper with a deadline coming up soon (ECCV).
All compared methods (including yours) must be limited to a maximum of 10000 queries to be compared fairly; otherwise your method may use too many queries/iterations to attack a single image successfully.
I notice that you use two arguments to control the loop count in attack.py, --maxiter and --binary_steps, and that the query number is computed in an unusual way, avg_qry += (first_step-1)//args.finetune_interval*args.update_pixels*2+first_step, in
https://github.com/dydjw9/MetaAttack_ICLR2020/blob/master/meta_attack/meta_attack_cifar/test_all.py#L130
Could you briefly explain this? I cannot quite understand it. Is this the query count that I should report in my paper?

Can you help me enforce a maximum query threshold, so that the attack loop breaks once the count exceeds it? I will cite your paper! Thank you; I am in a hurry for the deadline.
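
The query-count line quoted above can be sketched as a standalone function, which also makes it easy to enforce a budget. This is a hypothetical reading of the formula, not repository code: the interpretation in the comments (finite-difference pairs plus one loss query per iteration) is an assumption, the 10000-query budget comes from the questioner's protocol, and finetune_interval has no confirmed default here.

```python
MAX_QUERIES = 10000  # the questioner's evaluation budget, not a repository default

def queries_used(first_step, finetune_interval, update_pixels):
    # Mirrors the accounting in test_all.py:
    #   avg_qry += (first_step-1)//args.finetune_interval*args.update_pixels*2 + first_step
    # Each fine-tuning round appears to cost 2*update_pixels queries (a +/-
    # finite-difference pair per estimated pixel), and every attack iteration
    # appears to add one further query for the loss evaluation itself.
    return (first_step - 1) // finetune_interval * update_pixels * 2 + first_step

def over_budget(step, finetune_interval, update_pixels):
    # One way to enforce the cap: break the attack loop in attack.py as soon
    # as this returns True for the current step.
    return queries_used(step, finetune_interval, update_pixels) > MAX_QUERIES
```

With update_pixels = 128 and a hypothetical finetune_interval of 5, a run that finishes at step 6 would count one fine-tuning round (256 queries) plus 6 iteration queries.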

Question regarding meta_training

Hi, I used your code to train a meta learner. During training I found that after several epochs there is a large gap between the training and test values, e.g.:
e.g.
Training acc: [7.49062896e-02 1.20647939e+01 5.44389091e-08 4.33921817e-07
2.67942748e-06 1.42312044e-05 2.44339314e-06]
Test acc: [19.58 18.14 16.7 15.38 14.17 13.09 12.086 11.31 10.74 10.484
10.41 10.24 10.21 10.2 10.2 10.23 10.25 10.28 10.32 10.34
10.37 ]
Here I trained a classifier, and "acc" is actually defined as the cross-entropy loss. Is this output correct?

Files for generating gradients for the CIFAR dataset

Can you please provide the stored gradient files generated for the CIFAR dataset, or elaborate on how to generate them? Right now the script does not save the gradients; it only stores zoo_ckpt.t7.

Questions regarding the evaluation

print("[STATS][L1] total = {}, seq = {}, time = {:.3f}, success = {}, distortion = {:.5f}, avg_step = {:.5f}, avg_query = {:.5f},success_rate = {:.3f}".format(img_no, i, avg_time / total_success, success, l2_total / total_success, avg_step / total_success, avg_que / total_success, total_success / float(img_no)))

Did you use the last print statement to report the metrics, or did you average the success rate, number of queries, etc. over all batches in the test loader? Right now it looks like the metric values are printed for each batch in the test loader.
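
One way to resolve the ambiguity is to accumulate per-image results across the whole test loader and print the averages once at the end, mirroring the counters in the print statement above (total_success, avg_que, l2_total). This is a hypothetical sketch, not the repository's reporting code:

```python
def summarize(results):
    # `results` is one (success, queries, l2_distortion) tuple per attacked
    # image, collected over the entire test loader.
    total = len(results)
    successes = [(q, l2) for ok, q, l2 in results if ok]
    n_success = len(successes)
    # Averages are taken over successful attacks only, matching the
    # `/ total_success` divisions in the quoted print statement.
    avg_query = sum(q for q, _ in successes) / n_success if n_success else 0.0
    avg_l2 = sum(l2 for _, l2 in successes) / n_success if n_success else 0.0
    return {"success_rate": n_success / total,
            "avg_query": avg_query,
            "avg_l2": avg_l2}
```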

Do you use Reptile rather than MAML when training the meta-learner?

I notice that the code for training the meta-learner auto-encoder has an argument --meta_optim_choose, used for example in the following line:
https://github.com/dydjw9/MetaAttack_ICLR2020/blob/master/meta_training/cifar_meta_training/meta.py#L128

But according to your paper, you use Reptile.
To compare fairly with your approach, should I set --meta_optim_choose reptile?

In other words, should I remove the MAML code path in my experiments?
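
For readers unfamiliar with the distinction, the Reptile meta-update selected by this flag can be illustrated with a minimal numeric sketch. This is not the repository code: parameters are plain floats, the inner loop is ordinary SGD, and all step sizes here are illustrative choices.

```python
def inner_sgd(theta, grad_fn, lr=0.1, steps=5):
    # Task-specific adaptation: a few plain SGD steps from the meta-parameters.
    w = theta
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

def reptile_update(theta, grad_fn, meta_lr=0.5):
    # Reptile simply moves the meta-parameters toward the task-adapted
    # weights; unlike full MAML it needs no second-order gradients through
    # the inner loop, which is why it is cheaper to train.
    w = inner_sgd(theta, grad_fn)
    return theta + meta_lr * (w - theta)

# Toy task: minimize (w - 3)^2, whose gradient is 2*(w - 3).
grad = lambda w: 2.0 * (w - 3.0)
theta = 0.0
for _ in range(20):
    theta = reptile_update(theta, grad)
```

After the loop, theta has converged close to the task optimum at 3.0.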

Bug report: missing logits in the save_gradient function?

In the save_gradient function
https://github.com/dydjw9/MetaAttack_ICLR2020/blob/master/grad_gen/cifar_gen_grad_for_meta/utils.py#L169
this line only saves process_data[str(batch_idx)] = [data, grad, target].
But the data reader, for example this line:
https://github.com/dydjw9/MetaAttack_ICLR2020/blob/master/meta_training/imagenet_meta_training/imagenet.py#L60
expects logits that were never stored in the npy file. How are they supposed to be read?
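
If the reader really does expect logits, one possible fix is to store them alongside the existing entries when the batch is saved. This is a hypothetical patch sketch, not the repository code: save_gradient_entry is an invented helper name, and the extra forward pass to obtain the logits is an assumption (in the real code it would run under torch.no_grad()).

```python
def save_gradient_entry(process_data, model, data, grad, target, batch_idx):
    # Extra forward pass to capture the logits for this batch; `model` is
    # assumed to map a batch of inputs to a batch of logits.
    logits = model(data)
    # The original line stored only [data, grad, target]; appending the
    # logits gives the data reader the fourth field it tries to load.
    process_data[str(batch_idx)] = [data, grad, target, logits]
    return process_data
```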

How to train a meta attacker on Tiny ImageNet?

Hi! When I reproduced the meta attack on Tiny ImageNet, the attack success rate was only 60%. I checked and found that the meta attacker was not well trained. In the meta_attack_imagenet directory, the meta_path is 0.82605cifar_resnet34.pt; is this the meta attacker you trained? Can you share some tips for training meta attackers?

Any suggestions for high-resolution dataset experiments?

I recently ran the code on the ImageNet dataset (224x224 resolution) on an Nvidia 1080 Ti (11 GB) GPU.
It fails because the line output = F.softmax(model(img_var), dim = 1) consumes more GPU memory than the available 11 GB:
https://github.com/dydjw9/MetaAttack_ICLR2020/blob/master/meta_attack/meta_attack_cifar/attacks/generate_gradient.py#L85

Because args.update_pixels = 128, a batch of 128 * 2 + 1 = 257 images of size 224x224 is sent to the victim model at once.

How can this be solved, and do you have any suggestions for experimenting on high-resolution data like ImageNet?
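
One common workaround for this kind of out-of-memory failure is to split the 257-image batch into fixed-size chunks before the forward pass and concatenate the outputs. The sketch below is a hypothetical, framework-agnostic version, not the repository code: `model` stands in for the victim network, chunk_size is a memory-dependent choice rather than a repository flag, and in the actual PyTorch code each slice would run under torch.no_grad() with the results joined via torch.cat.

```python
def forward_in_chunks(model, batch, chunk_size=32):
    # Run the model on successive slices of the batch and collect the
    # outputs, trading one large allocation for several small ones.
    outputs = []
    for start in range(0, len(batch), chunk_size):
        outputs.extend(model(batch[start:start + chunk_size]))
    return outputs
```

In generate_gradient.py one would apply this around the model call that currently receives all 2 * update_pixels + 1 perturbed images at once, and then take the softmax of the concatenated logits.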
