
Diffusion Models for Adversarial Purification

Official PyTorch implementation of the ICML 2022 paper:
Diffusion Models for Adversarial Purification
Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar
https://diffpure.github.io

Abstract: Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model. These methods make no assumptions about the form of the attack or the classification model, and thus can defend pre-existing classifiers against unseen threats. However, their performance currently falls behind adversarial training methods. In this work, we propose DiffPure, which uses diffusion models for adversarial purification: given an adversarial example, we first diffuse it with a small amount of noise following a forward diffusion process, and then recover the clean image through a reverse generative process. To evaluate our method against strong adaptive attacks in an efficient and scalable way, we propose to use the adjoint method to compute full gradients of the reverse generative process. Extensive experiments on three image datasets (CIFAR-10, ImageNet, and CelebA-HQ) with three classifier architectures (ResNet, WideResNet, and ViT) demonstrate that our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods, often by a large margin.
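In essence, purification is forward diffusion followed by reverse denoising. Below is a minimal sketch of that idea, assuming a generic pretrained noise-prediction network score_model and a common linear beta schedule; it is an illustration only, not the repository's implementation:

# Minimal sketch of diffuse-then-denoise purification (DDPM-style discretization).
# Assumptions: `score_model(x, t)` is a pretrained noise-prediction network;
# the linear beta schedule below is a common default, not this repo's config.
import torch

def purify(x, score_model, t_star=0.1, num_scales=1000):
    betas = torch.linspace(1e-4, 0.02, num_scales)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t0 = int(t_star * num_scales)

    # Forward diffusion: jump straight to time t* by adding scaled Gaussian noise.
    a = alphas_bar[t0 - 1]
    x_t = a.sqrt() * x + (1 - a).sqrt() * torch.randn_like(x)

    # Reverse generative process: denoise step by step back to t = 0.
    for t in reversed(range(t0)):
        eps = score_model(x_t, torch.full((x.shape[0],), t))
        alpha_t = 1.0 - betas[t]
        x_t = (x_t - betas[t] / (1.0 - alphas_bar[t]).sqrt() * eps) / alpha_t.sqrt()
        if t > 0:
            x_t = x_t + betas[t].sqrt() * torch.randn_like(x_t)
    return x_t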

Requirements

  • 1-4 high-end NVIDIA GPUs with 32 GB of memory.
  • 64-bit Python 3.8.
  • CUDA 11.0 and Docker must be installed first.
  • Installation of the required library dependencies with Docker:
    docker build -f diffpure.Dockerfile --tag=diffpure:0.0.1 .
    docker run -it -d --gpus 0 --name diffpure --shm-size 8G -v $(pwd):/workspace -p 5001:6006 diffpure:0.0.1
    docker exec -it diffpure bash

Data and pre-trained models

Before running our code on ImageNet and CelebA-HQ, you first have to download these two datasets. For example, you can follow the instructions to download CelebA-HQ. Note that we use the LMDB format for ImageNet, so you may need to convert the ImageNet dataset to LMDB (a sketch follows below). There is no need to download CIFAR-10 separately.
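For reference, such a conversion could look roughly like the sketch below. The paths and the pickled (jpeg_bytes, label) record format are assumptions, not necessarily what this repo's LMDB loader expects, so adapt the serialization accordingly:

# Hedged sketch: pack an ImageFolder-style ImageNet directory into an LMDB.
# The record format here (pickled JPEG bytes + label, integer string keys)
# is an assumption, not necessarily what the repo's loader expects.
import os
import lmdb
import pickle
from torchvision.datasets import ImageFolder

dataset = ImageFolder('datasets/imagenet/val')
os.makedirs('datasets/imagenet_lmdb/val', exist_ok=True)
env = lmdb.open('datasets/imagenet_lmdb/val', map_size=1 << 40)  # up to 1 TB

with env.begin(write=True) as txn:
    for i, (path, label) in enumerate(dataset.samples):
        with open(path, 'rb') as f:
            jpeg_bytes = f.read()                 # store raw encoded bytes
        txn.put(str(i).encode(), pickle.dumps((jpeg_bytes, label)))
    txn.put(b'length', pickle.dumps(len(dataset.samples)))
env.close()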

Note that you have to put all the datasets in the datasets directory.

For the pre-trained diffusion models, you need to first download them from the following links:

For the pre-trained classifiers, most of them do not need to be downloaded separately, except for

Note that you have to put all the pretrained models in the pretrained directory.

Run experiments on CIFAR-10

AutoAttack Linf

  • To get results of defending against AutoAttack Linf (the Rand version):
cd run_scripts/cifar10
bash run_cifar_rand_inf.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_rand_inf_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_rand_inf_rn50.sh [seed_id] [data_id]  # ResNet-50
  • To get results of defending against AutoAttack Linf (the Standard version):
cd run_scripts/cifar10
bash run_cifar_stand_inf.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_stand_inf_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_stand_inf_rn50.sh [seed_id] [data_id]  # ResNet-50

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and eight seeds (e.g., 0..7) for [data_id], and averaging all the results across [seed_id] and [data_id]; a sketch of such a sweep is given below. To measure the worst-case defense performance of our method, the reported robust accuracy is the minimum robust accuracy over the two versions: Rand and Standard.
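For example, the recommended sweep can be scripted as follows (results still need to be averaged from the resulting logs):

# Sketch: run every (seed_id, data_id) combination recommended above.
cd run_scripts/cifar10
for seed_id in 121 122 123; do
  for data_id in 0 1 2 3 4 5 6 7; do
    bash run_cifar_rand_inf.sh $seed_id $data_id
  done
done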

AutoAttack L2

  • To get results of defending against AutoAttack L2 (the Rand version):
cd run_scripts/cifar10
bash run_cifar_rand_L2.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_rand_L2_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_rand_L2_rn50.sh [seed_id] [data_id]  # ResNet-50
  • To get results of defending against AutoAttack L2 (the Standard version):
cd run_scripts/cifar10
bash run_cifar_stand_L2.sh [seed_id] [data_id]  # WideResNet-28-10
bash run_cifar_stand_L2_70-16-dp.sh [seed_id] [data_id]  # WideResNet-70-16
bash run_cifar_stand_L2_rn50.sh [seed_id] [data_id]  # ResNet-50

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and eight seeds (e.g., 0..7) for [data_id], and averaging all the results across [seed_id] and [data_id]. To measure the worst-case defense performance of our method, the reported robust accuracy is the minimum robust accuracy over the two versions: Rand and Standard.

StAdv

  • To get results of defending against StAdv:
cd run_scripts/cifar10
bash run_cifar_stadv_rn50.sh [seed_id] [data_id]  # ResNet-50

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and eight seeds (e.g., 0..7) for [data_id], and averaging all the results across [seed_id] and [data_id].

BPDA+EOT

  • To get results of defending against BPDA+EOT:
cd run_scripts/cifar10
bash run_cifar_bpda_eot.sh [seed_id] [data_id]  # WideResNet-28-10

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and five seeds (e.g., 0..4) for [data_id], and averaging all the results across [seed_id] and [data_id].

Run experiments on ImageNet

AutoAttack Linf

  • To get results of defending against AutoAttack Linf (the Rand version):
cd run_scripts/imagenet
bash run_in_rand_inf.sh [seed_id] [data_id]  # ResNet-50
bash run_in_rand_inf_50-2.sh [seed_id] [data_id]  # WideResNet-50-2
bash run_in_rand_inf_deits.sh [seed_id] [data_id]  # DeiT-S
  • To get results of defending against AutoAttack Linf (the Standard version):
cd run_scripts/imagenet
bash run_in_stand_inf.sh [seed_id] [data_id]  # ResNet-50
bash run_in_stand_inf_50-2.sh [seed_id] [data_id]  # WideResNet-50-2
bash run_in_stand_inf_deits.sh [seed_id] [data_id]  # DeiT-S

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and 32 seeds (e.g., 0..31) for [data_id], and averaging all the results across [seed_id] and [data_id]. To measure the worst-case defense performance of our method, the reported robust accuracy is the minimum robust accuracy over the two versions: Rand and Standard.

Run experiments on CelebA-HQ

BPDA+EOT

  • To get results of defending against BPDA+EOT:
cd run_scripts/celebahq
bash run_celebahq_bpda_glasses.sh [seed_id] [data_id]  # the glasses attribute
bash run_celebahq_bpda_smiling.sh [seed_id] [data_id]  # the smiling attribute

Note that [seed_id] is used for getting error bars, and [data_id] is used for sampling a fixed set of images.

To reproduce the numbers in the paper, we recommend using three seeds (e.g., 121..123) for [seed_id] and 64 seeds (e.g., 0..63) for [data_id], and averaging all the results across [seed_id] and [data_id].

License

Please check the LICENSE file. This work may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact [email protected].

Citation

Please cite our paper if you use this codebase:

@inproceedings{nie2022DiffPure,
  title={Diffusion Models for Adversarial Purification},
  author={Nie, Weili and Guo, Brandon and Huang, Yujia and Xiao, Chaowei and Vahdat, Arash and Anandkumar, Anima},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2022}
}


Issues

Failed to reproduce the results of the attack on ODE?

I tried to run the attack on ODE, but the results differ a lot from those reported in the paper: I got 68% robust accuracy under the Linf attack on CIFAR-10 with eps = 8/255, whereas the paper reports 39.86%. Here is my script; is there anything wrong with it?

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python eval_sde_adv.py --exp ./exp_results --config cifar10.yml \
    -i xxx \
    --t 100 \
    --adv_eps 0.031373 \
    --adv_batch_size 8 \
    --num_sub 64 \
    --domain cifar10 \
    --classifier_name cifar10-wideresnet-28-10 \
    --seed $seed \
    --data_seed $data_seed \
    --diffusion_type ode \
    --score_type score_sde \
    --attack_version rand \
    --eot_iter 20

Convert the checkpoint saved by flax to .pth

Thanks for your excellent work and for publishing the code! Regarding the checkpoints trained with Score SDE (published by Yang Song): how can we convert a checkpoint saved with flax.training.checkpoints.save_checkpoint into one that can be loaded by torch?
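One rough starting point (a heavily hedged sketch, not the authors' answer): restore the checkpoint with flax and re-save its arrays as a torch state dict. Where the parameters live inside the checkpoint ('params' below) and the JAX-to-PyTorch parameter-name mapping are model-specific; rename_key is a hypothetical placeholder for that hand-written mapping.

# Hedged sketch: convert a flax checkpoint's arrays into a torch state dict.
# Assumptions: params live under ckpt['params']; `rename_key` is a hypothetical
# stand-in for the hand-written JAX -> PyTorch parameter-name mapping.
import numpy as np
import torch
from flax.training import checkpoints
from flax.traverse_util import flatten_dict

ckpt = checkpoints.restore_checkpoint('path/to/flax_ckpt_dir', target=None)
flat = flatten_dict(ckpt['params'], sep='.')

def rename_key(key):
    return key  # replace with the real mapping for your model definition

state_dict = {rename_key(k): torch.from_numpy(np.asarray(v)) for k, v in flat.items()}
torch.save(state_dict, 'converted_checkpoint.pth')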

Running "bash run_celebahq_bpda_glasses.sh" on celeba-HQ,

I get this result:

Attack 50 of 50   Batch defended: 1 of 2
finished 0-th batch in attack_all
init acc: 50.00%, robust acc: 50.00%, time elapsed: 24673.68s
x_adv_sde shape: torch.Size([2, 3, 256, 256])

What does 50% mean? Did something go wrong?

AmazonAWS Access Denied when downloading 'Guided Diffusion for ImageNet' and 'DDPM for CelebA-HQ'

When I entered the download link, I received the following message:

This XML file does not appear to have any style information associated with it. The document tree is shown below.

<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>ZPTSV4ZAGP1QP15N</RequestId>
<HostId>lECu/MIwSyPYNeWPAnLNWxvsfHlkcNxjXq+k65hiwhjcA887nHLbP3wAvUJ7h5FjsFM0X2O3RwoFs1qG1Zbw2g==</HostId>
</Error>

Failed to launch on A100: nvcc fatal : Unsupported gpu architecture 'compute_80'

Thanks for sharing; this is wonderful work.

I am running it on an NVIDIA A100 machine (CUDA 11.4).

However, it has a problem loading a library package:

Traceback (most recent call last):
  File "/anaconda/envs/RobustART/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1723, in _run_ninja_build
    env=env)
  File "/anaconda/envs/RobustART/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "eval_sde_adv.py", line 29, in <module>
    from runners.diffpure_sde import RevGuidedDiffusion
  File "/home/v-junyan1/adv_training/DiffPure/runners/diffpure_sde.py", line 17, in <module>
    from score_sde.losses import get_optimizer
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/losses.py", line 22, in <module>
    from .models import utils as mutils
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/models/__init__.py", line 15, in <module>
    from . import  ncsnpp
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/models/ncsnpp.py", line 18, in <module>
    from . import utils, layers, layerspp, normalization
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/models/layerspp.py", line 20, in <module>
    from . import up_or_down_sampling
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/models/up_or_down_sampling.py", line 18, in <module>
    from ..op import upfirdn2d
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/op/__init__.py", line 9, in <module>
    from .fused_act import FusedLeakyReLU, fused_leaky_relu
  File "/home/v-junyan1/adv_training/DiffPure/score_sde/op/fused_act.py", line 23, in <module>
    os.path.join(module_path, "fused_bias_act_kernel.cu"),
  File "/anaconda/envs/RobustART/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1136, in load
    keep_intermediates=keep_intermediates)
  File "/anaconda/envs/RobustART/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1347, in _jit_compile
    is_standalone=is_standalone)
  File "/anaconda/envs/RobustART/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1452, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/anaconda/envs/RobustART/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused': [1/2] /usr/bin/nvcc  -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include/TH -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include/THC -isystem /anaconda/envs/RobustART/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -std=c++14 -c /home/v-junyan1/adv_training/DiffPure/score_sde/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
FAILED: fused_bias_act_kernel.cuda.o
/usr/bin/nvcc  -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include/TH -isystem /anaconda/envs/RobustART/lib/python3.6/site-packages/torch/include/THC -isystem /anaconda/envs/RobustART/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -std=c++14 -c /home/v-junyan1/adv_training/DiffPure/score_sde/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
nvcc fatal   : Unsupported gpu architecture 'compute_80'
ninja: build stopped: subcommand failed.

This error happens when running eval_sde_adv.py.

How can I resolve the nvcc fatal : Unsupported gpu architecture 'compute_80' problem?

Should I switch to different hardware to run the code, or is there another workaround?

I hope you are all doing well.

Thanks & Regards!
Momo
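A hedged note on the likely cause: nvcc support for compute_80 (Ampere) arrived in CUDA 11, so the build is probably picking up an older system nvcc at /usr/bin/nvcc. Two common workarounds, assuming a CUDA 11.x toolkit is installed under /usr/local:

# Workaround 1: point PyTorch's extension builder at a CUDA 11+ toolkit
# (the path below is an assumption; adjust it to your install).
export CUDA_HOME=/usr/local/cuda-11.4
export PATH="$CUDA_HOME/bin:$PATH"

# Workaround 2: compile for an older architecture with PTX so the A100
# can JIT-compile the kernels at load time.
export TORCH_CUDA_ARCH_LIST="7.0+PTX"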

Error when loading pretrained CIFAR10 checkpoint

Hello,

I get the error below when trying to execute this command from the README. I have downloaded the pre-trained models and put them in a pretrained directory, renaming them to what the code expects (from checkpoint_8 to checkpoint_8.pth) and placing them in a score_sde subdirectory.

It looks to me like this file is loaded with torch.load in this repo, but in the original code it is loaded with tf.io.gfile.GFile at https://github.com/yang-song/score_sde/blob/main/utils.py#L46.

Is there some pre-processing I should be doing with this pre-trained model file? Thanks for your help! Error below...

root@c5460383ad15:/workspace/run_scripts/cifar10# bash run_cifar_rand_inf.sh 1 1  
INFO - eval_sde_adv.py - 2022-06-17 12:43:49,903 - Using device: cuda
ngpus: 1, adv_batch_size: 64
starting the model and loader...
using cifar10 wideresnet-28-10...
diffusion_type: sde
model_config: Namespace(data=Namespace(category='cifar10', centered=True, dataset='CIFAR10', image_size=32, num_channels=3, random_flip=True, uniform_dequantization=False), device=device(type='cuda'), model=Namespace(attention_type='ddpm', attn_resolutions=[16], beta_max=20.0, beta_min=0.1, ch_mult=[1, 2, 2, 2], conditional=True, conv_size=3, dropout=0.1, ema_rate=0.9999, embedding_type='positional', fir=False, fir_kernel=[1, 3, 3, 1], fourier_scale=16, init_scale=0.0, name='ncsnpp', nf=128, nonlinearity='swish', normalization='GroupNorm', num_res_blocks=8, num_scales=1000, progressive='none', progressive_combine='sum', progressive_input='none', resamp_with_conv=True, resblock_type='biggan', scale_by_sigma=False, sigma_max=50, sigma_min=0.01, skip_rescale=True), optim=Namespace(beta1=0.9, eps=1e-08, grad_clip=1.0, lr=0.0002, optimizer='Adam', warmup=5000, weight_decay=0), sampling=Namespace(corrector='none', method='pc', n_steps_each=1, noise_removal=True, predictor='euler_maruyama', probability_flow=False, snr=0.16), training=Namespace(continuous=True, n_iters=950001, reduce_mean=True, sde='vpsde'))
Traceback (most recent call last):
  File "eval_sde_adv.py", line 322, in <module>
    robustness_eval(args, config)
  File "eval_sde_adv.py", line 224, in robustness_eval
    model = SDE_Adv_Model(args, config)
  File "eval_sde_adv.py", line 47, in __init__
    self.runner = RevGuidedDiffusion(args, config, device=config.device)
  File "/workspace/runners/diffpure_sde.py", line 181, in __init__
    restore_checkpoint(f'{model_dir}/checkpoint_8.pth', state, device)
  File "/workspace/runners/diffpure_sde.py", line 43, in restore_checkpoint
    loaded_state = torch.load(ckpt_dir, map_location=device)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 764, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: unpickling stack underflow

How can I adjust the noise level?

Thanks for your great work! You mention different levels of noise in your paper, so I wonder which parameter corresponds to t = 0.1, 0.2, etc. Thanks!

Question about diffusion times.

Hi! I have some questions about the variable 'diffusion times'.

  1. The variable name confuses me: does it refer to the round of the attack iteration, or something else?
  2. My program reaches x_adv_sde = adversary_sde.run_standard_evaluation(x_val, y_val, bs=adv_batch_size) with diffusion times: 1245, and the count is still rising. When will it finish?

Backprop through SDE

Hi, thank you for sharing the code! I wonder if you could point me to the code that computes the gradient of the SDE by solving the augmented SDE (Sec 3.2, Proposition 3.3).

I can see that https://github.com/NVlabs/DiffPure/blob/master/bpda_eot/bpda_eot_attack.py#L98 returns the grads w.r.t. the purified image, and that the resulting attack_grad is used directly in https://github.com/NVlabs/DiffPure/blob/master/bpda_eot/bpda_eot_attack.py#L86 to update the adversarial images. Maybe I missed something, but I did not see the function that solves the augmented SDE (Eq. 6).

Thank you for your help!
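For reference, the adjoint trick described in Sec 3.2 is what torchsde's sdeint_adjoint implements (torchsde is among the dependencies installed above): gradients are obtained by solving an augmented SDE backwards in time instead of differentiating through stored solver states. A self-contained toy sketch, with a stand-in score function rather than a pretrained model:

# Hedged sketch of adjoint gradients through a reverse-time SDE with torchsde.
# `score` is a toy stand-in (score of a standard Gaussian), not a pretrained
# diffusion model. The diffusion term is state-independent, so the Ito and
# Stratonovich interpretations coincide for this toy SDE.
import torch
import torchsde

class ToyReverseSDE(torch.nn.Module):
    noise_type = 'diagonal'
    sde_type = 'stratonovich'

    def __init__(self, beta=0.1):
        super().__init__()
        self.beta = beta

    def score(self, y, t):
        return -y  # score of N(0, I)

    def f(self, t, y):
        # Reverse-time drift: -0.5 * beta * y - beta * score(y, t)
        return -0.5 * self.beta * y - self.beta * self.score(y, t)

    def g(self, t, y):
        # Constant diffusion coefficient sqrt(beta)
        return torch.full_like(y, self.beta ** 0.5)

sde = ToyReverseSDE()
y0 = torch.randn(4, 8, requires_grad=True)
ts = torch.tensor([0.0, 0.1])

# sdeint_adjoint computes gradients by solving the augmented (adjoint) SDE
# backwards -- no full trajectory of solver states is stored.
ys = torchsde.sdeint_adjoint(sde, y0, ts, adjoint_params=(),  # toy has no params
                             method='midpoint', dt=0.01)
ys[-1].sum().backward()
print(y0.grad.shape)  # gradient of the output w.r.t. the initial state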

ValueError: attacks_to_run will be overridden unless you use version='custom'

I just ran:

python3 eval_sde_adv.py --exp ./exp_results --config cifar10.yml \
    -i cifar10-robust_adv-$t-eps$adv_eps-64x1-bm0-t0-end1e-5-cont-eot20 \
    --t 100 \
    --adv_eps 0.031373 \
    --adv_batch_size 128 \
    --num_sub 64 \
    --domain cifar10 \
    --classifier_name cifar10-wideresnet-28-10 \
    --seed 121 \
    --data_seed 6 \
    --diffusion_type ode \
    --score_type score_sde \
    --attack_version rand \
    --eot_iter 20

and got this error:

raise ValueError("attacks_to_run will be overridden unless you use version='custom'")
ValueError: attacks_to_run will be overridden unless you use version='custom'

I also changed 'rand' to 'standard' and got the same error.

run_cifar_rand_inf.sh robust accuracy 0%

Hi, thanks for the great contribution. I'm trying to reproduce the results of the paper (Table 6).

I tried AutoAttack rand Linf using the following script:

bash run_cifar_rand_inf.sh 0 0

In the experiment logs, the robust accuracy always comes back as 0%.

Am I doing something wrong?

Thanks!

Building container fails

The first installation command fails for me when building the Docker container. The error can be recreated by cloning the repo, cd-ing into it, and running:

docker build -f diffpure.Dockerfile --tag=diffpure:0.0.1 .

I've attached a text file with the build output: build_output.txt

I've tried troubleshooting these errors by removing the version numbers from the pip install command in the Dockerfile, like this:

RUN pip install numpy \
    pyyaml \
    wheel \
    scipy \
    torch \
    torchvision \
    pillow \
    matplotlib \
    tqdm \
    tensorboardX \
    seaborn \
    pandas \
    requests \
    xvfbwrapper \
    torchdiffeq \
    timm \
    foolbox \
    torchsde \
    git+https://github.com/RobustBench/robustbench.git

but then I get different errors, attached here: build_output_versions_removed.txt

Do any of these errors seem familiar? Do you know how to fix them? Any help would be appreciated.

I did notice that robustbench was recently updated. Is there a specific version of robustbench I should be using?

Regarding run_cifar_rand_inf.sh error (_pickle.UnpicklingError: invalid load key, '<'.)

from robustbench.utils import load_model

model = load_model(model_name='Standard', dataset='cifar10', threat_model='Linf')

This downloads Standard.pt, but I also get this error:

_pickle.UnpicklingError: invalid load key, '<'.

This happens because the content of the "Standard.pt" file is actually an HTML page used to download the model weights from Google Drive, rather than the weights themselves. Change the file extension to ".html", open it with a web browser, and download the weights from there. In my case this produced "natural.pt.tar", which should then be renamed to "Standard.pt", since that is the file the program expects.


How should I optimize this speed?

Hello, I would like to know how long it takes to defend a single image. It feels relatively slow; how can I speed this up?

Questions about the fixed subset.

Hi, I have some questions about the fixed subset size used for CIFAR-10 and ImageNet.
The paper says a fixed subset of 512 images, randomly sampled from the test set, was used for CIFAR-10 and ImageNet.
In the README, you suggest 8 data seeds for AutoAttack on CIFAR-10, 5 seeds for BPDA on CIFAR-10, and 16 seeds for AutoAttack on ImageNet.
However, in the run_scripts folder the num_sub variables are set to 64, 200, and 16, respectively.
Only 64x8 = 512 matches the paper; 200x5 and 16x16 do not.

Using SDE to purify ImageNet images throws an error

[screenshot of the error message]

x has shape (1, 3, 256, 256), but the output score of score_fn has shape (1, 6, 256, 256).

What is wrong with my settings? I did not change the default .yml file or the model checkpoint.

[screenshot of the parameter settings]

Question regarding diffusion times when attacking one batch

Thanks for your great work. I found the inference times in Table 14, but the actual time to run one batch of 64 images with cifar-rand-Linf-rn50.sh is very long (about 118754.8 seconds for AutoAttack apgd-dlr). I ran the experiment with 4 A100 GPUs, yet the time reported in Table 14 on a V100 is about 10 seconds per image. Did you also encounter such long run times? Is there a mistake somewhere? (Attachment: slurm-56811695.txt, plus a screenshot.)

Question about AutoAttack settings

Hi, thank you for sharing the code and models! I found this in the code:

if attack_version == 'standard':
    adversary_resnet = AutoAttack(classifier, norm=args.lp_norm, eps=args.adv_eps,

where the version and attacks_to_run parameters of the AutoAttack class are assigned together. However, the AutoAttack package does not allow this:
https://github.com/fra31/auto-attack/blob/b7f560b229145e6e90613cd3ce98cad6a94bd623/autoattack/autoattack.py#L27
I wonder if you used a different version of AutoAttack.
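For reference, a hedged example of the constraint in question: recent AutoAttack releases only honor a user-supplied attacks_to_run together with version='custom'. Here classifier, x, and y are placeholders:

# Hedged example: passing attacks_to_run requires version='custom' in
# recent AutoAttack releases. `classifier`, `x`, `y` are placeholders.
from autoattack import AutoAttack

adversary = AutoAttack(classifier, norm='Linf', eps=8.0 / 255,
                       version='custom',
                       attacks_to_run=['apgd-ce', 'apgd-t'])
x_adv = adversary.run_standard_evaluation(x, y, bs=64)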

Customized for different attacks

Greetings,

I really enjoyed reading your paper! Thank you for sharing the code!

I'm wondering if it would be possible to add some instructions on how to apply DiffPure to customized attacks (white-box/black-box, targeted/untargeted)?
