
adversarial-attacks-pytorch's Introduction

Welcome,

  • 🎓 Postdoctoral Researcher at Seoul National University
  • 📘 Machine Learning & Deep Learning
  • 🔥 Adversarial Robustness & Generalization

adversarial-attacks-pytorch's People

Contributors

buhua-liu, framartin, freed-wu, harry24k, hkunzhe, ignaciogavier, jaryp, jatanloya, khalooei, lukasstruppek, noppelmax, rikonaka, tao-bai, yijiangpang, zhijin-ge, zhuangzi926


adversarial-attacks-pytorch's Issues

Question about normalization during the inference (attack) phase

Hi author(s),

I have a question about the normalize operation with torchattacks.

During the training phase, I apply normalization to the training data, for example:


transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

Therefore, I need to perform the same normalization during the inference (test, attack) phase.
However, in torchattacks, the input data must be in the range [0, 1].
How should I deal with this problem?

Best wishes,
Gavin
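A common workaround for this (a sketch of the usual pattern, not an official torchattacks API): keep the dataloader output in [0, 1] and move the normalization inside the model, so the attack itself operates in [0, 1] space. Here `model` stands for the questioner's trained network:

import torch
import torch.nn as nn
import torchattacks

class Normalize(nn.Module):
    # applies channel-wise (x - mean) / std as the first "layer" of the model
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer('mean', torch.tensor(mean).view(1, 3, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, 3, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std

wrapped = nn.Sequential(
    Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    model,
)
atk = torchattacks.PGD(wrapped, eps=8/255, alpha=2/255, steps=4)

This way eps is still measured in raw pixel space, and the adversarial images remain valid images in [0, 1].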

Use SciPy >= 1.8.0

It is recommended to use SciPy >= 1.8.0 for compatibility with other algorithm libraries. Older versions of SciPy cause the following import to fail:

# D:\anaconda3\envs\nn\Lib\site-packages\torchattacks\attacks\_differential_evolution.py
from scipy.optimize.optimize import _status_message
==> from scipy.optimize._optimize import _status_message

ART (the Adversarial Robustness Toolbox) has the same problem and is being fixed.
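A version-tolerant shim (a sketch) that tries the SciPy >= 1.8 location first and falls back to the pre-1.8 path:

# works on both old and new SciPy layouts
try:
    from scipy.optimize._optimize import _status_message   # SciPy >= 1.8
except ImportError:
    from scipy.optimize.optimize import _status_message    # SciPy < 1.8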

Can the model being attacked be trained with image augmentation?

Hi Harry,

Thanks for your awesome lib. When using your code, I found that the attack module requires the input images to be in the range [0, 1]. Does that mean the model to be attacked has to be trained on inputs in the range [0, 1]? What if I have a model that was trained on augmented images? Is there a way to include the image transformation in the attack process?

Best,
Hao

Be careful with training mode

In some use cases, when the training mode of the given model is changed after the Attack object has been initialized, the training mode will later be changed back inadvertently. See the following toy example:

import torchattacks
# initialize model
print(model.training)  # model.training = True
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)
model.eval()  # model.training = False
adversarial_images = atk(images, labels)
print(model.training)  # model.training = True

As the documentation says:

It temporarily changes the model’s training mode to test by .eval() only during an attack process.

The current implementation records the training attribute from the model at the time the Attack object is initialized, rather than from the model's current state. I think it would be better to add this to the precautions, or this PR will fix the issue.

The attack accuracy on CIFAR-10

Hi! I trained a ResNet-18 on CIFAR-10 and used FGSM to generate adversarial images.
But I found that the accuracy didn't decrease significantly as eps was set to (2/255, 4/255, 8/255).
The accuracy on clean images is 93%. With FGSM, the accuracy is 55.8% for eps=2/255, 55.3% for eps=4/255, and 53.5% for eps=8/255.
I would expect the accuracy to drop drastically as eps grows, so this result leaves me a little puzzled.

AutoAttack

Hi, I am confused about the multi-attack and targeted APGDT. (1) For the standard AutoAttack, are all images attacked by the 4 attacks respectively, with the average robust accuracy calculated afterwards? (2) For targeted APGDT, how is the target label found? I see that the number of target labels equals the total number of classes minus one for CIFAR-10. The question is about this line:

y_target = output.sort(dim=1)[1][:, -self.target_class]

If I want to use it for ImageNet, should I set n_target_classes = 999?

Looking forward to your help! Thanks!
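For reference, here is how that indexing behaves in isolation (a small illustration; `output` is a hypothetical logits tensor): the sort is ascending, so index -1 is the top prediction, -2 the runner-up, and so on.

import torch

output = torch.tensor([[0.1, 0.7, 0.2]])    # logits for 3 classes
sorted_idx = output.sort(dim=1)[1]           # tensor([[0, 2, 1]]), ascending by score
print(sorted_idx[:, -2])                     # tensor([2]): the 2nd-highest class
# Under this scheme the target_class offset runs from 2 (the runner-up) upward,
# so with 1000 ImageNet classes there are at most 999 candidate targets.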

Device mismatch problem

Hello Harry,
Thanks for your code.
Here’s the situation I ran into:
my server has two GPUs, cuda:0 and cuda:1. However, if I move my model to cuda:1, the attack changes it to cuda:0 instead.

Code is here:
torchattacks/attack.py line 20
self.device = torch.device("cuda" if next(model.parameters()).is_cuda else "cpu")

Would it be okay to change it to:
self.device = next(model.parameters()).device

I tested the code with torch==1.4.0.
This change keeps my model and datasets on the same device.
Thanks a lot.

Ruqi Bai

Cannot set requires_grad in deepfool

The DeepFool method fails due to the error below:

  File "/dccstor/ddig/jbtang/tools/anaconda3/envs/python3/lib/python3.6/site-packages/torchattacks/attacks/deepfool.py", line 29, in forward
    image.requires_grad = True
RuntimeError: you can only change requires_grad flags of leaf variables.
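A caller-side workaround (a sketch): the error means the tensor handed to the attack is not a leaf variable (for example, it is the output of .to(device) or some other operation). Re-creating it as a fresh leaf first avoids the RuntimeError:

image = image.clone().detach()   # a new leaf tensor, cut from any prior graph
image.requires_grad = True       # now permitted, since image is a leaf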

Question about adversarial training

If we instantiate the attack used for adversarial training outside the training loop, as in the MNIST adversarial training demo, will the attack's model parameters be updated along with training?
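A quick way to convince yourself (a sketch): the Attack object keeps a reference to the model rather than a copy, so the weights updated by the optimizer are exactly the weights the attack uses on each call.

import torchattacks

atk = torchattacks.PGD(model, eps=0.3, alpha=0.1, steps=7)
print(atk.model is model)   # True: same object, so training updates are visible to the attack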

Small implementation error in the DeepFool attack

I am quite new to adversarial attacks, so the current behaviour might be intended.

I currently use your DeepFool implementation as an estimate for the distance to the decision boundary of a classifier, as suggested in the paper: https://arxiv.org/abs/2002.01810v1

If the model already predicts a wrong label for a given input sample, then the perturbation returned by the DeepFool attack is the original image, whereas I think that returning torch.zeros_like(image) would be formally correct.

Changing this would not only correct the estimate of the distance to the decision boundary, but would also avoid doubling the final adversarial sample (image + perturbation) when the classifier misclassifies from the start.

Question about the PGD perturbation

Hello,

Firstly thanks for your great work!

I have a question about eps in atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)

The documentation shows: eps (float): maximum perturbation. (DEFAULT: 0.3).

So does eps mean the maximal L-infinity norm of adv - img? I've run some experiments with PGD, and the L-infinity norm I calculated significantly exceeded the eps value.

So I'd really like to know: did I make a mistake in my calculation, or does eps, the maximal perturbation, mean something different?
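A sanity check worth running (a sketch, assuming `images` is in [0, 1] and `adv_images = atk(images, labels)`): the per-image L-infinity distance should not exceed eps beyond float error.

# per-image L-infinity norm over all channels and pixels
linf = (adv_images - images).abs().flatten(1).max(dim=1)[0]
print(linf.max().item())   # expected <= eps plus a tiny numerical tolerance

If the tensors being compared were normalized with a mean/std transform first, the same pixel budget gets scaled by 1/std per channel, which can make the measured norm appear to exceed eps.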

Advice for calculating the time in demo code

Firstly, thanks for the nice contributions.

I have a little advice for your demo code :

https://github.com/Harry24k/adversarial-attacks-pytorch/blob/master/demos/White%20Box%20Attack%20(ImageNet).ipynb

In the third part, 3. Adversarial Attack, inside the loop:

for images, labels in data_loader:
    start = time.time()
    ...

I think 'start' should be placed outside of the loop. I know the demo has only one image in its dataset, but when the code is applied to common datasets with a batch size larger than 1, this may lead to some confusion.

CW Attack not working as desired

I am using the targeted CW attack on a convolutional neural network trained on FashionMNIST. However, I do not see any change in accuracy after the CW attack. I have tried c = 1, 10, and 100; different learning rates such as 0.001 and 0.0001; and different step counts such as 100 and 1000.
However, targeted FGSM and PGD attacks work fine.

I need your support to help me figure out why the CW attack isn't working.
The file with my code is attached:
FMNIST_CNN.txt
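One thing worth checking (a guess about a common pitfall, not a diagnosis of the attached code): torchattacks attacks run untargeted by default, so a targeted CW needs the targeted mode switched on explicitly. A sketch, assuming the set_mode_targeted_by_label API of recent torchattacks versions:

import torchattacks

atk = torchattacks.CW(model, c=1, kappa=0, steps=1000, lr=0.01)
atk.set_mode_targeted_by_label()        # labels passed to the call are treated as targets
adv_images = atk(images, target_labels)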

No module named 'scipy._lib.six'

When importing torchattacks with import torchattacks, I got an error stating No module named 'scipy._lib.six' with a freshly installed SciPy 1.6.3.
Installing SciPy 1.4 resolves the problem.

How can I attack an image that has been normalized?

How can I add a perturbation to an image that is normalized by the dataloader? Many models were trained on normalized inputs, so I must keep this preprocessing. I have seen this comment in the code:

images: :math:`(N, C, H, W)` where N = number of batches, C = number of channels, H = height and W = width. It must have a range [0, 1].

Does that mean normalization is not supported yet?

Waiting for the update of JSMA attack method

Hello, I'm a junior student, and I've been learning about adversarial examples recently. I was very lucky to find this project; it has helped me a lot.
I'm just wondering whether this project will support the JSMA attack method.
Looking forward to it!

CIFAR-10 attacks question

Standard accuracy: without adversarial training the accuracy is 40%; with adversarial training it is only 30%.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author: zengxiaohui
# Datetime: 8/26/2021 8:56 AM
# @File: test_FGSM
import torch
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import torch.nn as nn
from tqdm import tqdm

from python_developer_tools.cv.utils.torch_utils import init_seeds
from python_developer_tools.cv.train.对抗训练.adversarialattackspytorchmaster.torchattacks import *  # 对抗训练 = "adversarial training"

transform = transforms.Compose(
    [transforms.ToTensor(),# ToTensor : [0, 255] -> [0, 1]
     # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
     ])

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')


def shufflenet_v2_x0_5(nc, pretrained):
    model_ft = torchvision.models.shufflenet_v2_x0_5(pretrained=pretrained)
    num_ftrs = model_ft.fc.in_features
    model_ft.fc = nn.Linear(num_ftrs, nc)
    return model_ft


if __name__ == '__main__':
    root_dir = "/home/zengxh/datasets"
    # os.environ['CUDA_VISIBLE_DEVICES'] = '1'
    epochs = 50
    batch_size = 1024
    num_workers = 8
    num_classes = 10  # CIFAR-10 class count; avoids shadowing the `classes` name tuple above

    init_seeds(1024)

    trainset = torchvision.datasets.CIFAR10(root=root_dir, train=True, download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers,
                                              pin_memory=True)

    testset = torchvision.datasets.CIFAR10(root=root_dir, train=False, download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=num_workers)

    model = shufflenet_v2_x0_5(num_classes, True)
    model.cuda()
    model.train()

    criterion = nn.CrossEntropyLoss()
    # SGD with momentum
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

    atks = [
        FGSM(model, eps=8 / 255),
        BIM(model, eps=8 / 255, alpha=2 / 255, steps=100),
        RFGSM(model, eps=8 / 255, alpha=2 / 255, steps=100),
        CW(model, c=1, lr=0.01, steps=100, kappa=0),
        PGD(model, eps=8 / 255, alpha=2 / 255, steps=100, random_start=True),
        PGDL2(model, eps=1, alpha=0.2, steps=100),
        EOTPGD(model, eps=8 / 255, alpha=2 / 255, steps=100, eot_iter=2),
        FFGSM(model, eps=8 / 255, alpha=10 / 255),
        TPGD(model, eps=8 / 255, alpha=2 / 255, steps=100),
        MIFGSM(model, eps=8 / 255, alpha=2 / 255, steps=100, decay=0.1),
        VANILA(model),
        GN(model, sigma=0.1),
        APGD(model, eps=8 / 255, steps=100, eot_iter=1, n_restarts=1, loss='ce'),
        APGD(model, eps=8 / 255, steps=100, eot_iter=1, n_restarts=1, loss='dlr'),
        APGDT(model, eps=8 / 255, steps=100, eot_iter=1, n_restarts=1),
        FAB(model, eps=8 / 255, steps=100, n_classes=10, n_restarts=1, targeted=False),
        FAB(model, eps=8 / 255, steps=100, n_classes=10, n_restarts=1, targeted=True),
        Square(model, eps=8 / 255, n_queries=5000, n_restarts=1, loss='ce'),
        AutoAttack(model, eps=8 / 255, n_classes=10, version='standard'),
        OnePixel(model, pixels=5, inf_batch=50),
        DeepFool(model, steps=100),
        DIFGSM(model, eps=8 / 255, alpha=2 / 255, steps=100, diversity_prob=0.5, resize_rate=0.9)
    ]

    bestatk = None
    bestRobustAcc = 0
    for atk in atks:
        print("-" * 70)
        print(atk)
        correct = 0
        model.eval()
        for j, (images, labels) in tqdm(enumerate(trainloader)):
            adv_images = atk(images, labels)
            outputs = model(adv_images.cuda())
            _, predicted = torch.max(outputs.data, 1)
            correct += (predicted.cpu() == labels).sum()
        bestRobustAcc_now = correct / len(trainset)
        print('Robust Accuracy: %.4f %%' % (100 * bestRobustAcc_now))
        if bestRobustAcc < bestRobustAcc_now:
            bestatk = atk
            bestRobustAcc = bestRobustAcc_now

    for epoch in range(epochs):
        train_loss = 0.0
        for i, (inputs, labels) in tqdm(enumerate(trainloader)):
            inputs = bestatk(inputs, labels).cuda()  # train with the attack selected above
            labels = labels.cuda()

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward
            outputs = model(inputs)
            # loss
            loss = criterion(outputs, labels)
            # backward
            loss.backward()
            # update weights
            optimizer.step()

            # print statistics
            train_loss += loss

        scheduler.step()
        print('%d/%d loss: %.3f' % (epoch + 1, epochs, train_loss / len(trainset)))

    # Standard Accuracy
    correct = 0
    model.eval()
    for j, (images, labels) in tqdm(enumerate(testloader)):
        outputs = model(images.cuda())
        _, predicted = torch.max(outputs.data, 1)
        correct += (predicted.cpu() == labels).sum()
    print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / len(testset)))

    # Robust Accuracy
    correct = 0
    model.eval()
    bestatk.set_training_mode(training=False)
    for j, (images, labels) in tqdm(enumerate(testloader)):
        images = bestatk(images, labels).cuda()
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        correct += (predicted.cpu() == labels).sum()
    print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / len(testset)))

Using the Carlini-Wagner L_inf condition

Thank you for the wonderful code. Going through the repo, I realized there is no implementation of the Carlini-Wagner L_inf attack. I was hoping to use the repo and also include the L_inf condition. Could you please help me with a few pointers on how to code it?
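For reference, the original paper's L_inf formulation replaces the L2 penalty with a hinge on per-pixel perturbations above a threshold tau, which is decreased across successive runs. A minimal sketch of a single run, assuming a model that maps [0, 1] images to logits (this is illustrative, not a torchattacks implementation):

import torch

def cw_linf_single_run(model, x, y, c=1.0, tau=1.0, steps=100, lr=0.005):
    # delta is the perturbation being optimized
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        logits = model(adv)
        correct = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # highest logit among the wrong classes
        wrong = logits.scatter(1, y.unsqueeze(1), float('-inf')).max(dim=1)[0]
        f = (correct - wrong).clamp(min=0)                 # 0 once misclassified (kappa = 0)
        penalty = (delta.abs() - tau).clamp(min=0).sum()   # hinge on |delta_i| > tau
        loss = c * f.sum() + penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    # the full algorithm then lowers tau and re-runs while the attack still succeeds
    return (x + delta).detach().clamp(0, 1)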

Supporting nn.BCEWithLogitsLoss()?

I tried a multi-label classification problem with torchattacks.FGSM(net, eps=0.1) [screenshot of the resulting error attached].

I don't know if this makes sense, but adding support for different types of loss functions, such as nn.BCEWithLogitsLoss(), might be good?
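For the multi-label case, a sketch of a one-step FGSM-style attack using nn.BCEWithLogitsLoss (a custom function, not an existing torchattacks option; `targets` is assumed to be a multi-hot tensor the same shape as the model output):

import torch
import torch.nn as nn

def fgsm_multilabel(model, images, targets, eps=0.1):
    # build a fresh leaf so gradients flow to the pixels
    images = images.clone().detach().requires_grad_(True)
    loss = nn.BCEWithLogitsLoss()(model(images), targets.float())
    grad = torch.autograd.grad(loss, images)[0]
    # one signed-gradient step, clipped back to valid pixel range
    return (images + eps * grad.sign()).clamp(0, 1).detach()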

Pytorch moved `zero_gradients` out of gradcheck (apparently?)

Installing torchattacks from scratch in a fresh HPC environment, I hit the error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../.conda/envs/stabn/lib/python3.8/site-packages/torchattacks/__init__.py", line 17, in <module>
    from .attacks.fab import FAB
  File ".../.conda/envs/stabn/lib/python3.8/site-packages/torchattacks/attacks/fab.py", line 12, in <module>
    from torch.autograd.gradcheck import zero_gradients
ImportError: cannot import name 'zero_gradients' from 'torch.autograd.gradcheck' (.../.conda/envs/stabn/lib/python3.8/site-packages/torch/autograd/gradcheck.py)

Notably, when investigating the gradcheck.py file, zero_gradients is no longer defined within it, and a significant refactor was mentioned in the release notes for PyTorch 1.9, published 17 days ago.

I think I can just use the LTS release, which seems not to have this change, but it seemed notable enough to raise as an issue.
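A workaround several users resort to (a sketch): re-define the removed helper yourself, roughly as it existed before the refactor, and patch fab.py to use it instead of the import.

import collections.abc
import torch

def zero_gradients(x):
    # recursively zero .grad on a tensor or an iterable of tensors
    if isinstance(x, torch.Tensor):
        if x.grad is not None:
            x.grad.detach_()
            x.grad.zero_()
    elif isinstance(x, collections.abc.Iterable):
        for elem in x:
            zero_gradients(elem)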

Error running the CIFAR-10 demo ipynb

Hello,

I am running the CIFAR-10 demo and I get this error once I load the saved model.pth checkpoint:

RuntimeError                              Traceback (most recent call last)
in <module>
----> 1 model.load_state_dict(torch.load("/home/jovyan/.cache/torch/checkpoints/resnext50_32x4d-7cdf4587.pth"))
      2 model = model.eval()

/srv/conda/envs/notebook/lib/python3.7/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
    828         if len(error_msgs) > 0:
    829             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
--> 830                 self.__class__.__name__, "\n\t".join(error_msgs)))
    831         return _IncompatibleKeys(missing_keys, unexpected_keys)
    832

RuntimeError: Error(s) in loading state_dict for Target:
    Missing key(s) in state_dict: "conv_layer.0.weight", "conv_layer.0.bias", "conv_layer.1.weight", "conv_layer.1.bias", "conv_layer.4.weight", "conv_layer.4.bias", "conv_layer.5.weight", "conv_layer.5.bias", "conv_layer.7.weight", "conv_layer.7.bias", "conv_layer.8.weight", "conv_layer.8.bias", "conv_layer.11.weight", "conv_layer.11.bias", "conv_layer.12.weight", "conv_layer.12.bias", "conv_layer.14.weight", "conv_layer.14.bias", "conv_layer.15.weight", "conv_layer.15.bias", "conv_layer.18.weight", "conv_layer.18.bias", "conv_layer.19.weight", "conv_layer.19.bias", "conv_layer.21.weight", "conv_layer.21.bias", "conv_layer.22.weight", "conv_layer.22.bias", "conv_layer.24.weight", "conv_layer.24.bias"

Thank you in advance :)

Device error

Hi Harry

I just tried the demo code; however, it pops up the following error:

torchattacks/attack.py", line 26, in __init__
    self.device = next(model.parameters()).device
StopIteration

Regarding the attack mode in the Attack class

Hi Harry, I saw the code below and felt a little confused.
When performing untargeted attacks, self._targeted = 1:

cost = self._targeted*loss(outputs, labels)
grad = torch.autograd.grad(cost, images,retain_graph=False, create_graph=False)[0]
adv_images = images + self.eps*grad.sign()

Wouldn't it be easier to understand to set self._targeted = -1 for untargeted attacks
and modify the corresponding lines as follows?

cost = self._targeted*loss(outputs, labels)
grad = torch.autograd.grad(cost, images,retain_graph=False, create_graph=False)[0]
adv_images = images - self.eps*grad.sign()

What does w.detach_() do in cw.py?

Hi, I'm a newbie just started to study adversarial examples.

I'm curious what w.detach_() (line 76) does in torchattacks/attacks/cw.py.

I thought torch should record all operations applied to w.
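What detach_() does in isolation (a toy illustration, separate from the cw.py context): it cuts the tensor out of the graph that produced it, in place, turning it into a leaf, so the operations that created it are no longer tracked through it.

import torch

a = torch.ones(2, requires_grad=True)
w = a * 2                              # w is a non-leaf node in a's graph
w.detach_()                            # in place: w becomes a leaf, detached from a
print(w.is_leaf, w.requires_grad)      # True False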

Weird colors in the output of my attack

I am using the PyTorch CIFAR-10 dataset with the following transformations:

transform_train = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

The clean images look like this: [screenshot of clean CIFAR-10 images]
The attacked images, with attack = CW(net, c=0.0004, kappa=0, steps=10, lr=0.01), look like this: [screenshot of attacked images]

From the original paper, we expect the images to show little to no visual difference, but I am seeing a big color offset. Do you have any hint about what I could be doing wrong in my code?

Thank you for the library, it is of GREAT HELP.

How to Change Distance Measure in FGSM

Hi Harry,

I have run FGSM with the default Linf distance measure. I can see there is a PGDL2 class; however, I couldn't find an FGSML2. How can I change FGSM to the L2 norm?
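Since the issue notes there is no FGSML2 class, here is a sketch of the single-step L2 variant (often called FGM), where the gradient is scaled by its L2 norm instead of taking its sign (a custom function, not a torchattacks API):

import torch
import torch.nn as nn

def fgm_l2(model, images, labels, eps=1.0):
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # per-image L2 norm, reshaped for broadcasting over (N, C, H, W) inputs
    norm = grad.flatten(1).norm(p=2, dim=1).clamp(min=1e-12).view(-1, 1, 1, 1)
    return (images + eps * grad / norm).clamp(0, 1).detach()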

Reason for cloning the images and labels ?

Hello,
thank you for the library.

I was wondering why you clone and detach the images as the first step of many attacks.
Cloning doubles the amount of GPU memory used, which reduces speed.
Wouldn't zeroing the images' gradient be sufficient?

A problem with the torchattacks package

Hello, I tried to apply this package to my model and it reported the problem below. Could you give me any suggestions for fixing it? Thank you for your time.

  File "white_box_attack.py", line 1070, in <module>
    adv_images = atk(images, labels)
  File "/home/wuman/.local/lib/python3.6/site-packages/torchattacks/attack.py", line 249, in __call__
    images = self.forward(*input, **kwargs)
  File "/home/wuman/.local/lib/python3.6/site-packages/torchattacks/attacks/pgd.py", line 62, in forward
    retain_graph=False, create_graph=False)[0]
  File "/home/wuman/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 225, in grad
    inputs, allow_unused, accumulate_grad=False)
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

Normalize question

Hi, I have a question about normalization. Your demo White Box Attack (ImageNet).ipynb uses the Normalize class that you designed. Is it the same as transforms.Normalize?

If they are different, what are the differences?

Top-K Attack?

From an initial search through the code and documentation, it's not clear to me whether a top-k attack (i.e., the true class should not occur within the top-k predictions) is implemented.
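Whether or not the library ships one, the top-k success criterion itself is short enough to sketch (`outputs`, `labels`, and `k` are assumed names): the true class must be absent from the k highest-scoring predictions.

# (N, k) indices of the k highest-scoring classes per image
topk_preds = outputs.topk(k, dim=1).indices
# True where the ground-truth label appears nowhere in the top k
success = (topk_preds != labels.unsqueeze(1)).all(dim=1)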

AutoAttack not working properly

It returns an error during the computation of a pairwise distance, because the batch size of the adversarial images isn't the same as that of the original input. I am using a batch size of 64. It seems to work only with a batch size of 1.

/opt/conda/lib/python3.7/site-packages/torchattacks/attack.py in __call__(self, *input, **kwargs)
    321             self.model.eval()
    322 
--> 323         images = self.forward(*input, **kwargs)
    324 
    325         if given_training:

/opt/conda/lib/python3.7/site-packages/torchattacks/attacks/autoattack.py in forward(self, images, labels)
     79         images = images.clone().detach().to(self.device)
     80         labels = labels.clone().detach().to(self.device)
---> 81         adv_images = self.autoattack(images, labels)
     82 
     83         return adv_images

/opt/conda/lib/python3.7/site-packages/torchattacks/attack.py in __call__(self, *input, **kwargs)
    321             self.model.eval()
    322 
--> 323         images = self.forward(*input, **kwargs)
    324 
    325         if given_training:

/opt/conda/lib/python3.7/site-packages/torchattacks/attacks/multiattack.py in forward(self, images, labels)
     49 
     50         for _, attack in enumerate(self.attacks):
---> 51             adv_images = attack(images[fails], labels[fails])
     52 
     53             outputs = self.model(adv_images)

/opt/conda/lib/python3.7/site-packages/torchattacks/attack.py in __call__(self, *input, **kwargs)
    321             self.model.eval()
    322 
--> 323         images = self.forward(*input, **kwargs)
    324 
    325         if given_training:

/opt/conda/lib/python3.7/site-packages/torchattacks/attacks/apgd.py in forward(self, images, labels)
     59         images = images.clone().detach().to(self.device)
     60         labels = labels.clone().detach().to(self.device)
---> 61         _, adv_images = self.perturb(images, labels, cheap=True)
     62 
     63         return adv_images

/opt/conda/lib/python3.7/site-packages/torchattacks/attacks/apgd.py in perturb(self, x_in, y_in, best_loss, cheap)
    240                     if ind_to_fool.numel() != 0:
    241                         x_to_fool, y_to_fool = x[ind_to_fool].clone(), y[ind_to_fool].clone()
--> 242                         best_curr, acc_curr, loss_curr, adv_curr = self.attack_single_run(x_to_fool, y_to_fool)
    243                         ind_curr = (acc_curr == 0).nonzero().squeeze()
    244                         #

/opt/conda/lib/python3.7/site-packages/torchattacks/attacks/apgd.py in attack_single_run(self, x_in, y_in)
    111         for _ in range(self.eot_iter):
    112             with torch.enable_grad():
--> 113                 logits = self.model(x_adv) # 1 forward pass (eot_iter = 1)
    114                 loss_indiv = criterion_indiv(logits, y)
    115                 loss = loss_indiv.sum()

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/tmp/ipykernel_33/3929415858.py in forward(self, img1)
     49         emb2 = self.get_embedding(self.img2)
     50 
---> 51         dist = F.pairwise_distance(emb1, emb2, keepdim=True)
     52         sim = 1 - F.cosine_similarity(emb1, emb2, dim=1).unsqueeze(dim=1)
     53 

RuntimeError: The size of tensor a (26) must match the size of tensor b (64) at non-singleton dimension 0

Cannot save adversarial examples when using MultiAttack

Hi,

I encountered errors when using MultiAttack.
Here is my code segment:

# cifar100_eval_loader = ... (initialize my dataloader)
eps = 8 / 255
alpha = 2 / 255
atk1 = TIFGSM(srcnet, eps=eps, alpha=alpha, steps=40)
atk2 = DIFGSM(srcnet, eps=eps, alpha=alpha, steps=40)
atk = MultiAttack([atk1, atk2])
atk.set_return_type('int')  # Save as integer.
atk.save(data_loader=cifar100_eval_loader, save_path=save_path, verbose=True)

However, the following error message occurs:

- Save complete!
Traceback (most recent call last):
  File "/tmp2/attack/src/adv_attack.py", line 162, in <module>
    atk.save(data_loader=cifar100_eval_loader, save_path=save_path, verbose=True)
  File "/home/tsunghan/miniconda3/envs/SPML/lib/python3.9/site-packages/torchattacks/attacks/multiattack.py", line 105, in save
    rob_acc, l2, elapsed_time = super().save(data_loader, save_path, verbose, return_verbose)
TypeError: cannot unpack non-iterable NoneType object

I wonder whether the MultiAttack.save() method contains a bug.
Thanks!
