advdrop's People

Contributors: rjduan
advdrop's Issues

Y,Cb,Cr channels of the image?

Hi @RjDuan
Thank you for your work.

While creating the components in the following code, the RGB channels of the images are treated as YCbCr channels. If this observation is correct, could you please explain the reason behind renaming the RGB channels to YCbCr channels? Otherwise, shouldn't one actually convert the image from RGB to YCbCr color space first?

components = {'y': images[:,:,:,0], 'cb': images[:,:,:,1], 'cr': images[:,:,:,2]}
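For reference, an actual RGB-to-YCbCr conversion (the BT.601 variant used by JPEG; a sketch for illustration, not code from this repo) applied before splitting the components would look like:

```python
import torch

def rgb_to_ycbcr(images: torch.Tensor) -> torch.Tensor:
    """Convert an NHWC image batch in [0, 1] from RGB to YCbCr (JPEG/BT.601)."""
    r, g, b = images[..., 0], images[..., 1], images[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.stack([y, cb, cr], dim=-1)

images = torch.rand(1, 8, 8, 3)               # stand-in NHWC batch
ycc = rgb_to_ycbcr(images)
components = {'y': ycc[..., 0], 'cb': ycc[..., 1], 'cr': ycc[..., 2]}
```

With this in place, the 'y'/'cb'/'cr' keys would actually refer to luma and chroma planes rather than to raw R, G, B channels.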

Thank you.

Question about DCT

Hi, thanks for your work, it is very interesting. But I have a question about the mathematical definition of the DCT in your paper: what are the meanings of the parameters u, v, k, m in the formula? Looking forward to your reply, thanks!
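For context, in the orthonormal 8×8 block DCT-II used in JPEG-style pipelines, one pair of indices runs over spatial positions and the other over frequencies; which letters play which role depends on the paper's convention, so the following is only an illustrative sketch:

```python
import math
import torch

def dct_matrix(n: int = 8) -> torch.Tensor:
    # Orthonormal DCT-II basis: C[k, m] = a(k) * cos(pi * (2m + 1) * k / (2n)),
    # where k indexes frequency and m indexes spatial position.
    c = torch.empty(n, n)
    for k in range(n):
        a = math.sqrt((1.0 if k == 0 else 2.0) / n)
        for m in range(n):
            c[k, m] = a * math.cos(math.pi * (2 * m + 1) * k / (2 * n))
    return c

C = dct_matrix(8)
block = torch.rand(8, 8)
coeffs = C @ block @ C.t()   # forward 2-D DCT of one 8x8 block
recon = C.t() @ coeffs @ C   # inverse transform recovers the block
```

Because the basis is orthonormal, the inverse transform is just the transpose, which is what makes the round-trip exact up to floating-point error.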

size of image

Hi, I ran your code and saved the adversarial and original images, but I found that the adversarial image file (e.g. 12.5 KB) is larger than the original (e.g. 12.4 KB). Is that reasonable? I then counted the number of distinct colors in the adversarial and original images: the adversarial image has more colors than the original. This seems to contradict the article.
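For anyone wanting to reproduce the color count, one simple way (a sketch, not code from the repo) is to count distinct RGB triples in the decoded image:

```python
import torch

def count_colors(img: torch.Tensor) -> int:
    # img: H x W x C uint8 image; count distinct RGB triples.
    flat = img.reshape(-1, img.shape[-1])
    return torch.unique(flat, dim=0).shape[0]
```

Comparing `count_colors` on the original and the saved adversarial image makes the claim above checkable; note that the file size also depends on how well the encoder compresses the result, not only on the number of colors.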

Values of the elements of Q-table

Hi, as you said in your paper (Section 3.4, Quantization): 'As the quantization table q should be integers during the optimization'. Do you think the data type of the q-table really affects the results? I mean, what would happen if the elements of the q-table were floats? Thanks!
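One way to probe this empirically (a sketch with made-up coefficient values, not the repo's code) is to quantize the same block with an integer and a non-integer table entry and compare the surviving values:

```python
import torch

block = torch.tensor([[55.0, -70.0], [120.0, 10.0]])  # made-up DCT coefficients
q_int = torch.tensor(40.0)    # integer-valued table entry
q_float = torch.tensor(40.5)  # float-valued table entry

drop_int = torch.round(block / q_int) * q_int        # [[40., -80.], [120., 0.]]
drop_float = torch.round(block / q_float) * q_float  # [[40.5, -81.], [121.5, 0.]]
```

The dropped information differs only at the sub-quantization-step level; one practical reason to keep q integer is that the result then remains a valid JPEG quantization table.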

RuntimeError: Boolean value of Tensor with more than one value is ambiguous

Hello @RjDuan! First of all, thank you for your work!
I ran into the following problem while running the code you provided (infod_sample.py) according to the instructions in README.md:

D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Iter:  0
Traceback (most recent call last):
  File "D:\CCCCCCCCCCCCCCCCCCC\Python for PyCharm\AdvDrop\AdvDrop-main\infod_sample.py", line 201, in <module>
    attack = InfoDrop(resnet_model, batch_size=batch_size, q_size=q_size, steps=150, targeted=True)
  File "D:\CCCCCCCCCCCCCCCCCCC\Python for PyCharm\AdvDrop\AdvDrop-main\infod_sample.py", line 55, in __init__
    self.q_tables = {"y": torch.from_numpy(q_ini_table),
  File "D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchattacks\attack.py", line 496, in __setattr__
    for num, value in enumerate(get_all_values(value)):
  File "D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchattacks\attack.py", line 488, in get_all_values
    yield from get_all_values(item, stack)
  File "D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchattacks\attack.py", line 482, in get_all_values
    if (items not in stack):
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
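The failure inside torchattacks' get_all_values can be reproduced in isolation: Python's `in` operator on a list of tensors calls Tensor.__eq__ and then truth-tests the element-wise result (a minimal sketch, independent of the repo's code):

```python
import torch

a, b = torch.ones(3), torch.zeros(3)
try:
    # list containment: `b == a` yields a 3-element tensor,
    # and bool() on a multi-element tensor is ambiguous.
    _ = a in [b]
    raised = False
except RuntimeError:
    raised = True
```

A possible workaround (an assumption, not verified against this repo) is to pin the torchattacks version the repo was developed against, or to assign the tensor dict with `object.__setattr__(self, "q_tables", ...)` so the base class's attribute scan is bypassed.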

I noticed that someone in an already-closed issue ran into the same problem, but they did not describe a concrete solution, so I have opened a new issue.
The package versions I am using:
python 3.9.16
torch 1.13.1+cu116
numpy 1.23.5
torchattacks 3.4.0

Looking forward to your answer!

Quantization table - initialization and update

Hi @RjDuan
Thank you for the excellent work. I have a question regarding the initialization and update of the Q-table.

(1) In the paper, you mention that the Q-table is initialized with a value of 1 and that its values are gradually increased during optimization (reference: the line after Eq-1 in the paper).

However, in the code the Q-table is initialized with a user-provided value, e.g. 40:

q_ini_table.fill(q_size)


(2) Eq-7 in the paper states that q is updated as follows: $\quad q^{'} = q + \text{sign}(\nabla_{q} \mathcal{L}(x^{'},y))\quad \text{s.t.}\quad ||q^{'}-q_{\text{init}}||_{\infty} < \epsilon$

Could you explain whether Eq-7 corresponds correctly to the relevant part of the code:

AdvDrop/infod_sample.py

Lines 108 to 110 in 35ceeb0

for k in self.q_tables.keys():
    self.q_tables[k] = self.q_tables[k].detach() - torch.sign(self.q_tables[k].grad)
    self.q_tables[k] = torch.clamp(self.q_tables[k], self.factor_range[0], self.factor_range[1]).detach()

I have noted the following perceived inconsistencies:
a) A negative sign is used in the q-table update in the code, while Eq-7 uses a + sign.
b) Eq-7 with the $\ell_{\infty}$ norm suggests that the maximum allowable change of the q-table is $\epsilon$ or $\mathrm{q\_size}$, i.e. $\max(q^{'}-\mathrm{q\_size}) \le \epsilon$; however, the code simply clamps each entry of the updated q-table to the range [5, q_size], i.e. $5 \le q^{'} \le \mathrm{q\_size}$.

Could you help me clarify the above confusions? Thank you!
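For concreteness, an update that literally enforced the Eq-7 constraint would project back onto an $\ell_\infty$ ball around the initial table (a sketch; `epsilon` and the ascent sign are assumptions, this is not the repo's code):

```python
import torch

def update_q(q: torch.Tensor, grad: torch.Tensor,
             q_init: torch.Tensor, epsilon: float) -> torch.Tensor:
    # One sign-gradient step, then projection onto the l-infinity ball
    # of radius epsilon around the initial quantization table.
    q_new = q.detach() + torch.sign(grad)
    return torch.clamp(q_new, q_init - epsilon, q_init + epsilon)
```

In contrast, the repo's clamp to a fixed range [5, q_size] bounds the absolute value of each entry rather than its distance from the initial table.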

a minor bug

An interesting work. A minor bug was found in the implementation of phi_diff, which does not follow Eq. 6 of the original paper:
phi_x = torch.tanh((x - (torch.floor(x) + 0.5)) * k) * s

Open-sourcing the code~

Hello~, I think your paper is great and would like to reproduce it. May I ask when the code will be open-sourced? Or could you send it to me privately? I would like to study it in detail. Looking forward to your reply!
My email: [email protected]

RuntimeError

I downloaded the data set, ran the code, and encountered the following error:
RuntimeError: The size of tensor a (7) must match the size of tensor b (20) at non-singleton dimension 2

About the defense models

Hi, could you please share the code for the defense models used in Table 2: AT, Feature Squeeze, JPEG-30, and PD?

About the code

Your paper is also very interesting, and I would like to look at the concrete implementation. Could you send me the code and the image data used in the experiments? Email: [email protected] Thanks.

adversarial example

Hi, I used the dataset and code you provided to generate adversarial examples. When running the program, the reported success rate of the adversarial examples is 100% (q=100); but when the adversarial examples are fed back into the network for classification, half of them do not change the original (correct) classification results. I don't know if you or anyone else has encountered this problem.
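One hypothesis worth checking (an assumption, not confirmed by the repo) is that saving the adversarial image to an 8-bit file and reloading it quantizes the pixel values, which can undo part of the perturbation:

```python
import torch

adv = torch.rand(1, 3, 224, 224)      # stand-in for an in-memory adversarial image
saved = torch.round(adv * 255) / 255  # what an 8-bit save/reload round-trip does
err = (adv - saved).abs().max()       # at most 0.5/255 per channel
```

If the 100% success rate is measured on the in-memory tensor but the re-classification is done on reloaded files, this quantization (plus any resizing in the loading transform) could explain the gap.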

OSError: image file is truncated

After I ran python infod_sample.py, it worked normally at the beginning. However, the following error occurred when iter reached 48:

Step:  100   Loss:  7.811537265777588   Current Suc rate:  0.2
Step:  110   Loss:  7.403205871582031   Current Suc rate:  0.2
Step:  120   Loss:  7.0217485427856445   Current Suc rate:  0.2
Step:  130   Loss:  6.647172451019287   Current Suc rate:  0.2
Step:  140   Loss:  6.154144763946533   Current Suc rate:  0.25
Current suc. rate:  0.303125
Iter:  48
Traceback (most recent call last):
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/ImageFile.py", line 237, in load
    s = read(self.decodermaxblock)
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/PngImagePlugin.py", line 896, in load_read
    cid, pos, length = self.png.read()
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/PngImagePlugin.py", line 162, in read
    length = i32(s)
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/_binary.py", line 75, in i32be
    return unpack_from(">I", c, o)[0]
struct.error: unpack_from requires a buffer of at least 4 bytes for unpacking 4 bytes at offset 0 (actual buffer size is 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "infod_sample.py", line 191, in <module>
    images, labels = normal_iter.next()
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 151, in __getitem__
    sample = self.loader(path)
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 188, in default_loader
    return pil_loader(path)
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 170, in pil_loader
    return img.convert('RGB')
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/Image.py", line 904, in convert
    self.load()
  File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/ImageFile.py", line 243, in load
    raise OSError("image file is truncated") from e
OSError: image file is truncated
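A quick way to locate the offending file is to pre-scan the dataset and force a full decode of every image (a sketch; PIL also offers `ImageFile.LOAD_TRUNCATED_IMAGES = True` to tolerate slightly damaged files, though re-downloading the broken image is the safer fix):

```python
import os
from PIL import Image

def find_broken_images(root: str) -> list:
    """Return paths of images under `root` that fail to fully decode."""
    bad = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with Image.open(path) as img:
                    img.load()  # force a full decode, not just a header read
            except OSError:
                bad.append(path)
    return bad
```

Running this over the dataset directory before training identifies the truncated file so it can be replaced or removed.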

Can anyone help me?

Thanks a lot.

Questions about the paper

Hi, thanks for your interesting work! I have two questions about the paper.

  1. It seems that you borrow the pipeline of a JPEG compressor, which aims to compress the image without much quality degradation. From your paper, AdvDrop applies the DCT and the other operations in 'RGB' space, while JPEG does image compression in 'YCbCr' space. Am I right? If so, why don't you do the information drop in 'YCbCr' space? What is the difference?
  2. In your code, you use torch.floor() to process the quantized block, which is inconsistent with Figure 5. Do you think torch.floor() should be replaced by torch.round(), or have you tried torch.round() before?
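The behavioral difference between the two quantizers is easy to see on a small example (a sketch; the values are made up):

```python
import torch

x = torch.tensor([7.2, 7.8, -3.4])
q = 2.0
floor_q = torch.floor(x / q) * q  # floor: tensor([ 6.,  6., -4.])
round_q = torch.round(x / q) * q  # round: tensor([ 8.,  8., -4.])
```

floor biases every coefficient downward by up to one quantization step, while round produces an error that is symmetric around zero.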

Feature Squeezing detector

May I ask: have you tried generating a large number of adversarial examples and evaluating them with the detector designed in Feature Squeezing? That is, what is the detection performance? I see that your defense experiments evaluate the attack success rate against defense models.
