rjduan / advdrop
Code for "Adversarial attack by dropping information." (ICCV 2021)
License: MIT License
Hi @RjDuan
Thank you for your work.
While creating components in the following code, the RGB channels of the images are treated as YCbCr channels. If this observation is correct, could you please explain the reason behind renaming the RGB channels to YCbCr channels? Otherwise, shouldn't the image actually be converted from RGB to the YCbCr color space first?
Line 72 in 35ceeb0
Thank you.
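For comparison, an actual RGB-to-YCbCr conversion (ITU-R BT.601 coefficients, as used by JPEG) could be sketched as follows. The function name and tensor layout here are my own assumptions for illustration, not the repository's code:

```python
import torch

def rgb_to_ycbcr(x: torch.Tensor) -> torch.Tensor:
    """Convert an RGB tensor of shape (..., 3, H, W) in [0, 1] to YCbCr (BT.601/JPEG)."""
    r, g, b = x.unbind(dim=-3)
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 0.5
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.stack([y, cb, cr], dim=-3)
```

A neutral gray maps to Y equal to the gray level with Cb = Cr = 0.5, which is an easy sanity check that the coefficients are wired up correctly.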
Hi, thanks for your work. It is very interesting, but I have a question about the mathematical definition of the DCT in your paper: what are the meanings of the parameters u, v, k, m in the formula? Looking forward to your reply, thanks!
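For readers with the same question: in the standard 2-D type-II DCT used by JPEG (which the paper's formula appears to follow), (u, v) index the output frequencies while (k, m) index the spatial pixels of an N×N block. A direct NumPy transcription of that textbook definition (an illustration, not the repository's code):

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """2-D type-II DCT of an N x N block, computed straight from the definition."""
    n = block.shape[0]
    out = np.zeros_like(block, dtype=float)
    for u in range(n):          # vertical frequency index
        for v in range(n):      # horizontal frequency index
            au = np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n)
            av = np.sqrt(1 / n) if v == 0 else np.sqrt(2 / n)
            s = 0.0
            for k in range(n):      # spatial row index
                for m in range(n):  # spatial column index
                    s += (block[k, m]
                          * np.cos((2 * k + 1) * u * np.pi / (2 * n))
                          * np.cos((2 * m + 1) * v * np.pi / (2 * n)))
            out[u, v] = au * av * s
    return out
```

A constant block has all its energy in the DC coefficient out[0, 0], with every other coefficient zero, which is a quick way to verify the indexing.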
Hi, I ran your code and saved the adversarial and original images, but I found that the adversarial image file (e.g. 12.5 KB) is larger than the original (e.g. 12.4 KB). Is that reasonable? I also counted the number of distinct colors in each image, and the adversarial image has more colors than the original. This seems to contradict the article.
Hi, as you said in your paper (Section 3.4, Quantization), "the quantization table q should be integers during the optimization". Do you think the data type of the q-table really affects the results? That is, what would happen if the elements of the q-table were float numbers? Thanks!
@RjDuan Hello! First of all, thank you for your work!
I ran into the following problem while running the provided code (infod_sample.py) according to the instructions in README.md:
D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
Iter: 0
Traceback (most recent call last):
File "D:\CCCCCCCCCCCCCCCCCCC\Python for PyCharm\AdvDrop\AdvDrop-main\infod_sample.py", line 201, in <module>
attack = InfoDrop(resnet_model, batch_size=batch_size, q_size=q_size, steps=150, targeted=True)
File "D:\CCCCCCCCCCCCCCCCCCC\Python for PyCharm\AdvDrop\AdvDrop-main\infod_sample.py", line 55, in __init__
self.q_tables = {"y": torch.from_numpy(q_ini_table),
File "D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchattacks\attack.py", line 496, in __setattr__
for num, value in enumerate(get_all_values(value)):
File "D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchattacks\attack.py", line 488, in get_all_values
yield from get_all_values(item, stack)
File "D:\Program Files\Anaconda3\envs\pytorch\lib\site-packages\torchattacks\attack.py", line 482, in get_all_values
if (items not in stack):
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
I found that someone in an already-closed issue ran into the same problem, but they did not describe a concrete solution, so I am opening a new issue.
Package versions I am using:
python 3.9.16
torch 1.13.1+cu116
numpy 1.23.5
torchattacks 3.4.0
I hope you can help, thank you!
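The "Boolean value of Tensor with more than one value is ambiguous" error above comes from torchattacks' custom `__setattr__`, which scans every assigned value and ends up testing `tensor in list`, i.e. comparing two multi-element tensors. The class below is a minimal stand-in I wrote to reproduce the symptom, not the actual torchattacks internals; bypassing the hook with `object.__setattr__` is one workaround, and downgrading torchattacks to an earlier release may also help:

```python
import torch

class Attack:
    """Minimal stand-in for torchattacks.Attack's value-scanning __setattr__."""
    def __setattr__(self, name, value):
        seen = []
        def scan(v):
            if isinstance(v, dict):
                for item in v.values():
                    scan(item)
            elif v not in seen:   # `tensor in list` compares tensors elementwise,
                seen.append(v)    # so bool() raises for multi-element tensors
        scan(value)
        object.__setattr__(self, name, value)

atk = Attack()
tables = {"y": torch.ones(8, 8) * 40, "cb": torch.ones(8, 8) * 40}
# atk.q_tables = tables                      # reproduces the RuntimeError above
object.__setattr__(atk, "q_tables", tables)  # workaround: bypass the custom hook
```

The bypass stores the dict on the instance without triggering the scan, so the attack object can still use it normally afterward.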
Hi @RjDuan
Thank you for the excellent work. I have a question regarding the initialization and update of the Q-table.
(1) In the paper, you mention that the Q-table is initialized with a value of 1 and that the values are gradually increased during optimization (reference: after Eq. 1 in the paper).
However, in the code the Q-table is initialized with a user-provided value, e.g. 40:
Line 52 in 35ceeb0
(2) Eq. 7 in the paper states that q is updated as follows:
Could you explain whether Eq. 7 corresponds correctly to the relevant part of the code:
Lines 108 to 110 in 35ceeb0
I have noted the following perceived inconsistencies:
a) A negative sign is used in the q-table update in the code, while Eq. 7 uses a + sign.
b) The code clamps the updated q-table to the range [5, q_size], which Eq. 7 does not include.
Could you help me clear up the above confusions? Thank you!
An interesting work. A minor bug was found in the implementation of phi_diff, which does not follow Equation 6 of the original paper:
phi_x = torch.tanh((x - (torch.floor(x) + 0.5)) * k) * s
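For context, the tanh term above approximates the fractional offset from the nearest half-integer so that rounding stays differentiable. A self-contained sketch of this differentiable-rounding idea (the normalization by tanh(k/2) is my own choice to make the output approach true rounding, not necessarily the paper's exact Eq. 6):

```python
import torch

def diff_round(x: torch.Tensor, k: float = 10.0) -> torch.Tensor:
    """Differentiable approximation of round(): a steep tanh of the fractional
    offset around floor(x) + 0.5. Larger k approaches hard rounding while
    keeping nonzero gradients everywhere."""
    frac = x - (torch.floor(x) + 0.5)
    scale = torch.tanh(torch.tensor(0.5 * k))  # normalizes the endpoints to +-0.5
    return torch.floor(x) + 0.5 + 0.5 * torch.tanh(frac * k) / scale
```

For k = 10, diff_round(2.9) is within 0.01 of 3.0 and diff_round(2.1) is within 0.01 of 2.0, while the function remains smooth in between.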
Hello~ I think your paper is great and would like to reproduce it. May I ask when the code will be open-sourced? Or could you send it to me privately? I would like to study it in detail. Looking forward to your reply!
My email: [email protected]
I downloaded the data set, ran the code, and encountered the following error:
RuntimeError: The size of tensor a (7) must match the size of tensor b (20) at non-singleton dimension 2
Hello, could you share the code for the defense models used in Table 2: AT, Feature Squeeze, JPEG-30, and PD?
I also find this paper very interesting and would like to look at the concrete implementation. Could you send the code and the image data used in the experiments? Email: [email protected] Thanks!
I want to know where the epsilon is. Is the epsilon in the code perhaps q_size?
Hi, I used the dataset and code you provided to generate adversarial examples. When running the program, the success rate on the adversarial samples is 100% (q=100), but when the adversarial examples are fed back into the network for classification, half of them do not change the originally correct classification results. I don't know if you or anyone else has encountered this problem.
Thanks a lot.
After I ran python infod_sample.py, it worked normally at first. However, this error occurred when iter reached 48:
Step: 100 Loss: 7.811537265777588 Current Suc rate: 0.2
Step: 110 Loss: 7.403205871582031 Current Suc rate: 0.2
Step: 120 Loss: 7.0217485427856445 Current Suc rate: 0.2
Step: 130 Loss: 6.647172451019287 Current Suc rate: 0.2
Step: 140 Loss: 6.154144763946533 Current Suc rate: 0.25
Current suc. rate: 0.303125
Iter: 48
Traceback (most recent call last):
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/ImageFile.py", line 237, in load
s = read(self.decodermaxblock)
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/PngImagePlugin.py", line 896, in load_read
cid, pos, length = self.png.read()
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/PngImagePlugin.py", line 162, in read
length = i32(s)
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/_binary.py", line 75, in i32be
return unpack_from(">I", c, o)[0]
struct.error: unpack_from requires a buffer of at least 4 bytes for unpacking 4 bytes at offset 0 (actual buffer size is 0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "infod_sample.py", line 191, in <module>
images, labels = normal_iter.next()
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 151, in __getitem__
sample = self.loader(path)
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 188, in default_loader
return pil_loader(path)
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 170, in pil_loader
return img.convert('RGB')
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/Image.py", line 904, in convert
self.load()
File "/home/miniconda3/envs/pytorch17/lib/python3.8/site-packages/PIL/ImageFile.py", line 243, in load
raise OSError("image file is truncated") from e
OSError: image file is truncated
OSError: image file is truncated
Can anyone help me?
Thanks a lot.
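The "image file is truncated" error almost always means one file in the dataset is an incomplete download. Pillow's documented escape hatch is `ImageFile.LOAD_TRUNCATED_IMAGES = True`, but a cleaner fix is to locate the corrupt file and re-download or delete it. A small scanner for that (the function name and walk-the-folder approach are my own, not part of the repository):

```python
import os
from PIL import Image

# Quick escape hatch (lets the run continue past the bad file):
#   from PIL import ImageFile
#   ImageFile.LOAD_TRUNCATED_IMAGES = True

def find_broken_images(root: str) -> list:
    """Return paths of images under `root` that fail a full decode."""
    broken = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with Image.open(path) as img:
                    img.load()  # force a full decode; truncated files raise OSError
            except OSError:
                broken.append(path)
    return broken
```

Running this over the dataset directory pinpoints the offending file so the DataLoader stops crashing mid-epoch.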
Hi, thanks for your interesting work! I have two questions about the paper.
May I ask, have you tried generating a large number of adversarial examples and evaluating them with the detector designed in Feature Squeezing? That is, how is the detection performance? I see that your defense experiments evaluate the attack success rate against models with added defense mechanisms.