Hao Jiang; Jintao Yang; Guang Hua*; Lixia Li; Ying Wang; Shenghui Tu; Song Xia
[*: corresponding author]
This repository contains the implementation of our paper *FAWA: Fast Adversarial Watermark Attack*.
If you find this code or the paper useful, please consider citing:
```bibtex
@ARTICLE{fawa,
  author={Jiang, Hao and Yang, Jintao and Hua, Guang and Li, Lixia and Wang, Ying and Tu, Shenghui and Xia, Song},
  journal={IEEE Transactions on Computers},
  title={FAWA: Fast Adversarial Watermark Attack},
  year={2021},
  volume={},
  number={},
  pages={1-13},
  doi={10.1109/TC.2021.3065172}}
```
- Clone this repository:

  ```shell
  git clone https://github.com/JintaoYang18/FAWA
  cd FAWA/
  ```
- Install the conda environment and requirements:

  ```shell
  conda env create -f fawa_env.yaml
  pip install -r fawa_requirements.txt
  ```
  Note: if you don't have a GPU, install the CPU version of PyTorch. (We have not tested this setting.)
- Prepare your dataset and put it into the `100_image_class_950_999_300resize` directory.
- Modify the 3 .txt files in the root directory according to your own data.
  Note: the role of each .txt file is explained in detail in `main.py`.
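The exact .txt format is documented in `main.py`; as an illustration only, the sketch below assumes each list file holds one image path per line. The helper name `write_image_list` is hypothetical, not part of the repository — check `main.py` before using this.

```python
# Hypothetical helper: write one image path per line into a list file.
# The actual .txt format expected by main.py may differ -- verify against
# the comments in main.py before relying on this.
import os

def write_image_list(image_dir, out_txt, exts=(".png", ".jpg", ".jpeg")):
    """Collect image paths under image_dir and write them, one per line."""
    paths = sorted(
        os.path.join(image_dir, f)
        for f in os.listdir(image_dir)
        if f.lower().endswith(exts)
    )
    with open(out_txt, "w") as fh:
        fh.write("\n".join(paths) + "\n")
    return paths
```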
- Create the `pre_trained_models` directory and download the pre-trained `.pth` file:

  ```shell
  mkdir pre_trained_models
  cd pre_trained_models/
  ```

  Download the VGG-16 pre-trained `.pth` file and put it in the `pre_trained_models` directory.
- Run:

  ```shell
  python main.py
  ```
Note: the time needed to generate the images depends on your machine's performance, so you may have to wait.
The running time can be adjusted by modifying `p_size` and `g_round`. You can also reduce the dimension of the problem, e.g., by removing the rotation term.
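FAWA's actual optimizer lives in `main.py`; the toy sketch below is not that optimizer, but it illustrates why the runtime scales with `p_size` (candidates per generation) times `g_round` (number of generations), so lowering either parameter shortens the run. All names here are illustrative assumptions.

```python
# Toy sketch (NOT FAWA's optimizer): a population-based random search
# whose total number of fitness evaluations is p_size * g_round,
# illustrating how those two knobs control running time.
import random

def toy_search(fitness, dim, p_size=10, g_round=20, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    evals = 0
    for _ in range(g_round):          # g_round generations
        for _ in range(p_size):       # p_size candidates per generation
            cand = [rng.uniform(-1, 1) for _ in range(dim)]
            score = fitness(cand)
            evals += 1
            if score < best_score:
                best, best_score = cand, score
    return best, best_score, evals
```

Halving either `p_size` or `g_round` halves the number of fitness evaluations, which is why the README suggests tuning them to trade attack quality for speed.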
Note: use your own model or a pre-trained model to evaluate the FAWA adversarial examples.
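One common evaluation metric is the fooling rate: the fraction of originally correctly classified images that the attack causes the model to misclassify. The helper below is a hypothetical illustration of that metric, not code from this repository.

```python
# Hypothetical evaluation helper (not from the FAWA repo): given the
# model's predicted labels on clean and adversarial images plus the
# ground-truth labels, compute the attack success (fooling) rate.
def fooling_rate(clean_preds, adv_preds, labels):
    correct_clean = [p == y for p, y in zip(clean_preds, labels)]
    # An attack "succeeds" on an image the model originally classified
    # correctly but misclassifies after the watermark is applied.
    fooled = sum(
        1 for ok, p, y in zip(correct_clean, adv_preds, labels) if ok and p != y
    )
    n_correct = sum(correct_clean)
    return fooled / n_correct if n_correct else 0.0
```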
Note: open-source evaluation code can also be used.