
Home Page: https://openreview.net/forum?id=r9vGSpbbRO

License: BSD 2-Clause "Simplified" License

adversarial-attacks adversarial-robustness full-reference-image-quality-assessment full-reference-iqa image-quality-assesment iqa perceptual-similarity transferable-attacks


Attacking Perceptual Similarity Metrics

Abhijay Ghildyal, Feng Liu. In TMLR, 2023. (Featured Certification: spotlight, i.e., top 0.01% of accepted papers)

[OpenReview] [Arxiv]

In this study, we systematically examine the robustness of both traditional and learned perceptual similarity metrics to imperceptible adversarial perturbations.

Figure (above): $I_1$ is more similar to $I_{ref}$ than $I_{0}$ according
to all perceptual similarity metrics and humans. We attack
$I_1$ by adding an imperceptible adversarial perturbation ($\delta$)
such that the metric ($f$) flips its previously assigned rank, i.e.,
in the above sample, $I_0$ becomes more similar to $I_{ref}$.
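The ranking setup in the figure can be sketched in a few lines. Here a plain MSE (the "l2" baseline, one of the traditional metrics benchmarked in the paper) stands in for the metric $f$, and all tensors are toy data rather than BAPPS samples:

```python
import torch

def f(x, y):
    # stand-in similarity metric (the paper's l2 baseline); lower = more similar
    return ((x - y) ** 2).mean().item()

torch.manual_seed(0)
i_ref = torch.rand(1, 3, 64, 64)             # reference image
i0 = i_ref + 0.10 * torch.randn_like(i_ref)  # heavier distortion
i1 = i_ref + 0.02 * torch.randn_like(i_ref)  # lighter distortion

d0, d1 = f(i_ref, i0), f(i_ref, i1)
assert d1 < d0  # f ranks I_1 as more similar to I_ref, matching the figure's premise
# a successful attack finds an imperceptible delta with f(i_ref, i1 + delta) > d0,
# flipping the rank so that I_0 becomes the more similar image
```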


Figure (above): An example of the PGD attack on LPIPS(Alex)

Requirements

Requires Python 3+ and PyTorch 0.4+. For evaluation, please download the data from the links below.

When starting this project, I used the requirements.txt (link) from the LPIPS repository (link). We are grateful to the authors of various perceptual similarity metrics for making their code and data publicly accessible.

Downloads

The transferable adversarial attack samples generated for our benchmark in Table 5 can be downloaded from this Google Drive folder (link). Please unzip transferableAdvSamples.zip into the datasets/ folder.

Alternatively, you can use the following:

cd datasets
gdown 1gA7lD7FtvssQoMQwaaGS_6E3vPkSf66T # get <id> from google drive (see below)
unzip transferableAdvSamples.zip

In case the gdown id changes, you can obtain it from the 'shareable with anyone' link for the transferableAdvSamples.zip file in the aforementioned Google Drive folder. The id is a substring of the shareable link, as shown here: https://drive.google.com/file/d/<id>/view?usp=share_link.
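Extracting the id from such a link can be automated; a small sketch (the function name and regex are ours, and the link below uses the placeholder pattern from above, not a real file id):

```python
import re

def gdrive_id(share_link: str) -> str:
    """Pull the file id out of a Google Drive 'shareable with anyone' link."""
    m = re.search(r"/file/d/([^/]+)/", share_link)
    if m is None:
        raise ValueError("not a recognized Google Drive file link")
    return m.group(1)

# with a real link, pass the extracted id straight to gdown
print(gdrive_id("https://drive.google.com/file/d/<id>/view?usp=share_link"))  # prints: <id>
```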

Download the LPIPS repo (link) outside this folder. Then, download the BAPPS dataset as described here: link.

Benchmark

Use the following to benchmark various metrics on the transferable adversarial samples created by attacking LPIPS(Alex) on BAPPS dataset samples via stAdv and PGD.

# L2
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric l2 --save l2

# SSIM
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric ssim --save ssim

# ST-LPIPS(Alex)
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric stlpipsAlex --save stlpipsAlex

The results will be stored in the results/transferableAdv_benchmark/ folder.

Finally, use the Jupyter notebook results/study_results_transferableAdv_attack.ipynb to calculate the number of flips.
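The flip count itself is just a disagreement count between the metric's decisions before and after the attack. A minimal sketch with hypothetical decision arrays (the notebook derives the real ones from the files in results/transferableAdv_benchmark/):

```python
import numpy as np

# hypothetical per-pair decisions: 1 if the metric ranks I_1 closer to I_ref
before = np.array([1, 1, 0, 1, 0, 1])  # decisions on clean pairs
after = np.array([0, 1, 0, 0, 0, 1])   # decisions on attacked pairs

flips = int(np.sum(before != after))   # pairs where the assigned rank flipped
flip_rate = flips / len(before)
print(flips, flip_rate)  # 2 flips out of 6 pairs
```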

Creating Transferable Adversarial Samples

The following steps were performed to create the transferable adversarial samples for our benchmark.

  1. Create adversarial samples by attacking LPIPS(Alex) via the spatial attack stAdv.
CUDA_VISIBLE_DEVICES=0 python create_transferable_stAdv_samples.py
  2. Visually inspect the samples and weed out those that do not meet our criteria of imperceptibility.

  3. Using the samples selected in step 2, attack LPIPS(Alex) via $\ell_\infty$-bounded PGD with different maximum iteration counts.

CUDA_VISIBLE_DEVICES=0 python create_transferable_PGD_samples.py
  4. Finally, combine the stAdv and PGD attacks by attacking the samples created via stAdv with PGD.
CUDA_VISIBLE_DEVICES=0 python create_transferable_stAdvPGD_samples.py
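The $\ell_\infty$-bounded PGD stage can be sketched as follows. This is not the repository's implementation: `metric` is a differentiable MSE stand-in for LPIPS(Alex), and the step size, budget, and iteration count are illustrative:

```python
import torch

def metric(x, y):
    # differentiable stand-in for LPIPS(Alex); lower = more similar
    return ((x - y) ** 2).mean()

def pgd_attack(i_ref, i1, eps=8 / 255, alpha=2 / 255, iters=10):
    """Maximize metric(i_ref, i1 + delta) subject to ||delta||_inf <= eps."""
    delta = torch.zeros_like(i1, requires_grad=True)
    for _ in range(iters):
        loss = metric(i_ref, i1 + delta)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend: push I_1 away from I_ref
            delta.clamp_(-eps, eps)             # project back into the l_inf ball
            delta.grad.zero_()
    return delta.detach()

torch.manual_seed(0)
i_ref = torch.rand(1, 3, 64, 64)
i1 = (i_ref + 0.02 * torch.randn_like(i_ref)).clamp(0, 1)
delta = pgd_attack(i_ref, i1)
assert metric(i_ref, i1 + delta) > metric(i_ref, i1)  # distance increased
```

For the reverse direction (Appendix F), the same loop descends instead of ascends, subtracting `alpha * delta.grad.sign()` so the attacked image appears more similar to the reference.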

We hope the above code assists and inspires additional studies that test the robustness of perceptual similarity metrics through more extensive benchmarks, using various datasets and stronger adversarial attacks.

Whitebox PGD attack

To perform the whitebox PGD attack, run the following:

CUDA_VISIBLE_DEVICES=0 python whitebox_attack_pgd.py --metric lpipsAlex --save lpipsAlex --load_size 64

The results are saved in results/whitebox_attack/.

Finally, use the Jupyter notebook results/study_results_whitebox_attack.ipynb to calculate the number of flips and other statistics.

We also provide code to perform the reverse of our attack (see Appendix F): we attack the less similar of the two distorted images to make it more similar to the reference image.

CUDA_VISIBLE_DEVICES=0 python whitebox_toMakeMoreSimilar_attack_pgd.py --metric lpipsAlex --save lpipsAlex --load_size 64

To add: code for the FGSM attack, and a benchmark on the PieAPP dataset.

Citation

If you find this repository useful for your research, please use the following to cite our work:

@article{ghildyal2023attackPercepMetrics,
  title={Attacking Perceptual Similarity Metrics},
  author={Abhijay Ghildyal and Feng Liu},
  journal={Transactions on Machine Learning Research},
  year={2023}
}


