
Implementation of the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".

Home Page: https://www.kdd.in.tum.de/gnn-meta-attack

License: MIT License

Languages: Jupyter Notebook 29.45%, Python 70.55%
Topics: machine-learning, meta-learning, adversarial-attacks, graph-mining, deep-learning, neural-networks, graph-neural-networks, graph-neural-network

gnn-meta-attack's Introduction

Adversarial Attacks on Graph Neural Networks via Meta Learning

Implementation of the paper:
Adversarial Attacks on Graph Neural Networks via Meta Learning

by Daniel Zügner and Stephan Günnemann.
Published at ICLR'19, May 2019, New Orleans, USA

Copyright (C) 2019
Daniel Zügner
Technical University of Munich

Requirements

  • Python 3.6 or newer
  • numpy
  • scipy
  • scikit-learn
  • tensorflow
  • matplotlib (for the demo notebook)
  • seaborn (for the demo notebook)

Installation

python setup.py install
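If python setup.py install fails on a recent Python/setuptools, the equivalent pip invocation from the repository root is (standard pip usage, not something this repository documents):

pip install .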

Run the code

To try our code, you can use the IPython notebook demo.ipynb.
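The notebook walks through the full pipeline. As a conceptual aid only, here is a minimal, self-contained sketch of the meta-gradient idea the attack is built on (my own TF2-style illustration on a toy least-squares model, not the repository's code): train inner parameters for a few unrolled steps, then differentiate an attacker loss through that training with respect to the input.

import tensorflow as tf

tf.random.set_seed(0)
X = tf.Variable(tf.random.normal([8, 3]))   # stand-in for the perturbable input
y = tf.random.normal([8, 1])

def inner_train(X, steps=5, lr=0.1):
    w = tf.zeros([3, 1])
    for _ in range(steps):
        with tf.GradientTape() as inner:
            inner.watch(w)
            loss = tf.reduce_mean((X @ w - y) ** 2)
        g = inner.gradient(loss, w)
        w = w - lr * g          # functional update keeps the chain differentiable
    return w

with tf.GradientTape() as outer:
    w_star = inner_train(X)
    l_atk = -tf.reduce_mean((X @ w_star - y) ** 2)  # attacker maximizes the training loss
meta_grad = outer.gradient(l_atk, X)  # gradient through the unrolled training

In the actual attack the perturbed quantity is the discrete adjacency matrix, so the meta-gradient is used to score candidate edge flips rather than applied as a continuous update.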

Contact

Please contact [email protected] in case you have any questions.

References

Datasets

In the data folder we provide the following datasets, originally published by:

Cora

McCallum, Andrew Kachites, Nigam, Kamal, Rennie, Jason, and Seymore, Kristie.
Automating the construction of internet portals with machine learning.
Information Retrieval, 3(2):127–163, 2000.

and the graph was extracted by

Bojchevski, Aleksandar, and Stephan Günnemann. "Deep gaussian embedding of
attributed graphs: Unsupervised inductive learning via ranking."
ICLR 2018.

Citeseer

Sen, Prithviraj, Namata, Galileo, Bilgic, Mustafa, Getoor, Lise, Galligher, Brian, and Eliassi-Rad, Tina.
Collective classification in network data.
AI magazine, 29(3):93, 2008.

PolBlogs

Lada A. Adamic and Natalie Glance.
The political blogosphere and the 2004 US election: divided they blog.
In Proceedings of the 3rd International Workshop on Link Discovery, 36–43, 2005.

Graph Convolutional Networks

Our implementation of the GCN algorithm is based on the authors' implementation, available on GitHub at https://github.com/tkipf/gcn.

The paper was published as

Thomas N. Kipf and Max Welling.
Semi-supervised classification with graph convolutional networks. ICLR, 2017.

Cite

Please cite our paper if you use the model or this code in your own work:

@inproceedings{zugner_adversarial_2019,
	title = {Adversarial Attacks on Graph Neural Networks via Meta Learning},
	author={Z{\"u}gner, Daniel and G{\"u}nnemann, Stephan},
	booktitle={International Conference on Learning Representations (ICLR)},
	year = {2019}
}


gnn-meta-attack's Issues

About attack types

Hi Daniel,

I read in some other papers that they define Meta-attack as a grey-box attack. What do you think of this?

When using a surrogate model, we don't have any parameters of the target model. In that case, can't it be counted as a black-box attack? Perhaps it is classified as grey-box because the complete data, such as the training labels, is used.

I'm confused about the definitions of grey-box and black-box attacks in the field of graph adversarial learning.

I would appreciate it if you could share your thoughts with me.

The dataset problem

Hi Daniel,

The PolBlogs dataset cannot be loaded successfully. Could you please provide a valid dataset file? Thanks so much! :)
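Until that's resolved, one way to debug is to inspect which arrays the .npz file actually contains. Note that the key names below are assumptions about the storage format, and that PolBlogs has no node attributes, which may be what trips up a loader expecting attribute arrays:

import numpy as np
import scipy.sparse as sp

# Inspect the archive first; the CSR key names below are assumptions.
with np.load('data/polblogs.npz', allow_pickle=True) as loader:
    print(sorted(loader.files))  # see which arrays are actually stored
    adj = sp.csr_matrix(
        (loader['adj_data'], loader['adj_indices'], loader['adj_indptr']),
        shape=loader['adj_shape'])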

Cannot reproduce the performance on citeseer and polblogs

Hi Daniel,

I ran your code on different datasets, but I just cannot reproduce the results in the paper.

The GCN misclassification rate reported in your paper for the clean citeseer dataset is 28.5 ± 0.9; however, in my implementation (I used pygcn), the misclassification rate is about 25%.

I ran your code to generate a 5% perturbed graph and used it as input, then got 73.7% classification accuracy. So on citeseer with 5% perturbations, the performance of the GCN model only drops by 1-2%, not 6% as in the paper.

I don't know what is wrong, and I am wondering whether there are more attacker parameters I need to tune.

I would really appreciate it if you could help me with this.

Attack variants

Reading the paper and going through the demo, I'm not sure I understand the differences between the attack variants ("Meta-Train", "Meta-Self", "A-Meta-Train", "A-Meta-Self", "A-Meta-Both").

In particular, when using "Meta-Train", should labels_self_training be calculated at all? Can I run the demo just by changing the variant definition to variant = "Meta-Train" and leaving everything else as is, or do I need to change anything else?
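For reference, my understanding of how self-training labels are built (a hedged numpy sketch, not the repository's code): the surrogate model's predictions stand in as labels for the unlabeled nodes, while ground-truth labels are kept where known.

import numpy as np

def self_training_labels(logits, labels, idx_train):
    # Use the surrogate model's argmax predictions as pseudo-labels...
    pseudo = logits.argmax(axis=1)
    # ...but keep the ground-truth labels on the labeled (training) nodes.
    pseudo[idx_train] = labels[idx_train]
    return pseudo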

Questions about the code

Hi Daniel,

Once the graph has been perturbed, why do we need a new model, gcn_after_attack, to evaluate the performance of the attack, instead of using gcn_before_attack directly? If both models are used, which one is the target model mentioned in your paper?
Thanks for making your code available!
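For context, a hedged sketch of the poisoning evaluation protocol as I read the paper (the names here are illustrative, not the repository's API): a poisoning attack targets the training procedure itself, so the victim model is retrained from scratch on the perturbed graph rather than reusing the model trained before the attack.

# Illustrative sketch; `train_gcn` is a hypothetical helper that trains a
# GCN from scratch on the given graph and returns its test accuracy.
def poisoning_accuracy_drop(train_gcn, adj_clean, adj_attacked, X, y, splits):
    acc_clean = train_gcn(adj_clean, X, y, splits)        # baseline model
    acc_attacked = train_gcn(adj_attacked, X, y, splits)  # retrained on poisoned graph
    return acc_clean - acc_attacked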

Pick perturbation with lowest score

Hi Daniel,

I have a scenario in which I want to perturb the adjacency matrix, but in such a way that at every perturbation step I choose the edge which has the least impact on L_atk while still increasing it (and thus still bringing GCN's accuracy down). In other words, I want to greedily pick the perturbation e = (u, v) with the lowest score, one at a time, such that it still has a negative impact on the overall accuracy.

In the code, is it enough to use the index of the smallest positive entry of adjacency_meta_grad instead of adj_meta_grad_argmax = tf.argmax(self.adjacency_meta_grad)?

Thanks in advance!
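One detail worth noting: tf.argmin over the raw scores would pick the most negative entry, so non-positive scores have to be masked out first. A sketch, assuming a 1-D float meta-gradient tensor (illustrative, not the repository's code):

import numpy as np
import tensorflow as tf

def smallest_positive_index(grad):
    # Replace non-positive scores with +inf so argmin only considers
    # entries that still increase L_atk.
    inf = tf.fill(tf.shape(grad), tf.constant(np.inf, dtype=grad.dtype))
    return tf.argmin(tf.where(grad > 0, grad, inf))

Even then, a positive meta-gradient score only predicts a local increase of L_atk; whether the overall accuracy actually drops after each greedy step would still need to be checked empirically.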

Confusion about the Loss function L_{atk}

Hello,

I am confused about the loss function L_{atk} in the paper (Zügner et al., ICLR 2019).

In the paper, you define the attack loss as L_atk = -L_train (or L_atk = -L_self for the self-training variant), and the meta-gradient is then the gradient of L_atk with respect to the graph.

But in the algorithm, the gradient equation is missing the negative sign.

I am also confused about the same thing in the code. For the 'Meta-Train' variant, I think the gradient calculation is trying to minimize the classification error of the training examples instead of maximizing it.

What am I missing?
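A possible resolution (my own illustration, not from the paper or the authors): whether the minus sign lives in the definition of L_atk or in the direction of the update is bookkeeping, because gradient ascent on L_train is identical to gradient descent on -L_train:

import numpy as np

def l_train(a):                # stand-in for the training loss
    return (a - 3.0) ** 2

def grad(f, a, eps=1e-6):      # numerical derivative
    return (f(a + eps) - f(a - eps)) / (2 * eps)

a = 0.0
step_ascent = +grad(l_train, a)                  # ascent on L_train
step_descent = -grad(lambda x: -l_train(x), a)   # descent on L_atk = -L_train
assert np.isclose(step_ascent, step_descent)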
