Comments (5)
Hi @xehartnort, thanks for opening this issue.
Could you suggest which learning rate would be more appropriate?
The learning rate of the attack model affects attack accuracy far less than the attack features extracted from the target model do, so you can experiment with different learning rates and see what works for your use case.
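A simple way to follow this advice is a small learning-rate sweep. The sketch below shows the sweep logic only; train_and_eval_attack is a hypothetical stand-in for training ml_privacy_meter's attack model at a given learning rate and returning its attack accuracy, stubbed here with fixed values so the skeleton is runnable.

```python
def train_and_eval_attack(lr):
    # stub: in practice, train the attack model with this learning
    # rate and return the resulting attack accuracy
    return {1e-2: 0.55, 1e-3: 0.68, 1e-4: 0.61}[lr]

def sweep(lrs, eval_fn):
    # evaluate each candidate learning rate and keep the best one
    results = {lr: eval_fn(lr) for lr in lrs}
    best = max(results, key=results.get)
    return best, results

best_lr, results = sweep([1e-2, 1e-3, 1e-4], train_and_eval_attack)
```

The accuracy values above are placeholders; with a real training run the sweep picks whichever rate yields the highest attack accuracy on held-out data.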
Moreover, Appendix A of the paper [1] describes the architecture of the attack model, but that description doesn't match the implementation in this repository.
Could you expand on what doesn't match? Are you talking about the FCN and CNN components generation or something else?
from ml_privacy_meter.
Firstly, I want to thank you for your quick answer :)
Here is where I think the implementation diverges from the paper:
The attack model's components are created in the method create_attack_components(self, layers):
def create_attack_components(self, layers):
    """
    Creates FCN and CNN modules constituting the attack model.
    """
    model = self.target_train_model
    # for layer outputs
    if self.layers_to_exploit and len(self.layers_to_exploit):
        self.create_layer_components(layers)
    # for one hot encoded labels
    if self.exploit_label:
        self.create_label_component(self.output_size)
    # for loss
    if self.exploit_loss:
        self.create_loss_component()
    # for gradients
    if self.gradients_to_exploit and len(self.gradients_to_exploit):
        self.create_gradient_components(model, layers)
    # encoder module
    self.encoder = create_encoder(self.encoderinputs)
The functions for the one-hot-encoded label component, the loss component, and the encoder module almost match the description given in Appendix A. However, all of them are missing the 0.2 dropout; in the loss component the layer sizes are not the ones given in the appendix; and in the encoder module the activation function of the last fully connected layer is sigmoid, whereas the appendix specifies relu.
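The sigmoid-vs-relu discrepancy is not cosmetic: it changes the range of the encoder's output, since sigmoid squashes values into (0, 1) while relu leaves positive activations unbounded. A minimal illustration in plain Python (not the repository's code):

```python
import math

def relu(x):
    # unbounded above: relu(x) = max(0, x)
    return max(0.0, x)

def sigmoid(x):
    # squashed into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# the same pre-activation value yields very different output scales
z = 5.0
print(relu(z))     # 5.0
print(sigmoid(z))  # ~0.993
```

Whichever activation is used, the layers consuming the encoder output see inputs on a different scale, which is why it is worth knowing which variant produced the paper's numbers.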
When it comes to the gradient components, there are two functions: cnn_for_cnn_gradients(input_shape) and cnn_for_fcn_gradients(input_shape). The former matches what is written in the appendix, but the latter is quite different; so different that no description in the appendix matches it. Something similar happens with the layer-output components.
I can see that those differences may not be significant, but I want to understand why those changes were introduced in the implementation and not referenced in the paper.
Just one bug I think I have found: some layers are given fixed names, which can produce a conflict when such a layer is used twice in the attack. This happens in the file create_cnn.py, in the function cnn_for_cnn_gradients.
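A common way to avoid this kind of name collision in Keras-style code is to either omit the name argument (letting the framework auto-generate a unique one) or append a counter to the base name. A sketch of the counter approach; unique_layer_name is a hypothetical helper, not part of ml_privacy_meter:

```python
import itertools

# process-wide counter shared by all generated names
_layer_name_counter = itertools.count()

def unique_layer_name(base):
    # hypothetical helper: suffix a counter so that calling
    # cnn_for_cnn_gradients twice never reuses a layer name,
    # e.g. Conv2D(..., name=unique_layer_name("grad_conv"))
    return f"{base}_{next(_layer_name_counter)}"

a = unique_layer_name("grad_conv")
b = unique_layer_name("grad_conv")
```

Dropping the name argument entirely is the simplest fix, at the cost of less readable layer names in model summaries.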
Thanks for clarifying @xehartnort.
I can see that those differences may not be significant, but I want to understand why those changes were introduced in the implementation and not referenced in the paper.
I don't think there is any particular reason for these differences. You can use either of the implementations to carry out the attacks.
Just one bug I think I have found: some layers are given fixed names, which can produce a conflict when such a layer is used twice in the attack. This happens in the file create_cnn.py, in the function cnn_for_cnn_gradients.
I will take a look at this, thanks!
Hi,
thank you for the clarification. However, I want to reproduce the results obtained in [1], so I need to know which version was used in that publication. Moreover, some details of the implementation are missing from reference [1]. Maybe @rzshokri can help us here.
[1] Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning (https://arxiv.org/abs/1812.00910)
Why did you close this issue? The problem is by no means solved.