Comments (5)

amad-person commented on June 5, 2024

Hi @xehartnort, thanks for opening this issue.

> Could you suggest which learning rate would be more appropriate?

The learning rate for the attack model does not affect the attack accuracy as much as the attack features from the target model do, so you can try to experiment with different learning rates and see what works for your use case.
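
For concreteness, here is a minimal sketch of such an experiment. This is plain Keras code, not the ml_privacy_meter API; the toy model and the random features/labels are placeholders for your own attack inputs.

    import numpy as np
    import tensorflow as tf

    def build_attack_model(n_features=64):
        # Placeholder binary classifier standing in for the attack model.
        inp = tf.keras.Input(shape=(n_features,))
        h = tf.keras.layers.Dense(128, activation="relu")(inp)
        out = tf.keras.layers.Dense(1, activation="sigmoid")(h)  # member vs. non-member
        return tf.keras.Model(inp, out)

    # Placeholder attack features and membership labels (random data, illustration only).
    x = np.random.rand(256, 64).astype("float32")
    y = np.random.randint(0, 2, size=(256, 1)).astype("float32")

    for lr in (1e-2, 1e-3, 1e-4):
        model = build_attack_model()
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="binary_crossentropy", metrics=["accuracy"])
        hist = model.fit(x, y, epochs=5, batch_size=32, verbose=0)
        print(f"lr={lr:g} -> final training accuracy {hist.history['accuracy'][-1]:.3f}")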

> Moreover, Appendix A of the paper [1] describes the architecture of the attack model, but that description doesn't match the implementation in this repository.

Could you expand on what doesn't match? Are you talking about the generation of the FCN and CNN components, or something else?

xehartnort commented on June 5, 2024

Firstly, I want to thank you for your quick answer :)

This is my guess on why the implementation doesn't match the paper:

The attack model components are created in the function create_attack_components(self, layers):

    def create_attack_components(self, layers):
        """
        Creates FCN and CNN modules constituting the attack model.  
        """
        model = self.target_train_model

        # for layer outputs
        if self.layers_to_exploit and len(self.layers_to_exploit):
            self.create_layer_components(layers)

        # for one hot encoded labels
        if self.exploit_label:
            self.create_label_component(self.output_size)

        # for loss
        if self.exploit_loss:
            self.create_loss_component()

        # for gradients
        if self.gradients_to_exploit and len(self.gradients_to_exploit):
            self.create_gradient_components(model, layers)

        # encoder module
        self.encoder = create_encoder(self.encoderinputs)

The functions for the one-hot-encoded label component, the loss component, and the encoder module almost match the description given in Appendix A. However, all of them are missing the 0.2 dropout; in the loss component, the layer sizes are not the ones given in the appendix; and in the encoder module, the activation function of the last fully connected layer is sigmoid, whereas the appendix specifies ReLU.
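
To make the dropout difference concrete, here is an illustrative sketch of a fully connected component built in both styles. The layer sizes are placeholders, not the exact numbers from the paper or from the repository; only the presence or absence of the 0.2 dropout is the point.

    import tensorflow as tf

    def fc_component(input_dim, hidden=(128, 64), dropout=0.2):
        # dropout=0.2 follows the Appendix A description; dropout=0.0 mimics the
        # repository-style component without dropout. Layer sizes are placeholders.
        inp = tf.keras.Input(shape=(input_dim,))
        x = inp
        for units in hidden:
            x = tf.keras.layers.Dense(units, activation="relu")(x)
            if dropout:
                x = tf.keras.layers.Dropout(dropout)(x)
        return tf.keras.Model(inp, x)

    appendix_style = fc_component(100, dropout=0.2)  # with the 0.2 dropout
    repo_style = fc_component(100, dropout=0.0)      # no dropout, as in the code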

For the gradient component there are two functions: cnn_for_cnn_gradients(input_shape) and cnn_for_fcn_gradients(input_shape). The former fits what is written in the appendix, but the latter is so different that no description in the appendix matches it. Something similar happens with the layer-output components.
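
As a rough illustration of what I mean by a gradient component (this is not the repository's cnn_for_fcn_gradients; the filter count, kernel size, and dense width are arbitrary placeholders), one could build a CNN module over the gradient matrix of a fully connected layer like this:

    import tensorflow as tf

    def gradient_cnn_component(grad_shape=(128, 64), filters=32, kernel=(3, 3)):
        # The (fan_in x fan_out) gradient matrix of a dense layer is treated as a
        # one-channel image; filters, kernel, and dense width are placeholders.
        inp = tf.keras.Input(shape=(*grad_shape, 1))
        x = tf.keras.layers.Conv2D(filters, kernel, activation="relu")(inp)
        x = tf.keras.layers.Flatten()(x)
        x = tf.keras.layers.Dropout(0.2)(x)
        x = tf.keras.layers.Dense(64, activation="relu")(x)
        return tf.keras.Model(inp, x)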

I can see that those differences may not be significant, but I want to understand why these changes were introduced in the implementation without being mentioned in the paper.

One bug I think I have found: some layers are given fixed names, which may produce a conflict when such a layer is used twice in the attack. It happens in the file create_cnn.py, in the function cnn_for_cnn_gradients.
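
A small reproduction of that kind of conflict, using a hypothetical layer name rather than the exact one from create_cnn.py:

    import tensorflow as tf

    inp = tf.keras.Input(shape=(8, 8, 1))
    x = tf.keras.layers.Conv2D(4, 3, name="grad_conv")(inp)  # fixed, hard-coded name
    x = tf.keras.layers.Conv2D(4, 3, name="grad_conv")(x)    # same name reused -> conflict
    try:
        tf.keras.Model(inp, x)
    except ValueError as err:
        print("Keras rejects the duplicate layer name:", err)
    # Dropping the explicit name= (or making it unique per call) avoids the clash.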

amad-person commented on June 5, 2024

Thanks for clarifying @xehartnort.

> I can see that those differences may not be significant, but I want to understand why these changes were introduced in the implementation without being mentioned in the paper.

I don't think there is any particular reason for these differences. You can use either of the implementations to carry out the attacks.

> One bug I think I have found: some layers are given fixed names, which may produce a conflict when such a layer is used twice in the attack. It happens in the file create_cnn.py, in the function cnn_for_cnn_gradients.

I will take a look at this, thanks!

xehartnort commented on June 5, 2024

Hi,

Thank you for the clarification. However, I want to reproduce the results reported in [1], so I need to know which version was used in that publication. Moreover, some implementation details are missing from [1]. Maybe @rzshokri can help us here.

[1] Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning (https://arxiv.org/abs/1812.00910)

xehartnort commented on June 5, 2024

Hi @rzshokri, @amad-person

Why did you close this issue? The problem is not solved by any means.
