
breaching's People

Contributors

jonasgeiping, lhfowl, mvnowak, yuxinwenrick


breaching's Issues

Robbing The Fed produces 0 hits on custom model

My model is a multi-output model in which I combine segmentation and classification losses into a final loss for backpropagation. I have been working to fit my model to the Robbing the Fed attack. All my parameters seem the same as in the given example, except that I perform segmentation+classification instead of classification only, and my images are 1x512x512 (single-channel) instead of 3x224x224.

What could be the reason that I am not getting any hits?

edit: I was comparing my model with the one used in the example file and noticed that my model buffer values are very large, some going up to 10^9. What could be the reason for this?
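
A minimal diagnostic sketch for this, assuming model is the torch.nn.Module in question; very large BatchNorm running statistics often point at unnormalized inputs or an exploding forward pass:

    # Print the largest-magnitude entry of each model buffer.
    for name, buf in model.named_buffers():
        if buf.is_floating_point() and buf.numel() > 0:
            print(f"{name}: max |value| = {buf.abs().max().item():.3e}")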

Wrong hardcoded md5 checksum

The code here uses a wrong hardcoded md5; e.g., for tinyimagenet, the md5 of the val dataset is c5d7f7e71e4c0fc882b9ca5ce70ffed2. This results in the dataset being re-extracted every time it is loaded.
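
To double-check which checksum is correct, one can hash the downloaded archive locally; a minimal sketch (the archive file name is an assumption):

    import hashlib

    def md5_of(path, chunk_size=1 << 20):
        # Stream the file so large archives don't have to fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    print(md5_of("tiny-imagenet-200.zip"))  # hypothetical archive name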

Wrong index of buffers in analysis.py

When user.provide_buffers=True is set, the user stores the model buffers in true_user_data["buffers"] as a list of tensors.
However, analysis.py iterates through true_user_data["buffers"][idx], a single tensor. The [idx] is not needed and should be removed.

Current:

for buffer, user_state in zip(model.buffers(), true_user_data["buffers"][idx]):

Fixed:

for buffer, user_state in zip(model.buffers(), true_user_data["buffers"]):
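
For context, a sketch of the corrected loading loop; the .copy_ body is an assumption about what analysis.py does with each buffer:

    # true_user_data["buffers"] is already a flat list of tensors, so it can be
    # zipped directly against model.buffers() without the extra [idx]:
    for buffer, user_state in zip(model.buffers(), true_user_data["buffers"]):
        buffer.copy_(user_state.to(buffer.device))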

Questions about federated learning in Inverting Gradients

Dear Geiping,

Thank you for providing the implementation of the paper "Inverting Gradients - How easy is it to break privacy in federated learning?". After reading the paper and code, I have some questions regarding the implementation:

  1. The paper mentions that if a neural network contains a fully-connected layer, it is possible to reconstruct the network's input from its gradients. I am confused by this description: does it mean that, for any neural network whose last layer is fully connected, the input can be computed from the gradients of the entire model, rather than only from the gradient of the last fully-connected layer?

  2. In the experiment on trained vs. untrained networks on page 6 of the paper, may I ask whether the untrained model is a pretrained model or a model with randomly initialized weights? For example, in the experiment in Table 1, do the untrained models have randomly initialized weights, while the trained models were trained on the CIFAR-10 dataset?

  3. I tried to apply the federated learning part of Inverting Gradients and made the following modifications:

    1. change user to multiuser_aggregate in breaching/config/case/4_fedavg_small_scale.yaml
    2. set default_clients to 10 in breaching/case/data/CIFAR10.yaml

    However, I got the following error message after generating the dataset for training:
    "This user would have no data under the chosen partition, user id and number of clients." This may be caused by the dataset being split across more than 10 clients. Apart from the settings above, which files should I modify to run multiple clients in federated learning combined with Inverting Gradients?


Best Regards,
Rahn

Unexpected change to server model in benchmark

Hi

The breaching.analysis.report() function directly uses server.model to load the parameters and buffers. However, this can wrongly modify server.model and influence the next run in a benchmark. The problem can be resolved by adding model = copy.deepcopy(model) before model.to(**setup):

    # Copy the model so that loading the user's parameters and buffers for the
    # report does not mutate server.model across benchmark runs:
    model = copy.deepcopy(model)
    model.to(**setup)

Case construction takes too long on a 4090

When I run the following code, it takes a very long time (more than 2 hours, still with no output):

user, server, model, loss_fn = breaching.cases.construct_case(cfg.case, setup)

I am using an RTX 4090 to execute a TAG attack via the Jupyter notebook you provided.

Is this situation normal and common?

Examples for text + classification task

All text examples cover the MLM task. What should I do to, for example, run an attack against BERT on a classification task using the CoLA dataset?

I tried to change the Decepticons - Analytic Attack - BERT on Wikitext.ipynb example by adding

cfg.case.data.name = 'cola'
cfg.case.data.task = 'classification'

But when I run it, I get the following error while the dataset is being loaded:

raw_dataset = Dataset.from_dict({k: [v] for k, v in raw_datapoint.items()})
AttributeError: 'Dataset' object has no attribute 'items'. Did you mean: 'iter'?

Is there something else I need to set?
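
A possible workaround sketch, assuming the failure is that raw_datapoint arrives as a datasets.Dataset instead of a single-row dict (raw_dataset and user_idx are assumed names around the failing line):

    from datasets import Dataset

    # Integer indexing on a datasets.Dataset returns a plain dict for one row,
    # which the failing comprehension expects:
    raw_datapoint = raw_dataset[int(user_idx)]
    raw_dataset = Dataset.from_dict({k: [v] for k, v in raw_datapoint.items()})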

Adding the GroupRegistration regularization term for the "See through gradients" attack

Problem and context

While working on extending gradient inversion attacks, I came across this wonderful library. In an attempt to reproduce the Yin et al. paper, I learned about the missing regularization term (as per the title) from the final notes of breaching/examples/See through gradients [...].ipynb. I would like to try to reproduce the results of Yin et al. in order to provide baselines for comparison against other regularization metrics. The main obstacle to implementing this term seems to be its cluttered description in Section 3.4 of the above-mentioned paper.

Steps towards a solution

Regardless of the actual value of \alpha_{group} (not disclosed by the authors, as far as I know), I believe a possible implementation of the GroupRegistration regularization term can be achieved in the following few steps (a simplified sketch follows below):

  1. Create a dummy image x_g for each g in G.
  2. Compute the per-pixel average over |G| and call it the target image x_t.
  3. Compute the registration F(x_g, x_t), i.e., the transformation that matches certain features of x_g with x_t, for every g in G. The feature-matching/transformation function F is based on RANSAC-flow.
  4. Average all the F(x_g, x_t) over g in G and call the result E[x_g].
  5. Compute the 2-norm of the difference between x_g and E[x_g].

To my understanding, this is the meaning of Section 3.4 and of the plot in Figure 3 of the above-mentioned paper.
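
A minimal PyTorch sketch of these five steps, assuming the registration function is pluggable (the identity stands in for RANSAC-flow here, and the weighting by \alpha_{group} is left to the caller):

    import torch

    def group_registration_reg(candidates, registration_fn=None):
        # candidates: list of dummy-image tensors x_g, one per seed g in G.
        # registration_fn: F(x_g, x_t); the paper uses RANSAC-flow, the
        # identity below is only a placeholder.
        if registration_fn is None:
            registration_fn = lambda x_g, x_t: x_g
        x_t = torch.stack(candidates).mean(dim=0)             # step 2: target image
        registered = [registration_fn(x_g, x_t) for x_g in candidates]  # step 3
        expected = torch.stack(registered).mean(dim=0)        # step 4: E[x_g]
        # step 5: 2-norm of each difference, summed over the group
        return sum(torch.norm(x_g - expected) for x_g in candidates)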

Additional comments

My research would benefit from having this component implemented, and I believe it could have broader impact by giving other researchers the possibility to reproduce one of the SOTA results in gradient inversion attacks. For this reason I would like to take on this issue. Disclaimer: this would be my first contribution to a public research repository.

Questions about dataset configuration and attack initialization

In my use case, all aspects of my model are custom, and I used a medical image dataset to train it. I am using the minimal_example.py file as a template for designing the attack, but I have a few trivial questions:

  1. In the mean and std sections of data_cfg_default, what are the three numbers supposed to mean? (See the note after this list.)
  2. For a custom model, what other parameters am I supposed to change (other than the model definition, dataset, and loss function)?
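
Regarding question 1: mean and std are per-channel normalization statistics, so three numbers correspond to three image channels (RGB); a single-channel dataset would use one value each. A hypothetical adjustment, with placeholder values:

    # For 1x512x512 single-channel images, one mean/std entry per channel.
    # The numbers below are placeholders, not recommended values.
    data_cfg_default.mean = [0.5]
    data_cfg_default.std = [0.25]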

Getting positions

Hi,

Thanks for providing the implementation of "Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models".

I tried to find the code where the positional information for the extracted tokens is recovered, but with no luck. Could you please point out where it is? Thanks a lot.
