mpcformer's People

Contributors: dachengli1, rulinshao

mpcformer's Issues

Question about results

Hello, thanks for the great work.
I have a question about the results in Tables 2 and 4.
Are the accuracy results in these tables obtained by running MPC inference on encrypted data, or on the original plaintext test data?
Did you test how the MPC process affects accuracy?
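For reference, a minimal sketch of the distinction being asked about: evaluating the same model once in plaintext PyTorch and once through CrypTen. This is not the authors' evaluation script; the model, data, and single-process setup are toy placeholders (a real 2-party run would use CrypTen's multi-party setup), and fixed-point MPC arithmetic can shift the numbers slightly.

```python
import torch
import crypten

crypten.init()

# Toy stand-in for an MPC-friendly classifier (e.g. a distilled MPCFormer head).
model = torch.nn.Sequential(
    torch.nn.Linear(16, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)
model.eval()

x = torch.randn(8, 16)          # pretend test inputs
y = torch.randint(0, 2, (8,))   # pretend labels

# Accuracy on plaintext outputs.
with torch.no_grad():
    acc_plain = (model(x).argmax(dim=-1) == y).float().mean().item()

# Accuracy on outputs computed under CrypTen: convert the module, encrypt it
# and the inputs, run the forward pass, then decrypt only the outputs.
enc_model = crypten.nn.from_pytorch(model, dummy_input=torch.empty(1, 16))
enc_model.encrypt()
enc_out = enc_model(crypten.cryptensor(x))
acc_mpc = (enc_out.get_plain_text().argmax(dim=-1) == y).float().mean().item()

print(f"plaintext acc={acc_plain:.3f}  mpc acc={acc_mpc:.3f}")
```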

Question about "{d}" and "{p,d}"

As written in your article: “p” stands for using weights in T as initialization, and “d” stands for applying knowledge distillation with T as the teacher.
My question is: does “using weights in T as initialization” mean a fine-tuned model, i.e. does “p” stand for “fine-tuning”? Namely: 1. MPCBert-B stands for the most basic pre-trained transformer; 2. MPCBert-B w/o {d} stands for applying KD to the most basic pre-trained transformer; 3. MPCBert-B w/o {p,d} stands for applying KD to a fine-tuned transformer?
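For context, a rough sketch of what the "{d}" component (knowledge distillation from the fine-tuned teacher T) typically looks like. The exact staging and losses used in MPCFormer (e.g. layer-wise vs. prediction-layer distillation) should be taken from the paper and repo; the loss below is only illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden, temperature=1.0):
    # Hidden-state matching: MSE between corresponding student/teacher layers.
    hidden_loss = sum(F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden))
    # Prediction-layer matching: soft labels from the teacher.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return hidden_loss + soft_loss
```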

Failed to reproduce the results in the paper

Hey @DachengLi1, I tried to reproduce the experimental results reported in the paper and found some mismatches.

  1. MPCFormer with Bert-base on the RTE task only achieves an accuracy of 0.59, whereas the paper reports 0.64. In comparison, Bert-base on STSB achieves 0.797, which is close to the 0.803 in the paper. I used the default hyperparameters for RTE; do you have any ideas about this mismatch?
  2. MPCFormer with Bert-large seems to diverge significantly from the paper. On the RTE task, the accuracy drops to 0.47. I used the quad+2quad approximation (see the sketch of these approximations below). I wonder if this is the root cause, since the paper reports using quad+2relu for Bert-large. Have you studied this point?
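For reference, a hedged sketch of the approximations being compared ("quad" for GeLU, "2quad"/"2relu" for softmax). The functional forms follow the paper's descriptions, but the constant in 2Quad and any masking details are assumptions that should be checked against the repo.

```python
import torch

def gelu_quad(x):
    # "Quad": degree-2 polynomial approximation of GeLU.
    return 0.125 * x ** 2 + 0.25 * x + 0.5

def softmax_2relu(x, dim=-1, eps=1e-6):
    # "2ReLU": ReLU(x) normalized by its sum along `dim`.
    r = torch.relu(x)
    return r / (r.sum(dim=dim, keepdim=True) + eps)

def softmax_2quad(x, dim=-1, c=5.0, eps=1e-6):
    # "2Quad": shifted square normalized by its sum; c=5.0 is an assumed constant.
    q = (x + c) ** 2
    return q / (q.sum(dim=dim, keepdim=True) + eps)
```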

Has secure loading and inference of transformer models been implemented?

After reading and studying your code, it seems that the secure inference stage is not yet implemented?
After finishing the MPCFormer stage, I only get a well-performing model with the softmax and activation replaced, but that model can't be directly applied to your code for further research. Has secure inference been implemented?
I am looking forward to your reply!

The impacts of BatchNorm instead of LayerNorm

In the CrypTen-based MPCFormer, such as MPC-Bert, I noticed that LayerNorm is replaced by BatchNorm (because CrypTen does not currently support LayerNorm). Will this modification affect Bert's performance (e.g., accuracy)? Since the source code does not include the model-loading scripts, it is difficult for me to check this experimentally. Thanks for your help.
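A small illustration (not from the repo) of why this substitution is not a no-op: LayerNorm normalizes each token over the hidden dimension, while BatchNorm1d normalizes each feature over the batch (and sequence) dimension, so the two produce different outputs and rely on different statistics at inference time.

```python
import torch

batch, seq, hidden = 2, 4, 8
x = torch.randn(batch, seq, hidden)

ln = torch.nn.LayerNorm(hidden)
bn = torch.nn.BatchNorm1d(hidden)

y_ln = ln(x)  # normalize over `hidden`, separately for each token
y_bn = bn(x.reshape(-1, hidden)).reshape(batch, seq, hidden)  # normalize over batch*seq, per feature

print((y_ln - y_bn).abs().max())  # generally far from zero
```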

Clarification on Speed Enhancement through Distilled Models in CrypTen and Accuracy Testing Procedure

Following the instructions in your README file, I noticed that the code snippets provided for testing with CrypTen seem to only initialize weights randomly without leveraging any pre-trained models. Could you please clarify how the distilled models are supposed to be applied in this context?
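For what it's worth, a hedged sketch of one way distilled weights could be plugged in before encryption, rather than keeping random initialization. `build_mpc_friendly_bert` and the checkpoint path are placeholders, and whether `crypten.nn.from_pytorch` can convert the full model depends on ONNX export support; the repo may construct its CrypTen graph differently.

```python
import torch
import crypten

crypten.init()

# Hypothetical helper: build the repo's MPC-friendly (approximated) Bert in PyTorch.
model = build_mpc_friendly_bert()

# Load the distilled checkpoint instead of keeping random initialization.
state = torch.load("path/to/distilled_checkpoint.pt", map_location="cpu")  # placeholder path
model.load_state_dict(state, strict=False)
model.eval()

# Convert to a CrypTen module and encrypt; the dummy input shape is an example only.
dummy = torch.zeros(1, 128, dtype=torch.long)
enc_model = crypten.nn.from_pytorch(model, dummy_input=dummy)
enc_model.encrypt()
```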

Additionally, I am puzzled by the accuracy testing mentioned in your paper. Specifically, are the reported accuracy figures obtained through secure computation in CrypTen between two parties? If so, the time expenditure seems astonishingly high.

I believe understanding these aspects would greatly enhance my comprehension of the practical applications of your work and its implications for secure and efficient model inference.

Thank you very much for addressing my inquiries.

Roberta experiment failed

Hey @DachengLi1, I tried to run the distillation experiments for the Roberta-base model. However, I noticed that you have commented out the TinyRobretaxxx model. When I try to run it, several errors show up, e.g. `is_torch_greater_than_1_6` is missing, and `self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx)` raises: `embedding(): argument 'indices' (position 2) must be Tensor, not NoneType`.

Do you have any ideas?
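One possible workaround for the NoneType error, mirroring how HuggingFace's Roberta embeddings derive position ids when none are passed (offset by the padding index so padding tokens keep `padding_idx`). Whether this matches the commented-out TinyRoberta code would need to be checked against the repo.

```python
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx):
    # Positions start at padding_idx + 1; padding tokens keep padding_idx.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = torch.cumsum(mask, dim=1) * mask
    return incremental_indices.long() + padding_idx

# Then, inside the embedding forward, something like:
# if position_ids is None:
#     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx)
```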
