zhengxiangshi / dept

83 stars · 2 watchers · 13 forks · 3.8 MB

[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

Home Page: http://arxiv.org/abs/2309.05173

License: MIT License

Python 100.00%
fine-tuning language-model large-language-models natural-language-processing parameter-efficient-tuning peft prompt-tuning parameter-efficient-fine-tuning nlp nlp-machine-learning

dept's People

Contributors

eltociear, zhengxiangshi


dept's Issues

Some tensors share memory, this will lead to duplicate memory

This is the part of my log file where the error happens. I suspect the error is caused by an incompatible library version. Could you tell me which version of transformers you used? It would also be helpful if you released a requirements.txt file.

2%|▎ | 1000/40000 [15:49<10:18:48, 1.05it/s]
{'loss': 0.5651, 'learning_rate': 0.0005, 'epoch': 4.35}
{'loss': 0.2853, 'learning_rate': 0.0004936708860759494, 'epoch': 8.7}
100%|██████████| 7/7 [00:04<00:00, 1.64it/s]
{'eval_loss': 0.27034154534339905, 'eval_f1': 81.3953488372093, 'eval_accuracy': 68.62745098039215, 'eval_runtime': 5.1909, 'eval_samples_per_second': 39.299, 'eval_steps_per_second': 1.349, 'epoch': 8.7}

Traceback (most recent call last):
File "/home/trunghn/DecomposedPromptTuning/train.py", line 704, in
main()
File "/home/trunghn/DecomposedPromptTuning/train.py", line 659, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/transformers/trainer.py", line 1914, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/transformers/trainer.py", line 2279, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/transformers/trainer.py", line 2355, in _save_checkpoint
self.save_model(staging_output_dir, _internal_call=True)
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/transformers/trainer.py", line 2849, in save_model
self._save(output_dir)
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/transformers/trainer.py", line 2905, in _save
safetensors.torch.save_file(state_dict, os.path.join(output_dir, SAFE_WEIGHTS_NAME))
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/safetensors/torch.py", line 281, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "/home/trunghn/miniconda3/envs/ie/lib/python3.10/site-packages/safetensors/torch.py", line 467, in _flatten
raise RuntimeError(
RuntimeError:
Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'base_model.decoder.embed_tokens.weight', 'base_model.lm_head.weight', 'base_model.encoder.embed_tokens.weight', 'base_model.shared.weight', 'word_embeddings.weight'}].
A potential way to correctly save your model is to use save_model.
More information at https://huggingface.co/docs/safetensors/torch_shared_tensors
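
One possible workaround, under the assumption that the error comes from recent transformers releases saving checkpoints in safetensors format by default (safetensors refuses tied/shared tensors such as T5's embedding and lm_head weights): disable safetensors serialization so the Trainer falls back to torch.save, which tolerates tied weights.

# A minimal sketch, not the repository's configuration; "outputs" is a placeholder path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",
    save_safetensors=False,  # fall back to torch.save, which tolerates shared/tied tensors
)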

Running Time

Hello! I am a newbie in NLP. Thank you very much for the inspiration from your outstanding work! I got a warning that a lower library version might lead to slower running speed. I would like to know how long it takes to run one dataset, so that I can better choose and install software and hardware versions. Looking forward to your response!

LLaMA 2 finetuning

Thanks for the good work and the clear, neat implementation. I have some questions regarding the LLaMA 2 implementation:

  • Q.1 I can see in the train.py file that you support the LLaMA 2 model; however, the trainer in line 639 of train.py is an instance of the PEFTSeq2SeqTrainer() class, and later, in line 659, you call its .train() method. When I checked the implementation of PEFTSeq2SeqTrainer(), it does not seem to define a train method. Does this mean you do not support fine-tuning of LLaMA (a decoder-based model), or am I missing something?

  • Q.2 Also, in the same class at line 75, generation_inputs = inputs[self.model.main_input_name] is called. So my question is: does LLaMA 2 have a main_input_name? (See the note below.)

  • Q.3 Is the code runnable for LLaMA 2 on all the provided datasets? As far as I can see, the paper only reports LLaMA 2 results on the SST-2 dataset.

Thanks, and looking forward to your response.
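
A side note on Q.1 and Q.2 (my own check, not the authors' answer): main_input_name is a class attribute defined on transformers' PreTrainedModel and defaults to "input_ids", so decoder-only models such as LLaMA expose it too; and if PEFTSeq2SeqTrainer subclasses Seq2SeqTrainer, it inherits .train() from the base Trainer.

# A minimal check, assuming a transformers release that ships the LLaMA classes.
from transformers import LlamaForCausalLM, T5ForConditionalGeneration

print(LlamaForCausalLM.main_input_name)            # "input_ids" (PreTrainedModel default)
print(T5ForConditionalGeneration.main_input_name)  # "input_ids"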

About the max_length of SuperGLUE-MultiRC dataset

Hello, thank you for the nice work! I would like to know why the max length for the SuperGLUE-MultiRC dataset is 348 rather than 256. Are there any findings behind this setting? Also, will the number of trainable parameters increase when using max length = 348, while the number of appended prompt tokens remains 60 and the LoRA rank r remains 30? Thank you very much!
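
For reference, a rough back-of-the-envelope count, under the assumption that DePT's low-rank update spans the input sequence dimension s (the embedding update of shape s × d is factored as B A, with B of shape s × r and A of shape r × d), in which case the trainable parameter count does grow slightly with a larger max length:

# A sketch with illustrative numbers, assuming hidden size d = 768 (T5-base).
d, m, r = 768, 60, 30   # hidden size, soft-prompt length, LoRA rank

def dept_trainable_params(s):
    # soft prompt (m x d) + low-rank pair B (s x r) and A (r x d)
    return m * d + r * (s + d)

print(dept_trainable_params(256))  # 76800
print(dept_trainable_params(348))  # 79560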

About the hyperparameters

Could you please tell me which hyperparameters are correct, those in the paper or those provided in the README, e.g. "weight decay", "batch size", and "warm-up steps"?

General question about padding in the setting of soft-prompt tuning

Hello Authors,

I have a general question about padding in the soft-prompt tuning setting.

In a batch, when sequences have different lengths, we typically left-pad the shorter sequences, like:

# first example is left-padded; batch size is set to two.
input_tokens = [
    ["<pad>", "<pad>", "my", "name"],
    ["where", "are", "you", "from"],
]

And then, do you prepend the soft-prompt embeddings to the embeddings of the above padded tokens, as below?

# se1, se2 are soft-prompt embeddings that need to be tuned
input_embedding = [
    [se1, se2, Embedding("<pad>"), Embedding("<pad>"), Embedding("my"), Embedding("name")],
    [se1, se2, Embedding("where"), Embedding("are"), Embedding("you"), Embedding("from")],
]

Is my understanding right? Section 2.1 in the paper mentions that X is padded to the max sequence length.

Another concern I have: shouldn't we prepend the soft-prompt embeddings to the unpadded input embeddings and then pad the shorter sequences? Like:

# please note the change in the first example
input_embedding = [
    [Embedding("<pad>"), Embedding("<pad>"), se1, se2, Embedding("my"), Embedding("name")],
    [se1, se2, Embedding("where"), Embedding("are"), Embedding("you"), Embedding("from")],
]

Could you please share your insights on how padding is generally done in soft-prompt tuning? Thank you for the clarification.
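
For what it's worth, a common pattern (a sketch of my own, not the repository's implementation) is to prepend the learned prompt embeddings to the already-embedded tokens and extend the attention mask so the prompt positions are always attended to; the left-padding positions then stay masked out regardless of where the prompt sits:

import torch

batch, seq_len, d, n_prompt = 2, 4, 768, 2

token_embeds = torch.randn(batch, seq_len, d)   # embeddings of the (left-padded) input tokens
attention_mask = torch.tensor([[0, 0, 1, 1],    # first example is left-padded
                               [1, 1, 1, 1]])
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, d))  # trainable prompt embeddings

prompt_batch = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
inputs_embeds = torch.cat([prompt_batch, token_embeds], dim=1)   # (batch, n_prompt + seq_len, d)
prompt_mask = torch.ones(batch, n_prompt, dtype=attention_mask.dtype)
attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
# the pair can then be passed as model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)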
