Comments (5)
@bkuster0 Thank you very much for the information; I have read it in detail. Fine-tuning of the CogVLM2 model does not support ZeRO-3, but I still want to know what distinguishes a model that supports ZeRO-3 from one that does not, and how I should modify the model to make it support ZeRO-3. I would be very grateful if someone could answer this.
from deepspeed.
@tiandazhao, if you are using the HF+DeepSpeed integration properly, ZeRO stage 3 is detected at module initialization so that parameters are sharded across devices: https://huggingface.co/docs/transformers/main/en/deepspeed#non-trainer-deepspeed-integration
So your observation is surprising. Are you able to share the log?
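For reference, a minimal sketch of that non-Trainer integration path (the model name and config values below are illustrative placeholders, not taken from this thread):

```python
# Sketch of the HF non-Trainer DeepSpeed integration with ZeRO stage 3.
# NOTE: model name and config values are placeholders.
import deepspeed
from transformers import AutoModelForCausalLM
from transformers.integrations import HfDeepSpeedConfig

ds_config = {
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
    "train_batch_size": 3,  # micro_batch * grad_accum * world_size (3 GPUs)
}

# HfDeepSpeedConfig must be created BEFORE from_pretrained and kept alive:
# it is how from_pretrained detects stage 3 and constructs the model under
# zero.Init, so parameters are sharded across devices at initialization.
dschf = HfDeepSpeedConfig(ds_config)  # keep a reference to this object

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```

The key point is the ordering: if the DeepSpeed config is not visible to transformers yet when from_pretrained runs, zero.Init is skipped and the model is materialized in full.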
from deepspeed.
Since the log generated during the process is quite messy, I will reorganize my questions.
First, DeepSpeed's estimates of GPU memory usage for ZeRO stage 2 and stage 3 (screenshots of the estimator output omitted; the calls that produce them are sketched below).
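These estimates can be produced with DeepSpeed's built-in estimators; a sketch, assuming a single node with 3 GPUs as in my setup (the model name is a placeholder):

```python
# Sketch: DeepSpeed's built-in ZeRO memory estimators (stage 2 and stage 3).
# Assumes a single node with 3 GPUs; the model is a small placeholder.
from transformers import AutoModel
from deepspeed.runtime.zero.stage_1_and_2 import (
    estimate_zero2_model_states_mem_needs_all_live,
)
from deepspeed.runtime.zero.stage3 import (
    estimate_zero3_model_states_mem_needs_all_live,
)

model = AutoModel.from_pretrained("bert-base-uncased")  # placeholder model

# Each call prints per-GPU and per-CPU memory needs for the model states
# under the given topology.
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=3, num_nodes=1)
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=3, num_nodes=1)
```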
After transformers.Trainer is initialized, training enters the training loop and executes the self.accelerator.prepare method.
Next, DeepSpeed is initialized.
Next, a DeepSpeedEngine is created.
Then the _configure_distributed_model method is run inside DeepSpeedEngine.__init__().
In this method, is_zero_init_model and dont_change_device are checked.
At this point the model has not been constructed under zero.Init, so its parameters carry no ds_id attribute, and is_zero_init_model is necessarily False. This is the problem: self.module.to(self.device) is then executed, and the entire model is loaded onto the GPU.
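If I read the source correctly, the check behaves roughly like the sketch below (paraphrased, not the verbatim DeepSpeed code; the real check also requires stage 3 to be enabled), and it is deepspeed.zero.Init that attaches ds_id:

```python
# Paraphrased sketch of the detection described above, NOT the verbatim
# DeepSpeed source. Run under the `deepspeed` launcher so that
# torch.distributed is initialized.
import deepspeed
import torch

def looks_zero_init(module: torch.nn.Module) -> bool:  # hypothetical helper
    # A model counts as zero-init only if its parameters carry the ds_id
    # attribute that deepspeed.zero.Init attaches.
    return any(hasattr(p, "ds_id") for p in module.parameters())

# Constructed normally: no ds_id, so the engine later runs
# self.module.to(self.device) and every rank gets a full copy.
plain = torch.nn.Linear(8, 8)
print(looks_zero_init(plain))  # False

# Constructed under zero.Init: parameters are partitioned at creation time
# and carry ds_id. Config values are illustrative, assuming 3 ranks.
ds_config = {"zero_optimization": {"stage": 3},
             "train_micro_batch_size_per_gpu": 1,
             "train_batch_size": 3}
with deepspeed.zero.Init(config_dict_or_path=ds_config):
    sharded = torch.nn.Linear(8, 8)
print(looks_zero_init(sharded))  # True
```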
I am training with three GPUs.
Finally, by the time training actually starts, each GPU already holds a full copy of the model, leaving too little memory for further training, so GPU memory overflows.
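Rough arithmetic for why this overflows (the parameter count is my assumption; CogVLM2 variants differ):

```python
# Back-of-the-envelope memory arithmetic for the situation above.
# ASSUMPTION: ~19B parameters in fp16/bf16 (2 bytes each); adjust for
# the actual CogVLM2 variant.
params = 19e9
weight_bytes = 2 * params
print(f"full copy per GPU  : {weight_bytes / 2**30:.1f} GiB")      # ~35.4 GiB
print(f"sharded over 3 GPUs: {weight_bytes / 3 / 2**30:.1f} GiB")  # ~11.8 GiB
# This counts weights only; gradients and optimizer states add more,
# which is exactly what the ZeRO stages are meant to partition.
```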
The above is my amateur, superficial understanding of the DeepSpeed initialization flow. If there are any errors, please point them out. I also hope someone can answer my questions. Thank you very much.
from deepspeed.
I hit exactly the same problem when fine-tuning Llama 2: is_zero_init_model is always False, and all parameters are loaded onto every GPU without being partitioned. @loadams @tjruwase @deepcharm
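One thing worth checking (per the HF DeepSpeed docs): with Trainer, the TrainingArguments object must be created before from_pretrained, otherwise the model is loaded without zero.Init. A minimal sketch, with placeholder paths and model name:

```python
# Sketch of the ordering the HF docs require for ZeRO-3 with Trainer.
# NOTE: output_dir, config path, and model name are placeholders.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# 1) Create TrainingArguments (with the ZeRO-3 config attached) FIRST.
args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_zero3_config.json",  # stage-3 DeepSpeed config file
)

# 2) Only now call from_pretrained: it sees the ZeRO-3 config and builds
#    the model under zero.Init, so parameters carry ds_id and are sharded.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

trainer = Trainer(model=model, args=args)  # datasets etc. omitted here
```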
from deepspeed.
I am also confused about why from_pretrained only loads the parameters into CPU memory, and the parameters are then moved from CPU to GPU inside trainer.train.
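A quick way to see both halves of this (sketch; the model name is a placeholder):

```python
# Diagnostic sketch for the behavior described above: where did
# from_pretrained put the weights, and were they zero.Init-partitioned?
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
p = next(model.parameters())
print(p.device)             # cpu -- from_pretrained materializes on CPU
print(hasattr(p, "ds_id"))  # False unless loaded under zero.Init
# Without ds_id, DeepSpeedEngine later calls self.module.to(self.device),
# which is the CPU -> GPU copy observed inside trainer.train().
```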
from deepspeed.