Comments (8)
Hi @ldh127 - can you please be more specific and share more about what you are trying to do and what errors you are hitting?
@ldh127, does the following help?
https://deepspeed.readthedocs.io/en/latest/model-checkpointing.html#zero-checkpoint-fp32-weights-recovery
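For reference, that page documents an offline recovery flow along these lines (a minimal sketch; `"checkpoint_dir"` stands in for wherever your ZeRO checkpoint was saved):

```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Consolidates the sharded ZeRO checkpoint into a single fp32 state_dict,
# already placed on CPU.
state_dict = get_fp32_state_dict_from_zero_checkpoint("checkpoint_dir")
```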
Yes, I use the transformers Trainer to call DeepSpeed, and it saves a DeepSpeed checkpoint containing per-GPU model and optimizer files. I want a single optim.pt file for selecting SFT data. My code can only load one global optim.pt, but the DeepSpeed checkpoint produces multiple optimizer and model shards. How can I merge the multiple optimizer files into one global file?
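For context, a sketch of the sharded layout that Trainer + DeepSpeed ZeRO typically produces (the path and step number are hypothetical, and the number of `zero_pp_rank_*` files depends on the GPU count):

```python
import os

# List the shard files inside a ZeRO checkpoint directory.
ckpt = "output_dir/checkpoint-1000/global_step1000"  # hypothetical path
for name in sorted(os.listdir(ckpt)):
    print(name)
# Expected shape of the listing (ZeRO stage 1/2):
#   mp_rank_00_model_states.pt
#   zero_pp_rank_0_mp_rank_00_optim_states.pt
#   zero_pp_rank_1_mp_rank_00_optim_states.pt
#   ...
```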
Yes, but I think that code is for converting the model parameters (like ds_to_universal), not for merging multiple optimizer files into one. Can it merge DeepSpeed's multi-GPU optimizer files into a single PyTorch optim.pt file?
@ldh127, why do you say the link is related to ds_to_universal? Did you try it? Can you clarify how your scenario is different from the use case below? Thanks!
[screenshot: the zero-checkpoint fp32 weights recovery example from the DeepSpeed docs]
Yes, I tried this code, and I do end up with a single .pth file. [screenshot: my DeepSpeed checkpoint folder] I used your code to read this folder, and it merged and saved a single file. [screenshot: the resulting file] When I print the state_dict keys, they look like this:

base_model.model.model.layers.38.self_attn.q_proj.lora_A.default.weight
base_model.model.model.layers.38.self_attn.q_proj.lora_B.default.weight
base_model.model.model.layers.38.self_attn.k_proj.lora_A.default.weight
base_model.model.model.layers.38.self_attn.k_proj.lora_B.default.weight

But these look like model parameter names, not optimizer state keys?
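That observation is expected: ZeRO keeps the fp32 master copy of the weights inside the optimizer partitions, and zero_to_fp32 gathers those master weights back into a model state_dict, so the keys are parameter names by design. A minimal sketch of what the merged dict contains (the path is hypothetical):

```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# The utility reconstructs fp32 *model weights* from the optimizer shards;
# it does not carry over Adam moments or other optimizer state.
state_dict = get_fp32_state_dict_from_zero_checkpoint("checkpoint_dir")
for name, tensor in list(state_dict.items())[:4]:
    print(name, tuple(tensor.shape), tensor.dtype)  # fp32 weight tensors
```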
Looking at the picture above: if the final file, demo_state_dict.pth, contains the optimizer parameters, how can I get the optimizer state_dict? If it were the merged optimizer file, I would expect to access it with something like state_dict["optim_state"], but there is no optim_state key in the dict, so I don't know what is wrong in my steps.
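One way to see where the optimizer state actually lives (a hedged sketch; file and step names are hypothetical, assuming a ZeRO stage 1/2 layout):

```python
import torch

# The merged .pth from zero_to_fp32 holds only fp32 weights, so there is
# no "optim_state" key in it.
merged = torch.load("demo_state_dict.pth", map_location="cpu")
print("optim_state" in merged)  # False: keys are parameter names only

# The raw Adam moments remain partitioned in the per-rank shard files.
shard = torch.load(
    "global_step1000/zero_pp_rank_0_mp_rank_00_optim_states.pt",
    map_location="cpu",
)
print(list(shard.keys()))  # e.g. ['optimizer_state_dict', ...]
```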
I also read the code at https://github.com/microsoft/DeepSpeed/blob/4c15ad9f8d51a1950842c69bbbc9d93c73afbcfc/deepspeed/utils/zero_to_fp32.py, but I don't know what I would need to change. Can you give me more detailed help? Thanks.
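If the goal really is the merged optimizer state rather than the weights, a heavily hedged starting point for ZeRO stage 1/2 is below. It concatenates each rank's flattened Adam moments per parameter group, relying on the (version-dependent) internal layout of the shard files, and it ignores the sub-parameter alignment that deepspeed/checkpoint/ds_to_universal.py handles properly, so treat it as an exploration aid, not a drop-in tool:

```python
import glob
import re
import torch

def rank_of(path):
    # Order shards numerically by data-parallel rank (a plain lexicographic
    # sort would put rank 10 before rank 2).
    return int(re.search(r"zero_pp_rank_(\d+)", path).group(1))

# Assumed (not guaranteed) layout: each shard file holds
# "optimizer_state_dict" -> "base_optimizer_state", a list with one entry
# per parameter group containing that rank's slice of the flattened moments.
paths = sorted(
    glob.glob("global_step1000/zero_pp_rank_*_mp_rank_00_optim_states.pt"),
    key=rank_of,
)
shards = [torch.load(p, map_location="cpu")["optimizer_state_dict"] for p in paths]

merged = {}
num_groups = len(shards[0]["base_optimizer_state"])
for key in ("exp_avg", "exp_avg_sq"):
    merged[key] = [
        torch.cat([s["base_optimizer_state"][g][key] for s in shards])
        for g in range(num_groups)
    ]

torch.save(merged, "merged_optim.pt")  # one flat tensor per group and moment
```

Mapping those flat tensors back to named parameters needs the parameter order and shapes, which is exactly the bookkeeping ds_to_universal does, and likely why it was suggested above.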