Comments (15)
@tjruwase please try this example (https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune.sh) with zero3-offload. Thanks.
from deepspeed.
I found a workaround. Just manually patching your runtime/zero/stage3.py according to PR 5461 will fix everything.
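For anyone applying the workaround, here is a minimal sketch of how to locate the installed stage3.py that has to be edited by hand; the actual change to apply comes from PR 5461 and is not reproduced here:

```python
# Sketch: find the stage3.py file in the active DeepSpeed installation.
# The concrete edit itself comes from PR 5461; this only locates the file.
import os
import deepspeed

stage3_path = os.path.join(os.path.dirname(deepspeed.__file__), "runtime", "zero", "stage3.py")
print("DeepSpeed version:", deepspeed.__version__)
print("File to patch:", stage3_path)
```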
from deepspeed.
Hi @pacman100 - thanks for making this issue here to better track it. Does this also happen with the latest changes in the master branch?
from deepspeed.
> Hi @pacman100 - thanks for making this issue here to better track it. Does this also happen with the latest changes in the master branch?
I confirm that the test passes when using the master branch. It would be great to have a patch release if possible.
from deepspeed.
The problem still exists with ZeRO-3 offload. It seems the issue lies in the parameter-partitioning part of ZeRO-3: if the model has multiple parallel modules or frozen parameters, the offload procedure cannot build the correct parameter mapping. Please take a look and fix it. Thanks.
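For illustration, a minimal, hypothetical sketch of the kind of setup being described (a frozen submodule alongside a trainable one, initialized under ZeRO-3 with CPU offload). It is not the LLaVA repro, just the smallest shape that exercises frozen parameters under stage 3, and it would be launched with the `deepspeed` launcher:

```python
# Hypothetical minimal setup: frozen "vision" module + trainable projector under ZeRO-3 offload.
import torch.nn as nn
import deepspeed

class ToyMultiModal(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Linear(64, 64)  # stand-in for a frozen vision tower
        self.proj = nn.Linear(64, 64)    # trainable projector
        for p in self.vision.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.proj(self.vision(x))

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": True},
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
}

model = ToyMultiModal()
# Only the trainable parameters are handed to the optimizer, mirroring the frozen/trainable split.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config=ds_config,
)
```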
from deepspeed.
@bug-fixed, are you able to share a repro for the zero3-offload case? Thanks!
from deepspeed.
@bug-fixed that repro does not work. Please provide a more precise, single-script reproduction.
from deepspeed.
@jomayeri, thanks for the response. The file needed by the script can be downloaded here: https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-13b-v1.5/tree/main.
Unfortunately, it is difficult for me to prepare a more concise script; apologies for this. I checked a model with only Llama-3, and ZeRO-3 offload works fine. But when I tested it using the script above, i.e., with the model equipped with a vision transformer and another simple linear module, the problem occurred. I guess many factors may lead to the problem. Please note that the conclusion in my previous comment might be wrong because of my very limited knowledge of DeepSpeed. I have the following partial error information for your check:
**deepspeed_aio: fstat for read failed on /lscratch/26730337/offload/param/zero_stage_3/bfloat16params/rank0/291_param.tensor.swp error = 2**
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/engine.py", line 1582, in _configure_zero_optimizer
[cn1112:0]: optimizer = DeepSpeedZeroOptimizer_Stage3(
[cn1112:0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 362, in __init__
[cn1112:0]: self._setup_for_real_optimizer()
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 472, in _setup_for_real_optimizer
[cn1112:0]: self._create_fp32_partitions()
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 845, in _create_fp32_partitions
[cn1112:0]: self._swap_in_sub_group_to_flat_buffer(unpinned_fp32_buffer, i)
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 762, in _swap_in_sub_group_to_flat_buffer
[cn1112:0]: param.nvme_swapper.swap_in([param], async_op=False)
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 306, in swap_in
[cn1112:0]: swap_in_tensors(self.aio_read_handle, swap_in_buffers, swap_in_paths)
[cn1112:0]: File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/swap_tensor/utils.py", line 21, in swap_in_tensors
[cn1112:0]: assert (swap_handle.async_pread(buffer, path) == 0)
It shows that some parameter file was not saved to storage. I guess one possible reason is that it failed to build the correct parameter mapping.
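One way to confirm that interpretation is to list which parameter swap files actually made it to storage. A sketch below, using the swap directory from the error message above; adjust it to your own nvme_path:

```python
# Sketch: list the parameter swap files DeepSpeed wrote under the NVMe offload path,
# to see whether the tensor named in the error (291_param.tensor.swp) is missing.
import glob
import os

swap_dir = "/lscratch/26730337/offload/param/zero_stage_3/bfloat16params/rank0"
files = sorted(glob.glob(os.path.join(swap_dir, "*_param.tensor.swp")))
print(f"{len(files)} swap files under {swap_dir}")
print(files[:5], "..." if len(files) > 5 else "")
```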
My ZeRO-3 offload config is:
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/lscratch/26730337/offload/optimizer",
"pin_memory": true,
"ratio": 0.2,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/lscratch/26730337/offload/param",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e9,
"max_in_cpu": 1e9
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": 0,
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 0,
"gather_16bit_weights_on_model_save": true
},
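Since the failure goes through the async-I/O swapper, one sanity check worth running before launch is to verify that the aio extension is usable and that the offload directories are writable. A sketch below; it assumes the standard op-builder API and reuses the nvme_path values from the config above:

```python
# Sketch: verify NVMe-offload prerequisites for the config above.
import os
from deepspeed.ops.op_builder import AsyncIOBuilder

# The aio extension (libaio) must be buildable for NVMe offload to work.
print("aio op compatible:", AsyncIOBuilder().is_compatible(verbose=True))

for path in ("/lscratch/26730337/offload/optimizer",
             "/lscratch/26730337/offload/param"):
    os.makedirs(path, exist_ok=True)
    print(path, "writable:", os.access(path, os.W_OK))
```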
from deepspeed.
@tjruwase I have updated my comment, please kindly check it. Thanks.
from deepspeed.
@bug-fixed Does the same thing happen when you offload to CPU?
from deepspeed.
@jomayeri Just encountered this problem. I use CPU offloading, and here is my deepspeed config:
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
The specific traceback is (the same error is raised on each rank, e.g. cuda:0 and cuda:2):
  File ".../site-packages/deepspeed/runtime/zero/stage3.py", line 2047, in step
    self.unscale_and_clip_grads(sub_group_id, scaled_global_grad_norm)
  File ".../site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File ".../site-packages/deepspeed/runtime/zero/stage3.py", line 2117, in unscale_and_clip_grads
    self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
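For what it's worth, the error itself is just an in-place op between tensors on different devices; a toy illustration (not DeepSpeed code) of the failure mode in that last frame:

```python
# Toy illustration of the RuntimeError above: a CPU-resident gradient multiplied
# in place by a CUDA-resident scale tensor.
import torch

grad = torch.ones(4)  # fp32 partition gradient kept on CPU when the optimizer is offloaded
if torch.cuda.is_available():
    combined_scale = torch.tensor(2.0, device="cuda")
    try:
        grad.mul_(1.0 / combined_scale)
    except RuntimeError as e:
        print(e)  # Expected all tensors to be on the same device ...
```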
from deepspeed.
> I found a workaround. Just manually patching your runtime/zero/stage3.py according to PR 5461 will fix everything.
@lihe07 - so using the latest deepspeed built from source works? You don't hit any issues with Zero stage 3?
from deepspeed.
> so using the latest deepspeed built from source works? You don't hit any issues with Zero stage 3?
@loadams I directly modified the source in my DeepSpeed 0.14.2 installation, and ZeRO stage 3 is working smoothly now.
The status of the latest code likely depends on PR 5493, as it re-introduced the buggy optimization.
from deepspeed.
> @bug-fixed Does the same thing happen when you offload to CPU?
@jomayeri The machine I'm working on has very limited memory and is shared with others, so it is difficult for me to test the "device": "cpu" option.
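If it helps to gauge whether the shared machine has enough host RAM for CPU offload before actually trying it, DeepSpeed ships a ZeRO-3 memory estimator. A sketch below; the parameter counts are rough assumptions for a LLaVA-1.5-13B-sized model:

```python
# Sketch: estimate CPU/GPU memory needs for ZeRO-3 + CPU offload without loading the model.
# total_params / largest_layer_params are rough, assumed numbers for a ~13B model.
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_cold

estimate_zero3_model_states_mem_needs_all_cold(
    total_params=13e9,
    largest_layer_params=2e8,
    num_gpus_per_node=1,
    num_nodes=1,
)
```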
from deepspeed.
Both this issue and #5422 refer to this line in ZeRO-3, which appears to have been reverted to the correct state in master. If any user is having similar issues (@bug-fixed), please open a separate thread.
from deepspeed.