Inference methods
a. `deepspeed.initialize` enables ZeRO-Inference, which uses offloading and ZeRO-3 sharding and is designed for throughput-oriented or low-budget scenarios. See this and this.
b. `deepspeed.init_inference` enables highly optimized inference computation based on custom CUDA kernels and automatic tensor parallelism.
c. MII is an end-to-end inference server solution that integrates our latest fast inference engine, FastGen. Our future efforts in this space will target MII and FastGen.
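As a minimal sketch of the first entry point, a ZeRO-Inference setup is driven by a ZeRO stage-3 config passed to `deepspeed.initialize` (the field values here are illustrative placeholders, not recommended settings; consult the DeepSpeed config documentation for your model and hardware):

```python
# Sketch: building a minimal ZeRO-Inference (ZeRO stage-3) config dict.
# Values are illustrative only.

def zero_inference_config(offload: bool) -> dict:
    """Build a minimal ZeRO stage-3 config for inference.

    When `offload` is True, parameters are offloaded to CPU memory,
    trading throughput for a smaller GPU-memory footprint.
    """
    config = {
        # Required config field; batch size is otherwise inference-specific.
        "train_micro_batch_size_per_gpu": 1,
        "zero_optimization": {"stage": 3},
    }
    if offload:
        config["zero_optimization"]["offload_param"] = {"device": "cpu"}
    return config

# The engine would then be created roughly as:
#   model_engine, *_ = deepspeed.initialize(
#       model=model, config=zero_inference_config(offload=True))
# whereas the kernel-injection path (option b) instead uses
# deepspeed.init_inference on an already-loaded model.
```

The helper only assembles the config dictionary; the actual `deepspeed.initialize` call is shown in comments since it requires a model and a GPU environment.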
All of these methods are memory-friendly in different ways: ZeRO-Inference through offloading and ZeRO-3 sharding, and FastGen through tensor parallelism.
70B Llama inference: the two options you list both fall under ZeRO-Inference and represent different trade-offs: ZeRO-3 sharding (2x A100) is faster but costs more than offloading (1x A100).
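A back-of-the-envelope memory estimate (fp16 weights only, ignoring activations and the KV cache, and assuming 80 GB A100s) shows why this trade-off arises:

```python
# Rough per-GPU weight footprint for the two ZeRO-Inference options above.
# fp16 weights only; activation and KV-cache memory are ignored.

PARAMS_B = 70        # Llama-70B parameter count, in billions
BYTES_PER_PARAM = 2  # fp16

def weights_gb(n_gpus: int) -> float:
    """Per-GPU weight footprint (GB) with parameters sharded across n_gpus."""
    return PARAMS_B * BYTES_PER_PARAM / n_gpus  # billions * bytes -> GB

# 2x A100-80GB with ZeRO-3 sharding: 70 GB of weights per GPU, so the model
# fits on-device and runs faster.
# 1x A100-80GB: 140 GB of weights exceed GPU memory, so CPU/NVMe offload is
# required -- cheaper, but slower due to host-to-device transfers.
```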
from deepspeed.