Comments (5)
The deadlock occurs when we detect that no generation task is making progress. This can happen for a few reasons, including many concurrent generation requests, very long sequences, or limited GPU memory. Our current mitigation will hurt performance if you are hitting it often. How many requests are you sending to the server at one time?
Also, I believe @tohtana is working on an improved solution to this problem.
from deepspeed-mii.
I am sending a few hundred requests within one batch.
If these requests are generating lots of tokens, then sending this many at once will definitely cause the deadlock situation. If you can send the requests in smaller batches, that would avoid the problem. However, I will let @tohtana comment on any upcoming changes that will allow users to send large batches of requests at once!
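The smaller-batch workaround suggested here can be sketched as a simple chunking loop. The helper below is generic Python; the `mii.client` / `client.generate` calls in the comments are assumptions about the MII persistent-deployment API and the deployment name is illustrative, so check them against your installed version:

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical usage against a running MII server (API names assumed):
#   import mii
#   client = mii.client("my-deployment")        # illustrative deployment name
#   for batch in chunked(prompts, 32):          # e.g. 32 requests per batch
#       responses = client.generate(batch, max_new_tokens=256)
```

Splitting a few hundred requests into batches of a few dozen keeps the KV-cache footprint of any one batch bounded, at the cost of some scheduling overhead between batches.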
Hi @flexwang,
DeepSpeed-FastGen (MII) allocates KV cache for all requests that are processed in a batch. To avoid this warning, a simple workaround is to reduce the number of requests in a batch. In your case, I recommend starting with 10-20 requests, though the optimal number depends heavily on the lengths of the prompts and the generated tokens. If you don't encounter the warning message, you may be able to improve throughput further by gradually increasing the number of requests.
We understand that tuning the number of requests isn't always straightforward, and we're considering either automating this adjustment or at least making it easier in future versions.
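Until such automation exists, the manual "start small, then grow" tuning described here can be sketched as a geometric search. Everything below is a hypothetical helper, not part of MII: `try_batch` stands in for whatever callable sends a batch of the given size and raises on failure (e.g. when the no-progress warning escalates into an error in your setup):

```python
def find_batch_size(try_batch, start=10, limit=512):
    """Double the batch size until `try_batch(size)` fails or `limit`
    is exceeded; return the largest size that succeeded (or None if
    even `start` failed)."""
    size, best = start, None
    while size <= limit:
        try:
            try_batch(size)   # user-supplied probe: sends one batch of `size` requests
            best = size
            size *= 2         # grow geometrically, as suggested above
        except RuntimeError:  # assumed failure signal; adapt to your error type
            break
    return best
```

A geometric search keeps the number of probe batches logarithmic in the final size; once a working size is found, a binary search between `best` and the first failing size could refine it further.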
vLLM implements swapping (Section 4.5 of the vLLM paper) as an alternative to recomputation when no space can be allocated for the KV cache of new tokens. Will MII implement KV cache swapping?