Comments (8)
While building the FAISS index with the recommended setting "--faiss_index_type ivfpq --faiss_code_size 16", my machine with 8 x 80GB A100 GPUs runs out of CUDA memory after converting about 3 million passages. How can I reduce memory use during this step? At this stage, I believe the T5 model is not even loaded yet.
from atlas.
Well, it might be too small. The Atlas model requires a lot of GPU memory. I am converting the model from Sharded Data Parallel (SDP) to Fully Sharded Data Parallel (FSDP) so it can run with less GPU memory. It is running now, but I am still unsure whether I can load model and optimizer parameters that were saved under SDP and keep using them with FSDP.
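On the SDP-to-FSDP question, one common pattern (a hedged sketch, not Atlas's actual checkpoint code) is to load a full, unsharded state dict into the plain model before wrapping it with FSDP, so sharding happens after the parameters are restored. Optimizer state saved under a different sharding strategy generally has to be consolidated into a full state dict before it can be reloaded this way. The checkpoint path and tiny model below are illustrative:

```python
import os
import tempfile
import torch
import torch.nn as nn

# Sketch: save a full (unsharded) checkpoint, then restore it into a fresh,
# still-unwrapped model. In distributed code, FSDP wrapping happens *after*
# the load, so FSDP shards parameters that are already restored.
model = nn.Linear(4, 4)

ckpt_path = os.path.join(tempfile.mkdtemp(), "checkpoint.pt")  # hypothetical path
torch.save(model.state_dict(), ckpt_path)

restored = nn.Linear(4, 4)
restored.load_state_dict(torch.load(ckpt_path))
# In distributed code the wrap would happen here, after loading:
#   restored = FSDP(restored)
```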
@minhluan1590 Did you manage to run Atlas on a small GPU? Also, in an earlier comment you mentioned an 8 x 80GB A100 machine; are you really using the full 640 GB of GPU memory? Have you tried using a smaller model and the FAISS PQ technique to reduce the memory requirement? If so, could you share how much it came down to?
It's the fine-tuning process that forces us to use so much memory. During fine-tuning, the full set of embeddings still needs to be computed, and it is only converted into a FAISS index afterwards. I am still trying to optimize the memory use of this model and will share what I find when I am finished.
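A back-of-envelope estimate shows why the uncompressed embeddings dominate memory here. The passage count and embedding dimension below are illustrative assumptions, not Atlas's exact numbers:

```python
# Hedged back-of-envelope: memory for N passage embeddings, assuming fp32.
N = 30_000_000   # assumed passage count (illustrative, not the exact corpus size)
d = 768          # assumed embedding dimension

flat_gb = N * d * 4 / 1e9   # flat fp32 index: 4 * d bytes per vector
pq16_gb = N * 16 / 1e9      # IVFPQ codes at code_size 16: 16 bytes per vector

print(f"flat: {flat_gb:.1f} GB, pq16 codes: {pq16_gb:.2f} GB")
```

Under these assumptions the flat embeddings come to roughly 92 GB while the PQ codes alone are under 1 GB (excluding IVF and quantizer overheads), which is why compression or sharding across more GPUs is needed during fine-tuning.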
Hi @jamesoneill12, the 11B-parameter reader model corresponds to the Atlas-xxl size; you can select the Atlas-base size instead, whose reader model is only 220M parameters. Maybe this will do?
Alternatively, if you need to reduce the memory requirements further at inference time, you can use FAISS compressed indexes. Please have a look at the blog post "few-shot learning with retrieval augmented language models" to see how to run these for the NQ task.
@minhluan1590 What size of model are you using? I am assuming it's xl or xxl, so you might need more than 8 GPUs to load all the embeddings. You could either use a smaller model or try "--faiss_index_type pq --faiss_code_size 64".
Btw, we can keep commenting here, but for future issues please open a new issue rather than commenting on a closed one, thanks!
Hi @mlomeli1, could you specify the minimum requirements to run Atlas? I have a 12 GB GPU; would that be sufficient for fine-tuning?
Hi @minhluan1590 and @prasad4fun, thanks for all the discussions.
As I said, different model sizes have different memory requirements, so it would be good to know which model size you intend to use. As a reference, I have run the base model on 8 x 40 GB V100 GPUs and the xxl model on 8 x 80 GB A100 GPUs, both with a flat index. It is true that we need to load the full embeddings for fine-tuning, so if you happen to have multiple nodes with, say, 4 or 8 GPUs each, each process then loads fewer embeddings and the per-GPU memory requirement is lower (see the diagram below).
In the Atlas blog post, I've added a table with memory requirements for different PQ compression sizes; I hope that helps.
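The multi-node idea above amounts to partitioning the passage range across processes so each GPU holds only its own slice of the embeddings. A minimal sketch with a hypothetical helper (not Atlas's actual sharding code):

```python
def shard_bounds(n_passages: int, world_size: int, rank: int) -> tuple[int, int]:
    """Return the [start, end) slice of passages a given rank should hold."""
    per_rank = (n_passages + world_size - 1) // world_size  # ceil division
    start = rank * per_rank
    end = min(start + per_rank, n_passages)
    return start, end

# e.g. 30M passages over 2 nodes x 8 GPUs = 16 processes:
print(shard_bounds(30_000_000, 16, 0))  # first rank holds passages [0, 1875000)
```

Doubling the number of processes halves the embeddings each GPU must hold, which is why adding nodes reduces the per-GPU memory requirement for fine-tuning.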