
Comments (8)

minhluan1590 commented on July 25, 2024

While building the FAISS index with the recommended setting `--faiss_index_type ivfpq --faiss_code_size 16`, my machine with 8 × 80 GB A100 GPUs runs out of CUDA memory (after converting about 3 million passages). How can I save memory during this step? At this stage, I think the T5 model is not even loaded yet.

from atlas.

minhluan1590 commented on July 25, 2024

Well, it might be too small. The Atlas model requires a lot of GPU memory. I am converting the model from Sharded Data Parallel (SDP) to Fully Sharded Data Parallel (FSDP) to run with smaller GPUs. It is running now, but I am still unsure whether I can load the model and optimizer parameters stored with SDP and continue using them with FSDP.

@minhluan1590 Did you manage to run Atlas on a small GPU? Also, in an earlier comment you mentioned an 8 × 80 GB A100 machine; are you really using 640 GB of GPU memory? Have you tried using a smaller model and the FAISS PQ technique to reduce the memory requirement? If so, could you please share how much it came down to?

It's the fine-tuning process that forces us to use so much memory. During fine-tuning, the full index still needs to be computed and converted into FAISS afterwards. I am still trying to optimize the memory usage of this model and will share my findings when I am finished.

mlomeli1 commented on July 25, 2024

Hi @jamesoneill12, the 11B-parameter reader model corresponds to the Atlas-xxl size; you can select the Atlas-base size instead, whose reader model has only 220M parameters. Maybe this will do?
Alternatively, if you need to reduce the memory requirements at inference time further, you can use FAISS compressed indexes. Please have a look at the blog post to see how to run these for the NQ task: few-shot learning with retrieval augmented language models.
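To make the saving from a compressed index concrete, here is a back-of-the-envelope estimate, assuming 768-dimensional float32 embeddings and a Wikipedia-scale corpus of about 37M passages (both numbers are assumptions for illustration, not Atlas defaults):

```python
def index_size_gb(n_passages, bytes_per_vector):
    """Total index size in GB for a given per-vector storage cost."""
    return n_passages * bytes_per_vector / 1e9

n = 37_000_000   # hypothetical corpus size
dim = 768        # hypothetical embedding dimension

flat = index_size_gb(n, 4 * dim)   # float32 flat index: 3072 bytes/vector
pq64 = index_size_gb(n, 64)        # --faiss_code_size 64
pq16 = index_size_gb(n, 16)        # --faiss_code_size 16

print(f"flat: {flat:.1f} GB, pq64: {pq64:.2f} GB, pq16: {pq16:.2f} GB")
# → flat: 113.7 GB, pq64: 2.37 GB, pq16: 0.59 GB
```

Under these assumptions a PQ code of 64 bytes cuts the index roughly 48× relative to a flat float32 index, which is the kind of saving the blog post's table quantifies exactly.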

mlomeli1 commented on July 25, 2024

@minhluan1590 what model size are you using? I assume it's xl or xxl, so you might need more than 8 GPUs to load all the embeddings. You could either use a smaller model or try `--faiss_index_type pq --faiss_code_size 64`.
Btw, we can keep commenting here, but for future issues please open a new issue rather than commenting on a closed one, thanks!

prasad4fun commented on July 25, 2024

@minhluan1590 what model size are you using? I assume it's xl or xxl, so you might need more than 8 GPUs to load all the embeddings. You could either use a smaller model or try `--faiss_index_type pq --faiss_code_size 64`. Btw, we can keep commenting here, but for future issues please open a new issue rather than commenting on a closed one, thanks!

Hi @mlomeli1, could you specify the minimum requirements to run Atlas? I have a 12 GB GPU; would that be sufficient for fine-tuning?

minhluan1590 commented on July 25, 2024

Well, it might be too small. The Atlas model requires a lot of GPU memory. I am converting the model from Sharded Data Parallel (SDP) to Fully Sharded Data Parallel (FSDP) to run with smaller GPUs. It is running now, but I am still unsure whether I can load the model and optimizer parameters stored with SDP and continue using them with FSDP.

prasad4fun commented on July 25, 2024

Well, it might be too small. The Atlas model requires a lot of GPU memory. I am converting the model from Sharded Data Parallel (SDP) to Fully Sharded Data Parallel (FSDP) to run with smaller GPUs. It is running now, but I am still unsure whether I can load the model and optimizer parameters stored with SDP and continue using them with FSDP.

@minhluan1590 Did you manage to run Atlas on a small GPU? Also, in an earlier comment you mentioned an 8 × 80 GB A100 machine; are you really using 640 GB of GPU memory?
Have you tried using a smaller model and the FAISS PQ technique to reduce the memory requirement? If so, could you please share how much it came down to?

mlomeli1 commented on July 25, 2024

Hi @minhluan1590 and @prasad4fun, thanks for all the discussion.

As I said, different model sizes have different memory requirements, so it would be good to know which model size you intend to use. As a reference, I've used the base model on 8 × 40 GB V100 GPUs and the xxl model on 8 × 80 GB A100 GPUs with a flat index. It is true that we need to load the full embeddings for fine-tuning, so if you happen to have multiple nodes with, say, 4 or 8 GPUs each, each process loads fewer embeddings and the per-GPU memory requirement goes down (see the diagram below).
In the Atlas blog post, I've added a table with memory requirements for different PQ compression sizes; I hope that helps.

(diagram: atlas_distributed)
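The per-process saving described above is simple division: with a flat float32 index, each process holds roughly total embedding bytes divided by the world size. A sketch, again assuming 768-dim float32 embeddings and a hypothetical 37M-passage corpus:

```python
def per_gpu_embedding_gb(n_passages, world_size, dim=768, bytes_per_dim=4):
    """Flat-index embedding memory per process when shards are split evenly."""
    return n_passages * dim * bytes_per_dim / world_size / 1e9

# one node with 8 GPUs vs four nodes with 8 GPUs each
single_node = per_gpu_embedding_gb(37_000_000, world_size=8)
four_nodes = per_gpu_embedding_gb(37_000_000, world_size=32)
print(f"{single_node:.2f} GB vs {four_nodes:.2f} GB per GPU")
# → 14.21 GB vs 3.55 GB per GPU
```

Under these assumed sizes, going from 8 to 32 processes shrinks each GPU's embedding shard fourfold, which is why multi-node fine-tuning can fit where a single node cannot.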
