
Comments (6)

sarchak commented on August 23, 2024

Tried with the smallest possible config.

```
micro_batch_size = 1  # lowered so this fits into 12 GB of VRAM
gradient_accumulation_iters = batch_size // micro_batch_size
```
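The idea behind that config is that gradient accumulation keeps the effective batch size fixed while only one micro-batch lives in VRAM at a time. A pure-Python sketch of the mechanism (not the litgpt training loop; the toy loss and learning rate are made up for illustration):

```python
# Accumulate the gradient of a squared-error loss over micro-batches,
# then take one optimizer step, matching a single full-batch update.
batch_size = 8
micro_batch_size = 1
gradient_accumulation_iters = batch_size // micro_batch_size  # 8

xs = [float(i) for i in range(batch_size)]
ys = [2.0 * x for x in xs]   # targets from a true weight of 2.0
w, lr = 0.0, 0.001

grad = 0.0
for i in range(gradient_accumulation_iters):
    lo = i * micro_batch_size
    for x, y in zip(xs[lo:lo + micro_batch_size], ys[lo:lo + micro_batch_size]):
        # d/dw of (w*x - y)^2, averaged over the full batch
        grad += 2.0 * (w * x - y) * x / batch_size
w -= lr * grad  # one weight update for all 8 micro-batches
```

Memory scales with `micro_batch_size`, while the update is equivalent to a full `batch_size` step.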

from litgpt.

scampion commented on August 23, 2024

Same here
```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB. GPU 0 has a total capacty of 22.19 GiB of which 98.50 MiB is free. Including non-PyTorch memory, this process has 22.09 GiB memory in use. Of the allocated memory 21.15 GiB is allocated by PyTorch, and 649.42 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

```
Device 0 [NVIDIA A10G] PCIe GEN 1@ 8x RX: 0.000 kB/s TX: 0.000 kB/s
GPU 0MHz    MEM 405MHz  TEMP  29°C FAN   0% POW  16 / 300 W
GPU[                          0%] MEM[                  0.3G/24.1G]
```
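The traceback itself suggests `max_split_size_mb` for fragmentation. The caching allocator reads it from the `PYTORCH_CUDA_ALLOC_CONF` environment variable, so it has to be set before the first CUDA allocation; the value 128 below is just a starting point, not a recommendation from this thread:

```python
import os

# Must run before torch makes its first CUDA allocation
# (e.g. at the top of the finetuning script, before importing torch).
# 128 MB is an arbitrary starting value; tune it per workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

This only helps when the failure is fragmentation ("reserved but unallocated" is large), not when the model genuinely does not fit.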

canamika27 commented on August 23, 2024

I also tried finetuning falcon-7b with micro_batch_size = 1 and observed the same: the VRAM consumption is high. One thing we can do is keep max_seq_length small, maybe around 512, because by default it uses model.config.block_size, which is 2048. So while preparing the data in scripts/prepare_alpaca.py (or your custom dataset preparation), change max_seq_length and try.
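The actual prepare_alpaca.py signature may differ between versions, but the core of the suggestion is just capping each tokenized example. A generic sketch (the function name is hypothetical, not litgpt's API):

```python
def truncate_example(token_ids: list, max_seq_length: int = 512) -> list:
    """Cap a tokenized example at max_seq_length tokens.

    Activation memory grows with sequence length, so capping it well
    below the model's block_size (2048 by default here) reduces peak VRAM
    at the cost of dropping the tail of long examples.
    """
    return token_ids[:max_seq_length]
```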

```
| NVIDIA-SMI 515.43.04    Driver Version: 515.43.04    CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100 80G... On | 00000000:65:00.0 Off | 0 |
| N/A 58C P0 176W / 300W | 53165MiB / 81920MiB | 60% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100 80G... On | 00000000:CA:00.0 Off | 0 |
| N/A 57C P0 202W / 300W | 45089MiB / 81920MiB | 75% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2301 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 3993892 C python 53158MiB |
| 1 N/A N/A 2301 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 3993958 C ...nvs/lit-parrot/bin/python 45082MiB |
+-----------------------------------------------------------------------------+
```

canamika27 commented on August 23, 2024

Even the finetuning results are not great compared to finetuning with LoRA / QLoRA. Has anyone else observed the same?

carmocca commented on August 23, 2024

Hi! Here's the memory usage on current master (commit b29ca09) with falcon-7b, always passing --precision 16-true:

  • finetune/adapter.py: 32.69 GB (micro_batch_size=4), 17.37 GB (micro_batch_size=1)
  • finetune/adapter_v2.py: 41.75 GB (micro_batch_size=4), 18.53 GB (micro_batch_size=1)
  • finetune/lora.py: 33.74 GB (micro_batch_size=4), 17.5 GB (micro_batch_size=1)

As @canamika27 suggested, if you force a smaller max_seq_length, that will take less memory. Also note that even though 16-true precision takes less memory, its training is unstable (#140)
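To see why a smaller max_seq_length helps, a crude back-of-the-envelope estimate is enough: activation memory grows linearly with sequence length. The sketch below is a deliberate oversimplification (one activation tensor per layer, ignoring attention scores and MLP intermediates), using falcon-7b's published hidden size of 4544 and 32 layers at 16-bit precision:

```python
def activation_memory_gb(micro_batch_size: int, seq_len: int,
                         hidden_size: int, n_layers: int,
                         bytes_per_element: int = 2) -> float:
    """Crude lower bound: one (batch, seq, hidden) activation per layer."""
    n = micro_batch_size * seq_len * hidden_size * n_layers * bytes_per_element
    return n / 1e9

# Falcon-7B-like dimensions (hidden size 4544, 32 layers), 16-bit:
full = activation_memory_gb(1, 2048, 4544, 32)   # default block_size
short = activation_memory_gb(1, 512, 4544, 32)   # capped max_seq_length
```

Cutting the sequence length from 2048 to 512 divides this component by 4; real savings differ, since attention memory can grow faster than linearly in seq_len.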

carmocca commented on August 23, 2024

I just merged some improvements to reduce the peak memory usage. Please pull the latest changes.

I'll also be adding a guide for dealing with OOMs with #182. Hope this helps
