Comments (9)

hzphzp commented on July 30, 2024

Same issue; it hangs when training with multiple GPUs.

lucidrains commented on July 30, 2024

@jpfeil could you try commenting out these two lines and see if it gets past the first step?

lucidrains commented on July 30, 2024

@jpfeil ah, i don't know from first glance at the code, and don't have access to multi-gpu at the moment

lucidrains commented on July 30, 2024

@jpfeil did you resolve the other two issues that are open on single gpu?

jpfeil commented on July 30, 2024

@lucidrains I couldn't get multi-gpu to work, so I'm moving forward with single-gpu. I tried running ImageNet, but the adaptive adversarial weight goes to NaN, which causes the loss to become NaN:

LossBreakdown(
    recon_loss=tensor(0.0777, device='cuda:0', grad_fn=<...>),
    lfq_aux_loss=tensor(0.0022, device='cuda:0', grad_fn=<...>),
    quantizer_loss_breakdown=LossBreakdown(
        per_sample_entropy=tensor(0.0003, device='cuda:0', grad_fn=<...>),
        batch_entropy=tensor(0.0003, device='cuda:0', grad_fn=<...>),
        commitment=tensor(0.0024, device='cuda:0', grad_fn=<...>)
    ),
    perceptual_loss=tensor(0.2947, device='cuda:0', grad_fn=<...>),
    adversarial_gen_loss=tensor(0.0186, device='cuda:0', grad_fn=<...>),
    adaptive_adversarial_weight=tensor(nan, device='cuda:0'),
    multiscale_gen_losses=[],
    multiscale_gen_adaptive_weights=[]
)

Is there a check we can add here that will allow the training to continue?
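
One possible guard, as a rough sketch: sanitize the weight before it scales the generator loss. The helper below is made up for illustration, not something in magvit2-pytorch:

    import torch

    def sanitize_adaptive_weight(weight, max_value = 1e4):
        # hypothetical check: swap the NaN/inf values the gradient-norm
        # ratio can produce for safe ones, then clamp, so that a single
        # bad step cannot turn the whole loss into NaN
        weight = torch.nan_to_num(weight, nan = 0., posinf = max_value, neginf = 0.)
        return weight.clamp(0., max_value)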

lucidrains commented on July 30, 2024

@jpfeil ahh, hard to know without doing training myself and ironing out the issues

try 0.1.43, and if that doesn't work, i'll get around to it this weekend

ziyannchen commented on July 30, 2024

Caught the same problem here. Multi-GPU training gets stuck at the start, while single-GPU training works fine.
I did some debugging. The first step always completes; it is the second step that hangs, in the last self.accelerator.backward of the gradient-accumulation loop. Specifically, in trainer.py:

def train_step(self, dl_iter):
    for grad_accum_step in range(self.grad_accum_every):
        ...
        is_last = grad_accum_step == (self.grad_accum_every - 1)
        context = partial(self.accelerator.no_sync, self.model) if not is_last else nullcontext

        data, *_ = next(dl_iter)
        self.print(f'accum step {grad_accum_step} {data} {data.shape}')

        with self.accelerator.autocast(), context():
            loss, loss_breakdown = self.model(
                data,
                return_loss = True,
                adversarial_loss_weight = adversarial_loss_weight,
                multiscale_adversarial_loss_weight = multiscale_adversarial_loss_weight
            )
            self.print(f'l355 loss {loss.shape} {loss}')
            self.accelerator.backward(loss / self.grad_accum_every)  # stuck here in the last accum step
            self.print('l357 backward')  # this will never print until timeout (only in the last accum iter of the second step)

There is also a warning raised at the same point (the last accumulation backward step), starting from the first step:

UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed.  This is not an error but may impair performance.
grad.sizes() = [32, 64, 1, 1], strides() = [64, 1, 64, 64]
bucket_view.sizes() = [32, 64, 1, 1], strides() = [64, 1, 1, 1]

I'm not sure whether the two problems are related.
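
For reference, the usual explanation for that warning is a non-contiguous gradient (e.g. produced by a permute in the forward pass). A rough, untested workaround sketch, not something from this repo, is to force gradients contiguous before DDP copies them into its flat buckets:

    # assumption: model is the unwrapped module, before DDP construction.
    # returning grad.contiguous() from a tensor hook replaces the gradient
    # with a contiguous copy, which matches the bucket view's strides;
    # this targets the warning only, not necessarily the hang itself
    for param in model.parameters():
        if param.requires_grad:
            param.register_hook(lambda grad: grad.contiguous())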

ziyannchen commented on July 30, 2024

I've done some more debugging. At first I suspected other causes for the hang, such as my Linux kernel being too old to support the latest versions of torch and accelerate, or unsupported mixed precision.

However, it turns out my problem is closely related to https://discuss.pytorch.org/t/torch-distributed-barrier-hangs-in-ddp/114522/7.

The hang is caused by running validation only on the main process. I changed the following line of class VideoTokenizerTrainer in trainer.py:

def valid_step(...):
    # self.model(...)
    # change the line above to call the local model instead of the DDP-wrapped one
    self.model.module(...)

This solved my multi-GPU training hang.
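
For anyone hitting the same thing, an equivalent fix without reaching into .module is accelerate's unwrap_model helper. A simplified sketch (the body below is a stand-in for the real valid_step, not the repo's code):

    import torch

    @torch.no_grad()
    def valid_step(self, dl_iter):
        # unwrap_model returns the underlying model whether or not it is
        # wrapped in DistributedDataParallel, so main-process-only
        # validation no longer waits on a collective the other ranks skip
        unwrapped_model = self.accelerator.unwrap_model(self.model)

        data, *_ = next(dl_iter)
        loss, _ = unwrapped_model(data, return_loss = True)
        return loss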

lucidrains commented on July 30, 2024

@ziyannchen hey, thanks for the debug

do you want to see if 0.4.3 works without your modification?
