
pytorch-vqvae's Introduction

Reproducing Neural Discrete Representation Learning

Project Report link: final_project.pdf

Instructions

  1. To train the VQVAE with default arguments as discussed in the report, execute:
python vqvae.py --data-folder /tmp/miniimagenet --output-folder models/vqvae
  2. To train the PixelCNN prior on the latents, execute:
python pixelcnn_prior.py --data-folder /tmp/miniimagenet --model models/vqvae --output-folder models/pixelcnn_prior

Datasets Tested

Image

  1. MNIST
  2. FashionMNIST
  3. CIFAR10
  4. Mini-ImageNet

Video

  1. Atari 2600 - Boxing (OpenAI Gym)

Reconstructions from VQ-VAE

Top 4 rows are Original Images. Bottom 4 rows are Reconstructions.

MNIST

[image: MNIST reconstructions]

Fashion MNIST

[image: Fashion MNIST reconstructions]

Class-conditional samples from VQVAE with PixelCNN prior on the latents

MNIST

[image: MNIST class-conditional samples]

Fashion MNIST

[image: Fashion MNIST class-conditional samples]

Comments

  1. We noticed that implementing our own VectorQuantization PyTorch function sped up training of the VQ-VAE by nearly 3x. The slower but simpler code is in this commit (a sketch of such a function appears after this list).
  2. We added some basic tests for the vector quantization functions (based on pytest). To run these tests, execute:
py.test . -vv
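
For readers curious what such a custom function can look like, here is a minimal, hypothetical sketch of straight-through vector quantization as a torch.autograd.Function. The names and details are ours, not the exact code in this repository:

import torch

class VectorQuantizationStraightThrough(torch.autograd.Function):
    # Sketch only: nearest-codebook lookup in the forward pass; the backward
    # pass copies the gradient straight through to the encoder output.

    @staticmethod
    def forward(ctx, inputs, codebook):
        # inputs: (N, D) flattened encoder outputs; codebook: (K, D)
        with torch.no_grad():
            distances = torch.cdist(inputs, codebook)  # (N, K)
            indices = distances.argmin(dim=1)          # (N,)
        ctx.save_for_backward(indices, codebook)
        ctx.mark_non_differentiable(indices)
        return codebook[indices], indices

    @staticmethod
    def backward(ctx, grad_output, grad_indices):
        indices, codebook = ctx.saved_tensors
        grad_inputs = grad_codebook = None
        if ctx.needs_input_grad[0]:
            grad_inputs = grad_output.clone()  # straight-through copy
        if ctx.needs_input_grad[1]:
            # Accumulate output gradients into the selected codebook rows
            grad_codebook = torch.zeros_like(codebook)
            grad_codebook.index_add_(0, indices, grad_output)
        return grad_inputs, grad_codebook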

Authors

  1. Rithesh Kumar
  2. Tristan Deleu
  3. Evan Racah


pytorch-vqvae's Issues

Not able to train the PixelCNN

Hello, thanks for your work.
I am running into some issues when training your models. I followed your instructions.

First, I think you may have forgotten to add the '--dataset' argument in both of your commands.
Second, I think you forgot to import datasets from torchvision in pixelcnn_prior.py.

Eventually, running:

python3 pixelcnn_prior.py --data-folder /tmp/miniimagenet --output-folder models/vqvae --dataset mnist

results in:

AttributeError: 'MNIST' object has no attribute '_label_encoder'

And I have the same issue with the CIFAR dataset.

One question about backward grad.

Hi, thanks for your implementation!
I'm now trying to implement the audio experiments of VQ-VAE, but when trying to imitate your code, there is something I'm confused about:

  • I wrote my own VQEmbedding module. My code is:
import torch
import torch.nn as nn

class VQEmbedding(nn.Module):
    def __init__(self):
        super().__init__()
        # hp is my hyperparameter config: K codebook entries of dimension D
        self.embedding = nn.Embedding(hp.K, hp.D)
        self.embedding.weight.data.uniform_(-1. / hp.K, 1. / hp.K)

    def forward(self, z_e_x):
        # z_e_x - (B, D, T)
        # emb   - (K, D)
        emb = self.embedding.weight
        dists = torch.pow(z_e_x.unsqueeze(1) - emb[None, :, :, None], 2)

        # argmin over the codebook axis: this returns indices, not embeddings
        z_q_x = dists.min(1)[1].float()
        return z_q_x

So my z_q_x and z_e_x have the same shape, say (1, 256, 16000) (batch, dim, length).

But when I train the model by computing the gradients:

optimizer.zero_grad()
x_recon, z_e_x, z_q_x = model(qt_var, speaker_var)
z_q_x.retain_grad()

loss_recon = cross_entropy_loss(x_recon.view(hp.BATCH_SIZE, hp.Q, -1),
                                quantized_audio.view(hp.BATCH_SIZE, -1).long())

loss_recon.backward(retain_graph=True)

# Straight-through estimator: copy the gradient from z_q_x onto z_e_x
z_e_x.backward(z_q_x.grad, retain_graph=True)

This error occurs:

RuntimeError: grad can be implicitly created only for scalar outputs

It means my z_q_x has no grad. Actually, because I did some quantization work, my z_q_x and z_e_x are LongTensors; is this the reason there is no grad?
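
A quick self-contained check of that hypothesis, assuming nothing beyond plain PyTorch: integer tensors cannot carry gradients, so an argmin result is detached from the autograd graph entirely.

import torch

x = torch.randn(4, requires_grad=True)
idx = x.argmin()           # LongTensor of indices, not values
print(idx.requires_grad)   # False - no gradient can flow through indices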

loss

I want to understand this line of the loss code:
log_px = nll.mean().item() - np.log(128) + kl_d.item()
In this line, is the 128 in np.log(128) the value of Z_DIM?

labels in the PixelCNN

Hi,
I have trained the VQVAE network on my own dataset, comprising 10,000 images of 64×64 pixels, without any labels. In order to train the PixelCNN network, I faked some labels like this:
label_set = torch.zeros((10000, 1), dtype=torch.int64)
However, the shape of my faked labels does not seem to fit the code. In modules.py, there is the line out_v = self.gate(h_vert + h[:, :, None, None]) in GatedMaskedConv2d.forward, where h comes from the label. This way, the shape of h_vert is (batch, 2×dim, 16, 16), but the shape of h is (batch, 1, 2×dim).
So can anyone tell me how to deal with the labels?
Thanks.
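
One possible fix, sketched under the assumption that the PixelCNN embeds a 1-D LongTensor of class indices: pass labels of shape (batch,) rather than (batch, 1), so the embedded h has shape (batch, 2×dim) and h[:, :, None, None] broadcasts against h_vert.

import torch

# Fake labels as a 1-D LongTensor, one class index per image (assumption:
# the label embedding expects shape (batch,), not (batch, 1))
label_set = torch.zeros(10000, dtype=torch.int64)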

Bug in VQ function?

Nice implementation of the VQ straight-through function!

However, when looking at the autograd graph, there is an edge that breaks the separation of the gradients for the reconstruction loss and the VQ loss. As a result, the reconstruction loss is also updating the embedding, which should not happen. I tried to figure out why that happens, but my understanding of PyTorch isn't that thorough. Do you perhaps have an idea?

I marked the edge here.

Inquiries for using the code

Dear ritheshkumar95,

We want to express our gratitude for your implementation of the PyTorch VQ-VAE. Thanks to your work, we were able to develop and publish our own model, TVQ-VAE, which has been accepted for presentation at the AAAI-24 conference (https://arxiv.org/abs/2312.11532).

We would like to request your permission to publish our implementation code, which was inspired by your work. Rest assured, we will properly cite your repository in our implementation as a reference.

Thank you for your contribution and support.

Best regards,

Example for raw audio

Hello, and thanks for the code! I want to replicate the audio results from the paper, but the DeepMind repo does not have a VQ-VAE example for audio (see google-deepmind/sonnet#141), and the audio setup seems quite different from the CIFAR one:

We train a VQ-VAE where the encoder has 6 strided convolutions with stride 2 and window-size 4. This yields a latent space 64x smaller than the original waveform. The latents consist of one feature map and the discrete space is 512-dimensional.

Could you please include an example of using your code for audio?
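
Until such an example exists, here is a hypothetical sketch of the encoder described in the quote: six strided 1-D convolutions with stride 2 and window size 4, giving 2^6 = 64x temporal downsampling. The channel width is an assumption, not a value from the paper.

import torch.nn as nn

def make_audio_encoder(hidden_dim=256):
    # hidden_dim is a guess; the paper only fixes stride, window size, depth
    layers, in_ch = [], 1  # raw mono waveform as a single input channel
    for _ in range(6):
        layers += [nn.Conv1d(in_ch, hidden_dim, kernel_size=4,
                             stride=2, padding=1),
                   nn.ReLU()]
        in_ch = hidden_dim
    return nn.Sequential(*layers)  # output length = input length / 64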

Distance calculation

Can you please explain how you compute the distance between the codebook and the inputs? In functions.py, you use this line:
distances = torch.addmm(codebook_sqr + inputs_sqr, inputs_flatten, codebook.t(), alpha=-2.0, beta=1.0)

I am unable to understand how this gives the Euclidean distance between the inputs and the codebook.
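
For what it's worth, the line appears to rely on the expansion ||x - c||^2 = ||x||^2 + ||c||^2 - 2 x·c, with addmm supplying the cross term. A small self-contained check (shapes are illustrative):

import torch

N, K, D = 4, 8, 16
inputs_flatten = torch.randn(N, D)   # N flattened input vectors
codebook = torch.randn(K, D)         # K codebook entries

inputs_sqr = inputs_flatten.pow(2).sum(dim=1, keepdim=True)  # (N, 1): ||x||^2
codebook_sqr = codebook.pow(2).sum(dim=1)                    # (K,):   ||c||^2

# addmm computes beta * M + alpha * (A @ B): beta=1.0 keeps the squared
# norms, alpha=-2.0 supplies the cross term -2 x.c
distances = torch.addmm(codebook_sqr + inputs_sqr, inputs_flatten,
                        codebook.t(), alpha=-2.0, beta=1.0)   # (N, K)

# Agrees with the direct pairwise squared Euclidean distance
direct = (inputs_flatten[:, None, :] - codebook[None, :, :]).pow(2).sum(-1)
assert torch.allclose(distances, direct, atol=1e-5)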

How to generate images from the PixelCNN?

The PixelCNN learns to model the prior q(z) in the paper and the code. For any given classes/labels, the PixelCNN should model their prior q(z), as shown in the code:

def generate(self, label, shape=(8, 8), batch_size=64):

The prior here is actually the indices of some codes in the codebook.

I first generate the indices for some given classes, i.e. q(z) = GatedPixelCNN.generate(label). After I get the indices q(z), I try to generate images from them using the decoder in the VQ-VAE:

def decode(self, latents):

i.e. images = VectorQuantizedVAE.decode(q(z)). However, these generated images look very unrealistic, unlike the reconstruction results.

Can we evaluate the PixelCNN based on the generated images? How can I get realistic images from the prior generated by the PixelCNN?

Best wishes!
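
For reference, here is the pipeline described above as a minimal sketch, where prior and vqvae stand for a trained GatedPixelCNN and a trained VectorQuantizedVAE (the variable names are ours):

import torch

labels = torch.zeros(64, dtype=torch.long)  # e.g. 64 samples of class 0
# Sample a grid of codebook indices from the class-conditional prior
latents = prior.generate(labels, shape=(8, 8), batch_size=64)
with torch.no_grad():
    images = vqvae.decode(latents)  # map code indices back to image space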
