
Semi-Amortized Variational Autoencoders

Code for the paper:
Semi-Amortized Variational Autoencoders
Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, Alexander Rush

Dependencies

The code was tested with Python 3.6 and PyTorch 0.2. The h5py package is also required.

Data

The raw datasets can be downloaded from here.

Text experiments use the Yahoo dataset from Yang et al. 2017, which is itself derived from Zhang et al. 2015.

Image experiments use the OMNIGLOT dataset from Lake et al. 2015, with preprocessing from Burda et al. 2015.

Please cite the original papers when using the data.

Text

After downloading the data, run

python preprocess_text.py --trainfile data/yahoo/train.txt --valfile data/yahoo/val.txt --testfile data/yahoo/test.txt --outputfile data/yahoo/yahoo

This will create the *.hdf5 files (data tensors) to be used by the model, as well as the *.dict file which contains the word-to-integer mapping for each word.
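
Since h5py is already a dependency, a quick way to sanity-check the preprocessing output is to list the objects stored in one of the generated *.hdf5 files. This is only a verification sketch; the dataset names inside the file are whatever preprocess_text.py writes, so none are assumed here:

import h5py

# Print the name and a summary of every object stored in the preprocessed
# training file (datasets show their shape and dtype in the repr).
with h5py.File("data/yahoo/yahoo-train.hdf5", "r") as f:
    for name in f:
        print(name, f[name])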

The basic model command is

python train_text.py --train_file data/yahoo/yahoo-train.hdf5 --val_file data/yahoo/yahoo-val.hdf5 --gpu 1 --checkpoint_path model-path

where model-path is the path for saving the best model and the *.hdf5 files are the ones produced by preprocess_text.py. You can specify which GPU to use with the --gpu option.

To train the various models, add the following:

  • Autoregressive (i.e. language model): --model autoreg
  • VAE: --model vae
  • SVI: --model svi --svi_steps 20 --train_n2n 0
  • VAE+SVI: --model savae --svi_steps 20 --train_n2n 0 --train_kl 0
  • VAE+SVI+KL: --model savae --svi_steps 20 --train_n2n 0 --train_kl 1
  • SA-VAE: --model savae --svi_steps 20 --train_n2n 1
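
For example, a full SA-VAE training run combines the base command with the flags above:

python train_text.py --train_file data/yahoo/yahoo-train.hdf5 --val_file data/yahoo/yahoo-val.hdf5 --gpu 1 --checkpoint_path model-path --model savae --svi_steps 20 --train_n2n 1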

The number of SVI steps can be changed with the --svi_steps option.

To evaluate, run

python train_text.py --train_from model-path --test_file data/yahoo/yahoo-test.hdf5 --test 1 --gpu 1

Make sure to append the relevant model configuration flags at test time too.
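
For example, evaluating an SA-VAE checkpoint with its configuration appended looks like:

python train_text.py --train_from model-path --test_file data/yahoo/yahoo-test.hdf5 --test 1 --gpu 1 --model savae --svi_steps 20 --train_n2n 1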

Images

After downloading the data, run

python preprocess_img.py --raw_file data/omniglot/chardata.mat --output data/omniglot/omniglot.pt
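
To verify the preprocessing output before training, the saved file can be loaded back with torch.load. The structure of the saved object is defined by preprocess_img.py, so this sketch only reports what it finds rather than assuming particular keys:

import torch

# Load the preprocessed OMNIGLOT data and report its structure
# (tensor shapes for tensors, Python types otherwise).
data = torch.load("data/omniglot/omniglot.pt")
if isinstance(data, dict):
    for key, value in data.items():
        print(key, getattr(value, "shape", type(value)))
elif isinstance(data, (list, tuple)):
    for i, value in enumerate(data):
        print(i, getattr(value, "shape", type(value)))
else:
    print(type(data))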

To train, the basic command is

python train_img.py --data_file data/omniglot/omniglot.pt --gpu 1 --checkpoint_path model-path

To train the various models, add the following:

  • Autoregressive (i.e. Gated PixelCNN): --model autoreg
  • VAE: --model vae
  • SVI: --model svi --svi_steps 20
  • VAE+SVI: --model savae --svi_steps 20 --train_n2n 0 --train_kl 0
  • VAE+SVI+KL: --model savae --svi_steps 20 --train_n2n 0 --train_kl 1
  • SA-VAE: --model savae --svi_steps 20 --train_n2n 1
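
For example, a full SA-VAE run on images combines the base command with the flags above:

python train_img.py --data_file data/omniglot/omniglot.pt --gpu 1 --checkpoint_path model-path --model savae --svi_steps 20 --train_n2n 1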

To evaluate, run

python train_img.py --train_from model-path --test 1 --gpu 1

Make sure to append the relevant model configuration flags at test time too.

Acknowledgements

Some of our code is based on VAE with a VampPrior.

License

MIT

sa-vae's Issues

Incorrect citation

On page 2 there is a citation that I believe to be incorrect:
"We also find that under our framework, we are
able to utilize a powerful generative model without experiencing
the “posterior-collapse” phenomenon often observed
in VAEs, wherein the variational posterior collapses to the
prior and the generative model ignores the latent variable
(Bowman et al., 2016; Chen et al., 2017; Zhao et al., 2017)."

and the Bowman et al., 2016 entry in the references is:
Bowman, Samuel R., Vilnis, Luke, Vinyal, Oriol, Dai, Andrew M.,
Jozefowicz, Rafal, and Bengio, Samy. Categorical Reparameterization
with Gumbel-Softmax. In Proceedings of CoNLL,
2016

I cannot find the paper online. However, I can find two papers:
Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. "Generating sentences from a continuous space." arXiv preprint arXiv:1511.06349 (2015).

and
Jang, Eric, Shixiang Gu, and Ben Poole. "Categorical reparameterization with gumbel-softmax." arXiv preprint arXiv:1611.01144 (2016).

RNN backward function in evaluation error

Trying to run the sample code but getting the error below.
Any help is appreciated.

My environment is:
Ubuntu 16.04.6 LTS
PyTorch 1.2.0
GPU: Titan V
CUDA version 10.1

Traceback (most recent call last):
  File "train_text.py", line 357, in <module>
    main(args)
  File "train_text.py", line 253, in main
    val_nll = eval(val_data, model, meta_optimizer)
  File "train_text.py", line 327, in eval
    var_params_svi = meta_optimizer.forward([mean_svi, logvar_svi], sents)
  File "/home/ubuntu/Program/sa-vae/optim_n2n.py", line 43, in forward
    return self.forward_mom(input, y, verbose)
  File "/home/ubuntu/Program/sa-vae/optim_n2n.py", line 90, in forward_mom
    all_grads_k = torch.autograd.grad(loss, all_input_params, retain_graph = True)
  File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/autograd/__init__.py", line 149, in grad
    inputs, allow_unused)
RuntimeError: cudnn RNN backward can only be called in training mode
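
A minimal workaround sketch, not part of the repository: cuDNN's fused RNN backward is only available in training mode, so disabling cuDNN lets autograd fall back to an RNN backward that also works in eval mode (at some speed cost):

import torch

# Assumed workaround: turn cuDNN off (e.g. near the top of train_text.py)
# so that gradients through the RNN can be computed in eval mode.
torch.backends.cudnn.enabled = False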

Teacher forcing in eval?

Hi Yoon Kim.
Looking at the eval function in train_text.py, doesn't this mean teacher forcing is used at test time as well?

I believe I'm missing something.
Thanks!

About implementation

Hi. I am trying to run the source code with PyTorch 0.4.1 and encountered a problem:

cudnn RNN backward can only be called in training mode

Did you encounter this problem? Is it a PyTorch version issue?

How can we generate new samples with the model?

Hi Yoon Kim,

In your paper, I can see how SVI helps to reduce the amortization gap, but it requires the data x to be available. When the VAE is used as a generative model, x is not available, so SVI cannot be performed to improve the variational parameters. If we just sample lambda from the prior and feed it to the decoder, my feeling is that the result will be worse than that of a VAE, because the variational distribution is further from the prior than before.
Does that mean SA-VAE cannot be used as a generative model?

I am also not so sure about the evaluation procedure for the text data. Does the input sentence itself serve as the target for SVI? Re-estimating the variational parameters at test time will make the model assign higher probability to every sentence, including sentences which don't make sense. I think a good language model should assign higher probability to good sentences and lower probability to bad ones. I am afraid that SA-VAE is assigning higher probability to every sentence, regardless of quality.

Can you please help me to address these concerns?
Thank you :).

Run VAE model returns error

Hello,
I downloaded the dataset and then ran

python preprocess_text.py --trainfile data/yahoo/train.txt --valfile data/yahoo/val.txt --testfile data/yahoo/test.txt --outputfile data/yahoo/yahoo

Then, when I run

python train_text.py --train_file data/yahoo/yahoo-train.hdf5 --val_file data/yahoo/yahoo-val.hdf5 --gpu 1 --checkpoint_path vae_model --model vae

I get the following error:

python train_text.py --train_file data/yahoo/yahoo-train.hdf5 --val_file data/yahoo/yahoo-val.hdf5 --gpu 1 --model vae --checkpoint_path vae_model
Train data: 3211 batches
Val data: 398 batches
Word vocab size: 20001
model architecture
RNNVAE(
  (enc_word_vecs): Embedding(20001, 512)
  (latent_linear_mean): Linear(in_features=1024, out_features=32, bias=True)
  (latent_linear_logvar): Linear(in_features=1024, out_features=32, bias=True)
  (enc_rnn): LSTM(512, 1024, batch_first=True)
  (enc): ModuleList(
    (0): Embedding(20001, 512)
    (1): LSTM(512, 1024, batch_first=True)
    (2): Linear(in_features=1024, out_features=32, bias=True)
    (3): Linear(in_features=1024, out_features=32, bias=True)
  )
  (dec_word_vecs): Embedding(20001, 512)
  (dec_rnn): LSTM(544, 1024, batch_first=True)
  (dec_linear): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=1024, out_features=20001, bias=True)
    (2): LogSoftmax()
  )
  (dropout): Dropout(p=0.5, inplace=False)
  (dec): ModuleList(
    (0): Embedding(20001, 512)
    (1): LSTM(544, 1024, batch_first=True)
    (2): Sequential(
      (0): Dropout(p=0.5, inplace=False)
      (1): Linear(in_features=1024, out_features=20001, bias=True)
      (2): LogSoftmax()
    )
    (3): Linear(in_features=32, out_features=1024, bias=True)
  )
  (latent_hidden_linear): Linear(in_features=32, out_features=1024, bias=True)
)

Starting epoch 1
Traceback (most recent call last):
  File "train_text.py", line 341, in <module>
    main(args)
  File "train_text.py", line 187, in main
    train_nll_vae += nll_vae.data[0]*batch_size
IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number

Any suggestions on how can I fix it?
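
The error message itself points at the fix: on PyTorch 0.4+ a 0-dimensional tensor must be converted with .item() rather than indexed with .data[0]. A minimal sketch of the change in train_text.py, using the names from the traceback above (the same change likely applies to the other .data[0] accumulations in the file):

# Before (fails on newer PyTorch versions):
#   train_nll_vae += nll_vae.data[0]*batch_size
# After:
train_nll_vae += nll_vae.item()*batch_size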
