
tacotron's Introduction

A (Heavily Documented) TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model

Requirements

  • NumPy >= 1.11.1
  • TensorFlow >= 1.3
  • librosa
  • tqdm
  • matplotlib
  • scipy

Data

We train the model on three different speech datasets.

  1. LJ Speech Dataset
  2. Nick Offerman's Audiobooks
  3. The World English Bible

The LJ Speech Dataset has recently become a widely used benchmark for TTS because it is publicly available. It contains 24 hours of reasonable-quality samples. Nick's audiobooks (18 hours) are used in addition, to see whether the model can learn even from less, and more variable, speech data. The World English Bible is a public-domain update of the American Standard Version of 1901 into modern English; its original audio is freely available. Kyubyong split each chapter by verse manually and aligned the segmented audio clips to the text, 72 hours in total. You can download the datasets at Kaggle Datasets.

Training

  • STEP 0. Download LJ Speech Dataset or prepare your own data.
  • STEP 1. Adjust hyperparameters in hyperparams.py. (If you want to do preprocessing, set prepro to True.)
  • STEP 2. Run python train.py. (If you set prepro to True, run python prepro.py first.)
  • STEP 3. Run python eval.py regularly during training.
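The steps above hinge on a few settings in hyperparams.py. As a minimal sketch of what such a file might hold — only `prepro` is named in this README; the other field names and values are illustrative assumptions, not the repo's actual ones:

```python
# Illustrative sketch of a hyperparams.py module. Only `prepro` appears in
# this README; the remaining fields and values are assumptions.
class Hyperparams:
    prepro = True    # if True, run `python prepro.py` before `python train.py`
    lr = 0.001       # initial learning rate (0.001 trained cleaner than 0.002; see Notes)
    r = 5            # reduction factor: mel frames predicted per decoder step
    batch_size = 32  # illustrative value

hp = Hyperparams()
print("prepro:", hp.prepro, "r:", hp.r)
```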

Sample Synthesis

We generate speech samples based on the Harvard Sentences, as the original paper does; the sentence list is already included in the repo.

  • Run python synthesize.py and check the files in samples.

Training Curve

Attention Plot

Generated Samples

Pretrained Files

  • Keep in mind 200k steps may not be enough for the best performance.
  • LJ 200k
  • WEB 200k

Notes

  • It's important to monitor the attention plots during training. If the attention plots look good (the alignment looks linear) and then turn bad (resembling what they looked like at the beginning of training), training has gone awry and will most likely need to be restarted from a checkpoint where the attention still looked good; in our experience the loss is unlikely to ever recover. This deterioration of attention corresponds with a spike in the loss.
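A minimal way to render such attention plots with matplotlib (already in the requirements); the alignment array shape, function name, and file name here are assumptions for illustration, not the repo's actual utilities:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # write to file; no display needed on a training server
import matplotlib.pyplot as plt

def plot_alignment(alignment, path):
    """Save an attention plot (hypothetical helper).

    `alignment` is (decoder_steps, encoder_steps); a healthy plot shows a
    roughly diagonal (linear) band.
    """
    fig, ax = plt.subplots()
    ax.imshow(alignment.T, aspect="auto", origin="lower")
    ax.set_xlabel("decoder step")
    ax.set_ylabel("encoder step")
    fig.savefig(path)
    plt.close(fig)

# toy example: a perfectly diagonal (ideal) alignment
plot_alignment(np.eye(50), "alignment.png")
```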

  • In the original paper, the authors say, "An important trick we discovered was predicting multiple, non-overlapping output frames at each decoder step", where the number of frames is the reduction factor, r. We originally interpreted this as predicting non-sequential frames during each decoding step t. Thus we were using the following scheme (with r=5) during decoding.

    t    frame numbers
    -----------------------
    0    [ 0  5 10 15 20]
    1    [ 1  6 11 16 21]
    2    [ 2  7 12 17 22]
    ...
    

    After much experimentation, we were unable to get our model to learn anything useful. We then switched to predicting r sequential frames during each decoding step.

    t    frame numbers
    -----------------------
    0    [ 0  1  2  3  4]
    1    [ 5  6  7  8  9]
    2    [10 11 12 13 14]
    ...
    

    With this setup we noticed improvements in the attention and have since kept it.
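The two groupings can be sketched with NumPy; `group_frames` is a hypothetical helper written for this note, not a function from this repo:

```python
import numpy as np

def group_frames(num_frames, r, sequential=True):
    """Indices of the r frames predicted at each decoder step t (hypothetical helper).

    sequential=True  -> r consecutive frames per step (the scheme that worked)
    sequential=False -> r strided, non-sequential frames per step (the first attempt)
    """
    steps = num_frames // r
    idx = np.arange(steps * r)
    if sequential:
        return idx.reshape(steps, r)  # row t: [t*r, ..., t*r + r - 1]
    return idx.reshape(r, steps).T    # row t: [t, t + steps, t + 2*steps, ...]

print(group_frames(25, 5, sequential=True)[0])   # consecutive frames
print(group_frames(25, 5, sequential=False)[0])  # strided frames
```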

  • Perhaps the most important hyperparameter is the learning rate. With an initial learning rate of 0.002 we were never able to learn a clean attention; the loss would frequently explode. With an initial learning rate of 0.001 we were able to learn a clean attention, train for much longer, and get discernible words during synthesis.

  • Check out other TTS models such as DCTTS or Deep Voice 3.

Differences from the original paper

  • We use Noam style warmup and decay.
  • We implement gradient clipping.
  • Our training batches are bucketed.
  • After the last convolutional layer of the post-processing net, we apply an affine transformation to bring the dimensionality up to 128 from 80, because the required dimensionality of highway net is 128. In the original highway networks paper, the authors mention that the dimensionality of the input can also be increased with zero-padding, but they used the affine transformation in all their experiments. We do not know what the Tacotron authors chose.
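The Noam-style warmup and decay mentioned above can be sketched as follows; the `warmup_steps` value and exact scaling are assumptions, not constants taken from this repo:

```python
def noam_lr(step, warmup_steps=4000, init_lr=0.001):
    """Noam-style schedule (sketch, assumed constants): linear warmup for
    `warmup_steps`, then decay proportional to step**-0.5, scaled so the
    peak rate init_lr is reached exactly at `warmup_steps`."""
    step = max(step, 1)
    return init_lr * warmup_steps ** 0.5 * min(step * warmup_steps ** -1.5,
                                               step ** -0.5)

# The rate rises during warmup and decays afterwards.
print(noam_lr(10), noam_lr(4000), noam_lr(100000))
```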

Papers that referenced this repo

Jan. 2018, Kyubyong Park & Tommy Mulc

tacotron's People

Contributors

betterenvi, candlewill, kyubyong, mimbres, spotlight0xff


tacotron's Issues

librosa version issues : 'Size mismatch between n_fft and window size'

If the librosa version is < 0.5, this error can be raised; it is just a version problem.

With version 0.4.3, even though 'hann' is the default window, the function raises an error when window="hann" is passed explicitly.

Either update librosa to the latest version (0.5 at the time of writing), or pass hop_length as a keyword argument and drop window="hann":

    # before
    def invert_spectrogram(spectrogram):
        return librosa.istft(spectrogram.T, hp.hop_length, win_length=hp.win_length, window="hann")

    # after
    def invert_spectrogram(spectrogram):
        return librosa.istft(spectrogram.T, hop_length=hp.hop_length, win_length=hp.win_length)

eval.py error

Running python eval.py in ~/tacotron (on master) produces:
Graph loaded
2017-06-11 04:22:40.230524: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-11 04:22:40.230560: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-11 04:22:40.230565: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-11 04:22:40.230590: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-11 04:22:40.230596: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-11 04:22:40.319841: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-06-11 04:22:40.320116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:01:00.0
Total memory: 3.95GiB
Free memory: 3.60GiB
2017-06-11 04:22:40.320130: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-11 04:22:40.320134: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-11 04:22:40.320153: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M, pci bus id: 0000:01:00.0)
WARNING:tensorflow:Standard services need a 'logdir' passed to the SessionManager
2017-06-11 04:23:51.726739: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/highwaynet_2/dense2/bias not found in checkpoint
2017-06-11 04:23:51.726739: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/highwaynet_3/dense1/bias not found in checkpoint
(…followed by many similar "Not found in checkpoint" warnings for the remaining encoder/decoder variables)

These warnings mean the variables in the current graph do not exist in the checkpoint being restored; this typically happens when the model code has changed since the checkpoint was saved, so the variable names no longer match.
2017-06-11 04:23:51.830252: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_5/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.831061: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder1/prenet/dense1/kernel not found in checkpoint
2017-06-11 04:23:51.831519: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_5/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.831819: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_5/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.834723: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/highwaynet_0/dense1/kernel not found in checkpoint
2017-06-11 04:23:51.834823: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_6/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.835854: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_5/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.836123: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_6/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.836597: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_6/normalize/beta not found in checkpoint
2017-06-11 04:23:51.836692: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_6/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.836778: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder1/prenet/dense1/bias not found in checkpoint
2017-06-11 04:23:51.837524: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_5/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.838518: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_6/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.838617: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_7/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.839402: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/highwaynet_0/dense1/bias not found in checkpoint
2017-06-11 04:23:51.839452: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_5/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.840713: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_7/normalize/beta not found in checkpoint
2017-06-11 04:23:51.840798: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_6/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.840875: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_8/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.841344: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_6/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.841399: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_6/normalize/beta not found in checkpoint
2017-06-11 04:23:51.841449: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_7/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.843741: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_7/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.843952: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_7/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.844243: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/fw/gru_cell/gates/weights not found in checkpoint
2017-06-11 04:23:51.845037: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key decoder2/conv1d_banks/num_7/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.845137: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_7/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.845398: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_6/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.846067: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_7/normalize/beta not found in checkpoint
2017-06-11 04:23:51.846391: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/fw/gru_cell/gates/biases not found in checkpoint
2017-06-11 04:23:51.846549: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_7/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.846789: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_8/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.847836: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_6/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.847907: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_8/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.848012: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_7/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.849559: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_8/normalize/beta not found in checkpoint
2017-06-11 04:23:51.849627: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_8/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.849656: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_8/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.852251: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_9/normalize/moving_mean not found in checkpoint
2017-06-11 04:23:51.852984: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_9/conv1d/conv1d/kernel not found in checkpoint
2017-06-11 04:23:51.853197: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/fw/gru_cell/candidate/weights not found in checkpoint
2017-06-11 04:23:51.854297: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_9/normalize/gamma not found in checkpoint
2017-06-11 04:23:51.854955: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_9/normalize/beta not found in checkpoint
2017-06-11 04:23:51.855167: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/bw/gru_cell/gates/biases not found in checkpoint
2017-06-11 04:23:51.855385: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/bw/gru_cell/gates/weights not found in checkpoint
2017-06-11 04:23:51.855455: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/embedding/lookup_table not found in checkpoint
2017-06-11 04:23:51.855525: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/conv1d_banks/num_9/normalize/moving_variance not found in checkpoint
2017-06-11 04:23:51.855979: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/bw/gru_cell/candidate/biases not found in checkpoint
2017-06-11 04:23:51.856067: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/bw/gru_cell/candidate/weights not found in checkpoint
2017-06-11 04:23:51.856384: W tensorflow/core/framework/op_kernel.cc:1152] Not found: Key encoder/gru/bidirectional_rnn/fw/gru_cell/candidate/biases not found in checkpoint
Traceback (most recent call last):
  File "eval.py", line 71, in <module>
    eval()
  File "eval.py", line 38, in eval
    sv.saver.restore(sess, tf.train.latest_checkpoint(hp.logdir))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1457, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key decoder2/highwaynet_2/dense2/bias not found in checkpoint
     [[Node: save/RestoreV2_87 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_87/tensor_names, save/RestoreV2_87/shape_and_slices)]]
     [[Node: save/RestoreV2_22/_631 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_1470_save/RestoreV2_22", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

Caused by op u'save/RestoreV2_87', defined at:
  File "eval.py", line 71, in <module>
    eval()
  File "eval.py", line 35, in eval
    sv = tf.train.Supervisor()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/supervisor.py", line 300, in __init__
    self._init_saver(saver=saver)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/supervisor.py", line 446, in _init_saver
    saver = saver_mod.Saver()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1056, in __init__
    self.build()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1086, in build
    restore_sequentially=self._restore_sequentially)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 691, in build
    restore_sequentially, reshape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 247, in restore_op
    [spec.tensor.dtype])[0])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 669, in restore_v2
    dtypes=dtypes, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Key decoder2/highwaynet_2/dense2/bias not found in checkpoint
     [[Node: save/RestoreV2_87 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_87/tensor_names, save/RestoreV2_87/shape_and_slices)]]
     [[Node: save/RestoreV2_22/_631 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_1470_save/RestoreV2_22", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

`

Error in running eval.py

I have trained on a single person's speech. When I run python eval.py, I get this error:

Traceback (most recent call last):
  File "eval.py", line 70, in <module>
    eval()
  File "eval.py", line 39, in eval
    sv.saver.restore(sess, tf.train.latest_checkpoint(hp.logdir))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1548, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key net/encoder/norm2/Variable not found in checkpoint
     [[Node: save/RestoreV2_117 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_117/tensor_names, save/RestoreV2_117/shape_and_slices)]]
     [[Node: save/RestoreV2_71/_453 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_442_save/RestoreV2_71", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

Error executing train.py

I am getting an error when I execute train.py

Traceback (most recent call last):
  File "train.py", line 277, in <module>
    main()
  File "train.py", line 255, in main
    g = Graph(is_training=True); print("Training Graph loaded")
  File "train.py", line 234, in __init__
    self.memory = encode(self.x, is_training=is_training)
  File "/SpeakerID-IIT/tts/voices/taco/tacotron-master/networks.py", line 47, in encode
    memory = gru(enc, hp.embed_size//2, True) # (N, T, 128*2)
  File "/SpeakerID-IIT/tts/voices/taco/tacotron-master/modules.py", line 103, in gru
    outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell, cell_bw, inputs, dtype=tf.float32)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 363, in bidirectional_dynamic_rnn
    seq_dim=time_dim, batch_dim=batch_dim)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 2346, in reverse_sequence
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 2776, in reverse_sequence
    batch_dim=batch_dim, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 504, in apply_op
    values, as_ref=input_arg.is_ref).dtype.name
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 360, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.

Dropbox [SOLVED]

Can you share your Dropbox with the already-cut training files? Thanks.

Global steps vs epochs

Can someone explain the difference between epochs and global steps? I'm training for a few thousand epochs, but the global step stays low.
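A global step counts optimizer updates (one per batch), while an epoch counts full passes over the dataset, so each epoch contains only dataset-size // batch-size steps. On a small dataset, many epochs can still mean a modest global-step count. A sketch with hypothetical numbers (not taken from this repo):

```python
# Illustrative values; names are hypothetical.
num_samples = 12326   # text/audio pairs in the training set
batch_size = 32

# One global step = one optimizer update on one batch.
# One epoch = one full pass over the training set.
steps_per_epoch = num_samples // batch_size
epochs = 1000
global_steps = epochs * steps_per_epoch

print(steps_per_epoch)   # few steps per epoch on a small dataset
print(global_steps)
```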

Error running eval.py

I have trained the model on the full (Bible) dataset and got model_epoch_40_gs_80000.* files in the "logdir_s" directory. When I ran eval.py, I got the "Size mismatch between n_fft and window size" error below.

Has anybody experienced a similar problem?

$ python eval.py
...
2017-07-11 19:29:59.842658: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
WARNING:tensorflow:Standard services need a 'logdir' passed to the SessionManager
Restored!
Traceback (most recent call last):
  File "eval.py", line 69, in <module>
    eval()
  File "eval.py", line 62, in eval
    audio = spectrogram2wav(np.power(np.e, s)**hp.power)
  File "/home/abc/Tacotron/tacotron/utils.py", line 93, in spectrogram2wav
    X_t = invert_spectrogram(X_best)
  File "/home/abc/Tacotron/tacotron/utils.py", line 105, in invert_spectrogram
    return librosa.istft(spectrogram, hp.hop_length, win_length=hp.win_length, window="hann")
  File "/usr/local/lib/python2.7/dist-packages/librosa/core/spectrum.py", line 294, in istft
    raise ParameterError('Size mismatch between n_fft and window size')
librosa.util.exceptions.ParameterError: Size mismatch between n_fft and window size
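One plausible cause (an assumption, not confirmed for this report) is a mismatch between the n_fft implied by the saved spectrograms and the win_length passed to librosa.istft, which infers n_fft from the number of frequency bins; an older librosa that cannot interpret the given window argument can also raise this error. The basic consistency constraints, with illustrative hyperparameter values:

```python
# Illustrative values; only the relationships between them matter.
n_fft = 2048
win_length = 2048      # must not exceed n_fft at synthesis time
hop_length = win_length // 4

freq_bins = 1 + n_fft // 2             # rows of the magnitude spectrogram
assert 2 * (freq_bins - 1) == n_fft    # how istft recovers n_fft
assert win_length <= n_fft
print(freq_bins)
```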

Not training

Hi guys,

For some reason, this bit of code is not running on our dataset:

for step in tqdm(range(g.num_batch), total=g.num_batch, ncols=70, leave=False, unit='b'):
    sess.run(g.train_op)

We have about 170 audio files of roughly 30 seconds each. The script finishes running the main program and prints "Done", but I don't think any training is happening. Any thoughts?

How is the performance?

Can you provide some results produced by your code? I'm also trying to reimplement this paper, so I want to compare yours with mine to check whether any details were missed. Thank you.

Dimensionality change

In the Tacotron paper, the expected dimensionality of the tensor through most of the encoder CBHG module is batch size x time steps x num features, where num features = 128. Why do you shift to 256 before the bidirectional GRU layer? It seems this would result in a real loss of information during encoding. Is this somehow what is described in the paper and I am just missing it? Thanks.

Data amount

Although more than 30,000 samples are available, only 12,326 pairs of data are used (those whose length is between 10 and 100).

This may not be so important, but you could make better use of the training data by changing the size limit.
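The length limit described above amounts to a simple filter over the (text, audio) pairs; a sketch with hypothetical data:

```python
# Hypothetical (text, wav) pairs; only the length filter is the point.
pairs = [
    ("hi", "00001.wav"),                                   # too short: dropped
    ("the seven angels prepared to sound", "00002.wav"),   # kept
    ("x" * 200, "00003.wav"),                              # too long: dropped
]
kept = [(text, wav) for text, wav in pairs if 10 <= len(text) <= 100]
print(len(kept))
```

Relaxing the bounds in the comprehension is what "changing the size limit" would look like.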

Using only two decoder GRU?

Hi,
in network.py function decode1,

dec_ = attention_decoder(dec, hp.embed_size, memory, variable_scope="attention_decoder1") # (N, T', 256)
dec = dec_ + attention_decoder(dec_, hp.embed_size, memory, variable_scope="attention_decoder2") # (N, T', 256) # residual connections

Tacotron uses an attention RNN (1-layer GRU) and a decoder RNN (2-layer GRU). The attention in Tacotron scores the memory (from the encoder GRU) with the query (from the attention RNN) and produces a context vector.
This context vector is concatenated with the attention RNN's cell output to form the decoder RNN's input.
The architecture in Figure 1 of the linked paper seems to be the same as Tacotron's, except that it has 8 layers (https://arxiv.org/pdf/1609.08144.pdf).

The context vector should be shared by the two decoder layers, but two attention decoders with different variable_scopes will calculate the context vector twice.

raising the predicted magnitudes by a power of 1.2, not input magnitudes

I may be wrong about this because I haven't had time to study it in detail, but the paper says: "raising the predicted magnitudes by a power of 1.2 before feeding to Griffin-Lim reduces artifacts, likely due to its harmonic enhancement effect". In the code, I see that the input (training-data) magnitudes are raised to the power of 1.2, but from the statement above it seems this should be done to the output values, just before spectrogram2wav. Nothing is said about raising input magnitudes to the power of 1.2.

Difference Between Current Code and Original Paper

  1. Learning rate decay. In the original paper, the learning rate starts at 0.001 and is reduced to 0.0005, 0.0003, and 0.0001 after 500K, 1M, and 2M global steps, respectively, whereas the code uses a fixed learning rate of 0.001.

  2. no batch-norm for conv1d in encoder (#12).

  3. wrong size of conv1d in CBHG, Post-processing net (#13)

  4. CBHG structure in post-processing net does not use residual connection. This may be a compromise, because the residuals are added only if the dimensions are the same. The original paper is unclear.

  5. The last layer of the decoder uses a fully connected layer to predict the mel spectrogram. The paper says that predicting r frames at each decoder step is an important trick. It is unclear whether T = T' or T != T' in the process of [N, T, C] -> [N, T', C * r]. The code keeps T = T', but it is also possible that T' = T / r with frame reduction.

  6. Decoder input problem. The paper says that, at inference, only the last frame of the r predictions is fed into the decoder (except for the last step). However, the code uses all of the r frames. There is the same problem during training: every r-th ground-truth frame is fed into the decoder, rather than all of the r frames.
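For item 1, the paper's schedule can be written as a piecewise-constant function (a plain-Python sketch, not code from this repo):

```python
def paper_learning_rate(global_step):
    """Learning-rate decay from the Tacotron paper: 0.001 initially,
    then 0.0005, 0.0003, and 0.0001 after 500K, 1M, and 2M steps."""
    if global_step < 500000:
        return 0.001
    if global_step < 1000000:
        return 0.0005
    if global_step < 2000000:
        return 0.0003
    return 0.0001

print(paper_learning_rate(0), paper_learning_rate(750000))
```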

Voice Synthesis Discussion Group

As we're beginning to see some good results now, I've created a group for anyone who's interested in discussing results, improvements and future developments. This should help keep discussions and bugs/issues separate, and make them easier to manage.

Feel free to join in at: https://gitter.im/voice-synthesis/tacotron

I might create sub-topics for a few other popular voice synthesis implementations too, like Deep Voice, Merlin and char2wav.

post-processing net with pre-net??

I did not find any hint in the original paper that a pre-net is added before the CBHG in the post-processing module, so why does the author add one in the code 'tacotron/networks.py', decode2?

Generating natural speech, reducing noise

The published samples seem to have very low background noise - is this a result of the 2 million training steps mentioned in the paper progressively reducing the non-signal parts of the output to silence?

Or is the silence achieved by some other post-processing, like a denoising autoencoder or a low-pass filter?

What would still need to be implemented to enable this code to generate natural sounding, non-robotic speech? I'd be interested to hear your thoughts, and helping out if I can.

good results

https://github.com/ggsonic/tacotron/blob/master/10.mp3
Based on your code, I can get clear voices, like the one above.
The text is: The seven angels who had the seven trumpets prepared themselves to sound.
You can hear some of the words clearly.
The main changes concern 'batch_norm': I use instance normalization instead, and I think there are problems in the batch norms.
There may also be something wrong with the hp.r-related data flows, but I don't have time to figure that out for now.
Later this week I will commit my code. Thanks for your great work!
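For reference, the instance-normalization substitution mentioned above can be sketched in NumPy: each example's channels are normalized over the time axis, with no batch statistics involved, so it behaves the same at training and inference time (this is a sketch, not the commenter's actual code):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """x: (batch, time, channels). Normalize each (example, channel)
    pair over the time axis, independently of the rest of the batch."""
    mean = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```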

AttributeError: 'Graph' object has no attribute 'y'

Using the latest version, the program "eval.py" throws an error


Traceback (most recent call last):
  File "eval.py", line 70, in <module>
    eval()
  File "eval.py", line 26, in eval
    g = Graph(is_training=False)
  File "tacotron\train.py", line 35, in __init__
    self.decoder_inputs = shift_by_one(self.y)
AttributeError: 'Graph' object has no attribute 'y'

Inefficient RAM usage?

Hey there! First of all: thanks for the amazing work!

I've hit the problem that my train.py process gets killed by the Linux kernel's OOM killer. My question is: has anyone experienced the same? I guess there's some kind of inefficient RAM usage (I suspect the normalization computations, though I'm just starting to look into it).

I used a different data set but the rest of the code is exactly the same.

Thanks in advance.

EDIT: I forgot to mention that this happens even though I'm running on the GPU!

How many epochs?

Currently, I'm running train.py. The paper says "..., which starts from 0.001 and is reduced to 0.0005, 0.0003, and 0.0001 after 500K, 1M and 2M global steps". 2 million?! So I think I need to be patient. How many epochs do I have to run before some human-like samples are generated? Has anyone tried more than 100 epochs?

def load_vocab(): Is the vocab variable set properly?

@Kyubyong
According to the comment, there should be a capital 'S' in the vocab variable; however, there is no 'S', but there is a "'" (apostrophe).

def load_vocab():
    vocab = "E abcdefghijklmnopqrstuvwxyz'" # E: Empty, S: end of Sentence
    char2idx = {char:idx for idx, char in enumerate(vocab)}
    idx2char = {idx:char for idx, char in enumerate(vocab)}
    return char2idx, idx2char

If this is an error, does it affect the performance?
If it is not an error, can you clarify it?

Thank you

Please share your models!

Hey deep learning/tacotron enthusiasts!

After having done a sanity check, I'm getting busier with other projects (and so are my GPUs). Since quite a few people are interested in Tacotron and this humble repo, I hope you will run my code, or your own tweaked version, and share your specifications, results, and pretrained model files. (I'm particularly curious whether the multi-GPU version works.) I'd like to organize them in the README file so that everyone can easily see them.

Help load_vocab function!

prepro.py

def load_vocab():
    vocab = "EG abcdefghijklmnopqrstuvwxyz'" # E: Empty. ignore G
    char2idx = {char:idx for idx, char in enumerate(vocab)}
    idx2char = {idx:char for idx, char in enumerate(vocab)}
    return char2idx, idx2char

This load_vocab supports only the characters [a-z']. I saw that Tacotron can possibly interpret 256 characters. Why use char2idx instead of the ASCII function (ord)?
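One plausible reason (my assumption, not the author's stated rationale): the transcripts are normalized to lowercase letters, spaces, and apostrophes, so a compact vocabulary keeps the embedding table small and the indices dense, whereas ord() would reserve 256 rows mostly for characters that never occur:

```python
vocab = "EG abcdefghijklmnopqrstuvwxyz'"   # 30 symbols
char2idx = {char: idx for idx, char in enumerate(vocab)}

print(len(char2idx))    # the embedding table needs only len(vocab) rows
print(char2idx['a'])    # indices are dense and start at 0
```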

Empty generated waves

Hi,
Thank you so much for your great work!

To save training time, I took the wave files under the Genesis folder and updated the text.csv file accordingly.
Training completed successfully for these 1,532 files, as shown below:

python train.py
Training Graph loaded
2017-06-08 10:35:51.200597: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-08 10:35:51.200628: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-08 10:35:51.200633: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Done

Then I ran the evaluation script, which generated a text file with 31 sentences and their waves:
python eval.py
Graph loaded
2017-06-08 13:22:30.452388: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-08 13:22:30.452540: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-08 13:22:30.452657: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
WARNING:tensorflow:Standard services need a 'logdir' passed to the SessionManager
Restored!
Done

The problem is that all the generated wave files are almost empty, with some noise from the middle to the end.

This is one of the these generated files:
(model_epoch_200_gs_1000_0.zip)

Any idea how to fix this issue?
Many thanks,
Hamdy

Where do Timesteps = 155 come from?

I trained the model with various sound files; it always complains about the size of the timesteps when evaluating, and I need to change the value every time, taking it from the debugger error.

Is it just me, or does everyone have the same problem?

Where does that number come from? Why is the default 155?

Character Embedding

In this implementation, is the input sentence raw text rather than 256 embedded values? Does the idea of using raw text derive from the smaller size of the character set?
I thought this kind of method is what character embedding meant in the original article.

Low GPU usage

I train the model on my two Tesla M40 GPUs. When I use the nvidia-smi command to check GPU usage, it always stays low, and only one GPU is used.

How can I make full use of the two GPUs?

I've tried increasing the queue capacity and the number of threads, but it helps little.

Is post-processing net useful?

I have trained on the full data set for 40 epochs. My mean_loss curve is like this:

[training-curve screenshot]

But I find that mean_loss1 is very small.

[loss screenshot]

What's the difference between mean_loss1 and mean_loss2?

Have you tried training with the full dataset?

1gpu

This result is from training on a single GPU.

8gpu

This result is from training on 8 GPUs. (BTW: you need to set BN = None, or you will get a strange result, because of the batch-norm problem on multiple GPUs.)

It seems to be an Adam optimizer problem.

Is tacotron inference real-time?

I trained the model with one sample. The sample results from eval.py are quite noisy, but they are recognizable human speech. Generating a sample takes the following times (input: 80 time steps, 4 s):

encoding/decoding: 23.3 s
spectrogram2wav(): 0.96 s

This delay is slower than real time. Isn't Tacotron inference real-time?
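A convenient way to quantify this is the real-time factor (RTF): synthesis time divided by the duration of the generated audio, where RTF < 1 means faster than real time. Plugging in the numbers above:

```python
audio_seconds = 4.0        # 80 time steps ~ 4 s of audio
encode_decode = 23.3       # seconds spent in encoding/decoding
vocoder = 0.96             # seconds spent in spectrogram2wav()

rtf = (encode_decode + vocoder) / audio_seconds
print(rtf)                 # well above 1, i.e. slower than real time here
```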

A question for the concat process when forming the input to the decode RNNs

Hi. Thanks for your code, which has helped me study a lot, but I have a question after reading it. In chapter 3.3, the second sentence says: "We concatenate the context vector and the attention RNN cell output to form the input to the decoder RNNs." In your code, I cannot find this concatenation. Did you forget to do it, or do you think it is not necessary?

single GPU is better?

6-gpu

batch_size = 32*6 and num_gpus = 6(train_multi_gpus.py)

1-gpu

batch_size = 32 (train.py)

It seems the single GPU is much faster and the training curve is smoother.

eval.py: Evaluation broken

Hey guys, the latest commits from today seem to break the evaluation code, more precisely spectrogram2wav.

If you're not using log magnitudes, eval.py crashes during sampling because the buffer the audio is generated from is not finite. I checked eval.py against the previous revision. Assuming you're not using log magnitudes (i.e. hp.use_log_magnitude = False), this is the change:

audio = spectrogram2wav(np.power(np.e, s)**hp.power)

was changed to

audio = spectrogram2wav(s**hp.power)

I'm not quite sure, since I'm a bit confused by all the log vs. non-log discussion going on, but I suspect this is wrong. The paper says they predict a mel-scale spectrogram. Correct me if I'm wrong, but isn't the mel scale already logarithmic?
So my assumption is that the first line is still valid for non-log magnitudes, right?
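The distinction only matters at inversion time: if `s` stores log magnitudes you must exponentiate before raising to `hp.power`; if `s` is already linear you use it directly. Mixing the two paths produces exactly the non-finite buffer described above, since negative log values raised to a fractional power give NaN. A NumPy sketch (the power value 1.2 is illustrative):

```python
import numpy as np

def to_linear(s, use_log_magnitude, power=1.2):
    """Recover linear-scale magnitude**power from a predicted spectrogram."""
    mag = np.exp(s) if use_log_magnitude else s
    return mag ** power

s_log = np.log(np.array([0.5, 1.0, 2.0]))  # log-magnitude predictions
s_lin = np.array([0.5, 1.0, 2.0])          # the same magnitudes, linear scale

# Both paths agree when the flag matches the data:
assert np.allclose(to_linear(s_log, True), to_linear(s_lin, False))
# But feeding log values through the linear path yields NaNs
# (negative base, fractional exponent), i.e. the crash described above.
```

So the `np.power(np.e, s)` line is the right inversion for log magnitudes, and the plain `s**hp.power` line is right only for linear magnitudes; the commit broke things by using one for the other.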

memory leak?

After running train.py for 1d 23h, it uses a lot of RAM, and the training speed went from 2.23 b/s down to 18.5 b/s:

top - 08:47:16 up 2 days, 18:20, 2 users, load average: 10.53, 8.43, 7.01
Tasks: 248 total, 1 running, 247 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.4 us, 0.4 sy, 0.0 ni, 77.5 id, 20.7 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16363244 total, 329000 free, 15569480 used, 464764 buff/cache
KiB Swap: 16707580 total, 10609236 free, 6098344 used. 275080 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21894 kyo 20 0 47.847g 0.014t 171628 S 15.2 93.2 1742:11 python

Python train.py gives me ValueError: None values not supported

Hi, I have been facing this issue for the past two days while trying to run the program. I'm running it on Ubuntu 16.04 with an Nvidia 965M.
I have tried the following:
using both Python 2.7 and Python 3
using TensorFlow version 1.0.0

Could you please help me resolve the issue? Here is my complete output:

Tensor("encoder/highwaynet_3/add:0", shape=(32, ?, 128), dtype=float32)
Traceback (most recent call last):
  File "train.py", line 93, in <module>
    main()
  File "train.py", line 73, in main
    g = Graph(); print("Training Graph loaded")
  File "train.py", line 37, in __init__
    self.memory = encode(self.x, is_training=is_training) # (N, T, E)
  File "/home/user/tacotron/networks.py", line 57, in encode
    memory = gru(enc, hp.embed_size//2, True) # (N, T, E)
  File "/home/user/tacotron/modules.py", line 209, in gru
    outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell, cell_bw, inputs, dtype=tf.float32)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 363, in bidirectional_dynamic_rnn
    seq_dim=time_dim, batch_dim=batch_dim)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 2346, in reverse_sequence
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 2776, in reverse_sequence
    batch_dim=batch_dim, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 504, in apply_op
    values, as_ref=input_arg.is_ref).dtype.name
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 360, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.

Thanks in advance

Is it only for CPU training?

I've been trying to train using the GPU, but so far I haven't been successful.

I changed the with tf.device('cpu:0'): statement to with tf.device('gpu:0'): in the get_batch(is_training=True) function defined in train.py, but I got this error:

InvalidArgumentError (see above for traceback): Cannot assign a device to node 'batch': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
Colocation Debug Info:
Colocation group had the following types and devices:
QueueEnqueueV2: CPU
QueueSizeV2: CPU
QueueCloseV2: CPU
QueueDequeueManyV2: CPU
PaddingFIFOQueueV2: CPU
[[Node: batch = QueueDequeueManyV2[component_types=[DT_INT32, DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/device:GPU:0"](batch/padding_fifo_queue, batch/n)]]

I also modified in the main() function right after sv.managed_session

    with sv.managed_session() as sess:
        with tf.device("/gpu:0"):

But it still does not use the GPUs.

What must be done to enable GPU training?
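Note that the error above is about the queue ops (QueueEnqueueV2, QueueDequeueManyV2, etc.), which only have CPU kernels; the input pipeline is supposed to stay on the CPU while the network itself runs on the GPU. A conceptual sketch of that placement rule, using a plain-Python stand-in for the device map (the op names here mirror the error message; the dict is illustrative, not a TensorFlow API):

```python
# Queue-based input ops only have CPU kernels in TF 1.x, so they must be
# pinned to CPU; compute-heavy graph ops can go to the GPU.
CPU_ONLY_OPS = {
    "PaddingFIFOQueueV2", "QueueEnqueueV2", "QueueDequeueManyV2",
    "QueueSizeV2", "QueueCloseV2",
}

def place(op_type, preferred_device="/gpu:0"):
    """Return the device an op should run on, falling back to CPU for queue ops."""
    return "/cpu:0" if op_type in CPU_ONLY_OPS else preferred_device

print(place("QueueDequeueManyV2"))  # /cpu:0
print(place("MatMul"))              # /gpu:0
```

In TensorFlow terms: keep get_batch() under with tf.device('/cpu:0'): as the repo does, and either pin the model under /gpu:0 or pass allow_soft_placement=True in the session config so TensorFlow places each op on a device that actually has a kernel for it.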

simple error

A simple error in modules.py, line 180:
output = conv1d(inputs, k, hp.embed_size//2, 1)

should be
output = conv1d(inputs, hp.embed_size//2, k)

train_multi_gpus.py error

When I run train_multi_gpus.py, I get this error:

Traceback (most recent call last):
  File "train_multi_gpus.py", line 137, in <module>
    main()
  File "train_multi_gpus.py", line 117, in main
    g = Graph(); print("Training Graph loaded")
  File "train_multi_gpus.py", line 58, in __init__
    is_training=is_training) # (N, T', hp.n_mels*hp.r)
  File "/data1/zuoxiang/tacotron/networks.py", line 85, in decode1
    dec = attention_decoder(dec, memory, num_units=hp.embed_size) # (N, T', E)
  File "/data1/zuoxiang/tacotron/modules.py", line 247, in attention_decoder
    probability_fn=tf.nn.softmax)
TypeError: __init__() got an unexpected keyword argument 'probability_fn'

Working example

Hi, what is a working example on the Bible dataset? Does the last commit work on this dataset?

Padding Error for Log Magnitude

It is very convenient to use tf.train.batch to prepare a batch of data for training, as this method pads automatically to the maximum shape of the tensors.

However, this padding uses the value 0 when dynamic_pad=True. For log magnitudes, we should pad with -inf instead.

I don't know how to correct this more elegantly. Could anyone help fix it?
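One common workaround, rather than teaching the queue to pad with -inf, is to normalize the log magnitudes to a nonnegative range before batching, so that the queue's zero padding happens to mean "silence". A hedged NumPy sketch (the -100 dB floor is an illustrative choice, not this repo's setting):

```python
import numpy as np

def normalize_log_mag(log_mag, floor_db=-100.0):
    """Map log magnitudes (dB) to [0, 1] so zero-padding means silence, not energy."""
    clipped = np.maximum(log_mag, floor_db)   # clamp -inf/very quiet frames to the floor
    return (clipped - floor_db) / -floor_db   # floor_db -> 0.0, 0 dB -> 1.0

frame = np.array([-120.0, -50.0, 0.0])        # log magnitudes in dB
print(normalize_log_mag(frame))               # -> 0.0, 0.5, 1.0
```

With this transform, tf.train.batch's dynamic_pad=True zeros are valid data (the floor), and the inverse transform is applied after prediction.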


Exploding train loss

It was previously mentioned here that in cases where the train loss exploded, it helps to revert to an earlier checkpoint and lower the learning rate. Are we talking about the training epoch loss, or the per-batch loss?
What learning-rate decrease is recommended for that?
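For the learning-rate part, a common recipe (a general practice, not something this repo prescribes) is to restore the last good checkpoint and halve the learning rate, halving again on each subsequent explosion, with a floor so it never reaches zero. A minimal sketch with illustrative values:

```python
def lowered_lr(base_lr, num_explosions, factor=0.5, min_lr=1e-5):
    """Halve the learning rate each time the loss explodes, with a floor."""
    return max(base_lr * factor ** num_explosions, min_lr)

print(lowered_lr(1e-3, 0))  # 0.001   (no explosions yet)
print(lowered_lr(1e-3, 2))  # 0.00025 (halved twice)
```

The "explosion" check is usually done on a smoothed per-batch loss (e.g. a running mean), since a single-batch spike is too noisy a trigger.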
