
Comments (20)

candlewill commented on August 16, 2024

Coming soon!

from tacotron.

Kyubyong commented on August 16, 2024

@xuerq I'm running a sanity-check test. I'll share with you as soon as it's done.


Durham commented on August 16, 2024

What error value have you achieved so far? Could you please post convergence plots? I might be wrong, but I think the current model needs some work before it can synthesize speech from text. For a single wav file, one needs to get the error below 0.08 on average to hear good speech (reached after 400 epochs, 2400 weight-update steps in total; I put 6 identical files in the list), and I had to change the default learning rate to achieve that. For two wav files, I was not able to train it to speak both: optimization got stuck near 0.10 despite all my efforts to find a good learning rate and optimizer. With text-to-text sequence-to-sequence models, if you can't get them to reproduce a few training samples exactly, that usually means they won't work for larger sets either, although there are exceptions. So debugging the model on simple cases is probably needed. Maybe do what the paper describes as "ablation experiments": use a simple GRU encoder and see if it works.

from tacotron.
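The single-sample overfitting check Durham describes can be sketched as follows. This is a toy NumPy stand-in, not the repo's TensorFlow graph; the 6 duplicated samples and the 2400-step / 0.08-loss figures mirror the comment above, and everything else is an illustrative assumption:

```python
# Sanity check: before training on the full corpus, verify the optimization
# setup can drive the loss on a handful of duplicated samples close to zero.
# Toy linear model in plain NumPy, standing in for the real network.
import numpy as np

rng = np.random.default_rng(0)

# Six identical "samples", mimicking putting 6 copies of one wav in the list.
x = np.tile(rng.normal(size=(1, 8)), (6, 1))   # inputs
y = np.tile(rng.normal(size=(1, 4)), (6, 1))   # targets
W = np.zeros((8, 4))                           # model parameters

lr = 0.05
for step in range(400):
    err = x @ W - y
    loss = np.abs(err).mean()                  # L1 metric, as in Tacotron
    grad = 2 * x.T @ err / err.size            # gradient of the MSE objective
    W -= lr * grad

# If loss does not fall well below ~0.08 here, the optimizer/learning-rate
# combination needs work before attempting the full dataset.
print(round(loss, 4))
```

If even this degenerate setup fails to converge, larger experiments are unlikely to fare better, which is the point of the debugging strategy above.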

onyedikilo commented on August 16, 2024

I trained on a single wav file (2 identical files in the list), changed dropout to 1.0 and the learning rate to 0.01. Trained for 1350k steps (I think it was more than 1000 epochs) and the loss came down to 0.057 (18h 41m on a GTX 1080).

[screenshot: training loss curve]

I tried to generate audio with the model using the same text: 3/5 of the file is silent, and the rest contains some low-quality speech. I was trying to overfit the network to see what it would generate.

One thing I did not understand: while the loss is 0.057 on the training data, the evaluation script shows a loss of around 0.58 with the same text and wav. Can someone explain the difference between the two losses?

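One plausible explanation for the train/eval gap asked about above (an editorial assumption, not something confirmed in this thread) is teacher forcing: during training the decoder is fed ground-truth previous frames, while at evaluation it feeds back its own predictions, so small per-step errors compound. A toy autoregressive predictor illustrates the effect; all numbers here are illustrative:

```python
import numpy as np

true_a, learned_a = 0.9, 0.88       # slightly mis-learned one-step dynamics
T = 60
x = true_a ** np.arange(T)          # ground-truth sequence x_t = 0.9**t

# Teacher forcing: predict x_{t+1} from the TRUE x_t at every step.
tf_pred = learned_a * x[:-1]
tf_loss = np.abs(tf_pred - x[1:]).mean()

# Free running: feed the model's own prediction back in, as at eval time.
fr = np.empty(T)
fr[0] = x[0]
for t in range(1, T):
    fr[t] = learned_a * fr[t - 1]
fr_loss = np.abs(fr[1:] - x[1:]).mean()

# The free-running loss is much larger, even though the model is identical.
print(tf_loss < fr_loss)
```

The same model thus reports very different losses depending on whether it is conditioned on ground truth or on its own outputs, which would account for a 0.057-vs-0.58 style gap without any bug.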

Kyubyong commented on August 16, 2024

The fact that 3/5 of the generated file is silent looks fine, because we intended to reconstruct the zero paddings. The training curve looks good, too. When I trained on the whole dataset, the training curve looked messy; it simply kept hanging around 0.2.

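Since zero-padded frames are reconstructed along with real speech, one common remedy (a general technique, not necessarily what this repo does; `masked_l1` is a hypothetical helper) is to mask the loss so padded timesteps are ignored:

```python
import numpy as np

def masked_l1(pred, target, lengths):
    """L1 loss averaged only over valid (non-padded) timesteps.

    pred, target: (batch, time, dims); lengths: true length per example.
    """
    batch, time, dims = target.shape
    mask = np.arange(time)[None, :] < np.asarray(lengths)[:, None]  # (batch, time)
    diff = np.abs(pred - target) * mask[:, :, None]
    return diff.sum() / (mask.sum() * dims)

# Demo: example 0 is padded with zeros after 2 frames, example 1 is full length.
target = np.ones((2, 4, 3))
target[0, 2:] = 0.0                  # zero padding
pred = np.zeros_like(target)         # a model predicting pure silence
print(masked_l1(pred, target, [2, 4]))   # 1.0: error over valid frames only
print(np.abs(pred - target).mean())      # 0.75: padding dilutes the plain L1
```

With the unmasked loss, predicting silence everywhere is partially rewarded by the padded region, which is consistent with silent stretches in the generated audio.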

onyedikilo commented on August 16, 2024

The silence was at the beginning of the file, not at the end.

I believe it is messy because you are using a dropout of 0.5 and a learning rate of 0.0001; it should converge in time, and the spikes will gradually get smaller.


[unknown user] commented on August 16, 2024

I trained with a single file for about 2000 epochs and got this, where loss1 is the seq2seq loss and loss2 is the spectrogram loss.
The total training loss was about 0.017.

[screenshot from 2017-05-29: loss1 and loss2 training curves]


candlewill commented on August 16, 2024

I trained the model on the full data for about 130 epochs. The best loss I got was about 0.14. The loss curves are as follows:

[image: training loss curves]

Here is the synthesized audio: http://pan.baidu.com/s/1skMStGT


Spotlight0xff commented on August 16, 2024


candlewill commented on August 16, 2024

@Spotlight0xff I kept all the hyperparameters unchanged.


[unknown user] commented on August 16, 2024

@candlewill how long did it take your machine to reach 180k steps?


candlewill commented on August 16, 2024

@minsangkim142 It took about five days with two Tesla M40 24GB GPUs (only one used for computation).

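For converting between the steps and epochs quoted throughout this thread, the arithmetic is just full passes over the dataset. The helper below is hypothetical; the worked example uses Durham's figures from earlier (2400 steps, 6 files in the list) and assumes a batch size of 1:

```python
def steps_to_epochs(steps, dataset_size, batch_size):
    """One epoch = one full pass over the dataset."""
    steps_per_epoch = max(1, dataset_size // batch_size)
    return steps / steps_per_epoch

# 2400 weight-update steps over a 6-item list with batch size 1
# correspond to 400 epochs, matching the figures quoted above.
print(steps_to_epochs(2400, 6, 1))
```

The same arithmetic run in reverse gives an estimate of wall-clock cost per epoch from a steps-per-day figure like the 180k-steps-in-five-days one above.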

candlewill commented on August 16, 2024

New synthesized speech samples here: http://pan.baidu.com/s/1miohdVy

It was trained on a small dataset: just the Book of Revelation from the Bible. Epoch 2000; best loss 0.53.

[image: training loss curve]


Kyubyong commented on August 16, 2024

Some human-like voice can be heard, though I can't make out what he(?) is saying. (I think that's natural, because the data is far from enough.)
I've recently revised the code. When did you start training?


xuerq commented on August 16, 2024

@candlewill @Kyubyong any new updates ? Thanks!


root20 commented on August 16, 2024

Does it learn attention when you use only one sample for training?
I'm worried it is just memorizing the whole speech sample rather than predicting it from the text input.

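One way to probe root20's concern is to inspect the encoder-decoder attention matrix: genuinely learned alignment tends to be roughly monotonic and diagonal, while a memorizing model often shows flat or scattered attention. The diagonality score below is a hypothetical diagnostic sketch, not part of the repo:

```python
import numpy as np

def diagonality(attn):
    """Crude alignment score for an attention matrix.

    attn: (decoder_steps, encoder_steps), rows sum to 1.
    Returns ~1.0 when attention mass sits on the ideal diagonal.
    """
    dec, enc = attn.shape
    ideal = np.arange(dec) * (enc - 1) / max(dec - 1, 1)  # ideal alignment path
    pos = attn @ np.arange(enc)                           # attended position per step
    return float(1 - np.abs(pos - ideal).mean() / enc)

sharp = np.eye(5)               # perfectly diagonal attention
flat = np.full((5, 5), 0.2)     # uniform attention: no alignment learned
print(diagonality(sharp) > diagonality(flat))
```

Tracking such a score (or simply plotting the attention matrix) during training makes it visible whether attention is aligning text to audio or being bypassed.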

jpdz commented on August 16, 2024

@candlewill Hi, do you have any suggestions for training the model? I listened to the samples from http://pan.baidu.com/s/1miohdVy. Though the results are not good, they are less noisy than what I synthesized. I'd really appreciate your answer.


tuong-olli commented on August 16, 2024

[screenshot from 2017-08-21: training loss curve]
I trained the Tacotron model with 3 audio files, but the loss is very high. The data is Vietnamese.


ashupednekar commented on August 16, 2024

What is the default number of epochs? And where is it set in the code?


giridhar-pamisetty commented on August 16, 2024

@ashupednekar Did you find it?

