Comments (4)
Hey buddy, I've read the code of the Transformer. That's cool. Here's something I can't understand about the decoder's input. It's acceptable that we use 'S i want a beer' as the decoder_input during training. However, during testing, the decoder_input should start with just 'S', and then we use the prediction obtained by passing 'S' through the decoder as the next decoder input, instead of feeding the whole translated sentence as the decoder_input. That's because during testing/prediction, the translated sentence can't be used anywhere in the model except for the final comparison.
That's what I understand, but I have no idea whether I'm right or wrong, since I've seen that the parameters of the forward function of class Transformer include 'dec_inputs'. If I'm right, it would be better to create another function for predicting the translated sentences. What do you think?
from nlp-tutorial.
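For reference, here is a minimal sketch of what test-time decoding looks like: start from 'S' alone and feed each prediction back in as the next decoder input. It assumes an illustrative model(enc_inputs, dec_inputs) signature that returns per-position logits; the names are not the repo's exact API.

```python
import torch

# Hedged sketch of test-time (autoregressive) greedy decoding.
# Assumes model(enc_inputs, dec_inputs) -> logits of shape (batch, cur_len, vocab_size)
# and that start_symbol is the vocabulary index of 'S'. Names are illustrative.
def greedy_decode(model, enc_inputs, start_symbol, max_len):
    dec_inputs = torch.full((1, 1), start_symbol, dtype=torch.long)  # begin with only 'S'
    for _ in range(max_len):
        logits = model(enc_inputs, dec_inputs)                       # (1, cur_len, vocab_size)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)      # best token at the last step
        dec_inputs = torch.cat([dec_inputs, next_token], dim=1)      # feed the prediction back in
    return dec_inputs
```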
Did you mean the role of the Best-First-Search Decoder?
from nlp-tutorial.
Yep, just as you coded it in Transformer(Greedy_decoder); I overlooked it... haha, embarrassing. So, another question: is the greedy decoder what we use in real projects for both training and testing, or just for testing?
from nlp-tutorial.
Please look up the difference between teacher forcing and non-teacher forcing; it will help you.
It doesn't matter whether decoding is greedy or not during training, but using teacher forcing makes the model converge faster.
from nlp-tutorial.
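To make the teacher-forcing point concrete, here is a hedged sketch of one training step using the same illustrative model signature as above: the decoder is fed the ground-truth, right-shifted target ('S i want a beer') in a single parallel pass, and the un-shifted target ('i want a beer E') is used only by the loss.

```python
import torch.nn as nn

# Hedged sketch of one teacher-forcing training step (illustrative signature, not the repo's exact API).
# dec_inputs  : ground-truth target shifted right, e.g. 'S i want a beer'
# dec_targets : ground-truth target, e.g. 'i want a beer E' (used only by the loss)
def train_step(model, optimizer, criterion, enc_inputs, dec_inputs, dec_targets):
    logits = model(enc_inputs, dec_inputs)                 # (batch, tgt_len, vocab_size), one parallel pass
    loss = criterion(logits.reshape(-1, logits.size(-1)),  # flatten positions for CrossEntropyLoss
                     dec_targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# criterion would typically be nn.CrossEntropyLoss(), shared across steps.
```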
Related Issues (20)
- seq2seq_torch maybe have a small mistake HOT 2
- about seq2seq(attention)-Torch multiple sample training question
- a question about transformer HOT 1
- BERT-Torch.py may have a small mistake
- Version 2.0 will be updated
- link of NNLM and word2vec is disabled
- CODE
- Question?
- Why is src_len+1 in Transformer demo? HOT 1
- About make_batch of NNLM
- Bi-LSTM attention calc may be wrong HOT 2
- In code 4-1.Seq2Seq might have wrong section
- LongTensor error dim in BiLSTM Attention with new data
- 3-3.Bi-LSTM may have wrong padding
- 5.1 Transformer may have wrong position embed
- BiLstm(tf) maybe have mistake
- Faster attention calculation in 4-2.Seq2Seq? HOT 1
- The Adam in 5-1.Transformer should be replaced by SGD
- The Learning Rate in 5-2.BERT must be reduced.
- Seq2Seq(Attention) may have a mistake