Comments (7)
A couple of follow-up questions:
- Did you change any of the hyper-parameters?
- Did you use the latest version of the repo?
- Did you build the sentence id tensor?
from pytorch_gbw_lm.
I encountered the same problem as maydaygmail.
- I didn't change any of the hyper-parameters.
- Yes, I'm using the latest code.
- Yes, I built the sentence id tensor using process_gbw.py.
I debugged the code, and the length returned from this line "start_idx, end_idx, length = self.sentence_id[seq_id]" is 0.
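For anyone else hitting this, here is a minimal sketch of the symptom (the variable names are illustrative, not the repo's actual code): if the sentence-id tensor is left zero-initialized, every lookup unpacks to a length of 0.

```python
import torch

# Hypothetical reproduction: a zero-initialized sentence-id tensor where
# each row is meant to hold (start_idx, end_idx, length) for one sentence.
sentence_id = torch.zeros(5, 3, dtype=torch.long)

start_idx, end_idx, length = sentence_id[2].tolist()
print(length)  # 0 -> the data loader sees every sentence as empty
```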
I pushed a fix for the issue. The size of the SID tensor should be the number of sentences in the corpus. It was mistakenly set to the number of words in the corpus.
This issue is fixed after I pulled the newest code.
However, the loss and PPL drop very quickly when I use tied mode. Is that normal? I think the PPL should stay above 30.
python main.py --tied
load word frequency mapping - complete
loaded tensor torch.Size([798949912])
loaded tensor torch.Size([30301027, 2])
#sentences 30301027
load train data - complete
#sentences 6073
load test data - complete
| epoch 1 | 0/312089 batches | lr 0.10 | ms/batch 0.47 | loss 13.33 | ppl 618095.27
| epoch 1 | 1000/312089 batches | lr 0.10 | ms/batch 153.60 | loss 6.06 | ppl 427.51
| epoch 1 | 2000/312089 batches | lr 0.10 | ms/batch 153.57 | loss 3.31 | ppl 27.52
| epoch 1 | 3000/312089 batches | lr 0.10 | ms/batch 153.56 | loss 2.40 | ppl 11.04
| epoch 1 | 4000/312089 batches | lr 0.10 | ms/batch 153.64 | loss 1.94 | ppl 6.99
| epoch 1 | 5000/312089 batches | lr 0.10 | ms/batch 153.64 | loss 1.67 | ppl 5.31
| epoch 1 | 6000/312089 batches | lr 0.10 | ms/batch 153.60 | loss 1.49 | ppl 4.42
| epoch 1 | 7000/312089 batches | lr 0.10 | ms/batch 153.64 | loss 1.35 | ppl 3.87
| epoch 1 | 8000/312089 batches | lr 0.10 | ms/batch 153.64 | loss 1.26 | ppl 3.51
| epoch 1 | 9000/312089 batches | lr 0.10 | ms/batch 153.63 | loss 1.18 | ppl 3.25
| epoch 1 | 10000/312089 batches | lr 0.10 | ms/batch 153.61 | loss 1.12 | ppl 3.06
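For context on what --tied does: tied mode shares one weight matrix between the input embedding and the output softmax layer, which requires the embedding and decoder dimensions to match. A minimal PyTorch sketch (the dimensions are illustrative, not the repo's actual sizes):

```python
import torch.nn as nn

vocab, dim = 1000, 256
embed = nn.Embedding(vocab, dim)
decoder = nn.Linear(dim, vocab, bias=False)

# Tie the weights: both layers now share one (vocab x dim) parameter.
decoder.weight = embed.weight

print(decoder.weight.data_ptr() == embed.weight.data_ptr())  # True
```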
The settings for the projection matrix didn't work well.
I changed the settings for the projection matrix from LSTM 2048->256 to LSTM 1024->256.
Also, there was another small mistake in the process_gbw script.
I ran a quick test to check everything, and the model reaches a perplexity of 63.44 after the first epoch.
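For reference, the 1024->256 change above can be expressed with `proj_size` in recent PyTorch (1.8+); this is an illustrative sketch with made-up batch and sequence sizes, not the repo's model code:

```python
import torch
import torch.nn as nn

# LSTM with hidden size 1024 projected down to 256 (the 1024->256 setting).
lstm = nn.LSTM(input_size=256, hidden_size=1024, proj_size=256, batch_first=True)

x = torch.randn(4, 20, 256)  # (batch, seq, features)
out, _ = lstm(x)
print(out.shape)  # the projection makes the output dim 256, not 1024
```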
The size of the projection matrix doesn't seem to explain why the PPL drops so quickly. Can you give more details on the mistake in the process_gbw script?
I forgot to fill the sid tensor, so the start_idx and length values were random.
I pushed the fix, but you'll have to rerun the process_gbw script.
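After rerunning the script, a quick sanity check like the following (tensor name and values are hypothetical) can confirm the tensor was actually filled: every row should have a positive length, and the start/end offsets should be consistent with it.

```python
import torch

# Hypothetical filled sentence-id tensor: rows of (start_idx, end_idx, length).
sid = torch.tensor([[0, 3, 3], [3, 5, 2], [5, 9, 4]])

lengths_ok = bool((sid[:, 2] > 0).all())                    # no zero-length rows
offsets_ok = bool((sid[:, 1] - sid[:, 0] == sid[:, 2]).all())  # end - start == length

print(lengths_ok and offsets_ok)  # True
```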