Comments (8)
Yes, a reduced data set takes less time to generate and train on.
from kur.
I would like to add some documentation for this example, but it is ready to try!
You should be able to run
bash steps.sh
This will process the data, show you the data, train, and show you the outputs of the model.
If it doesn't work, please post tracebacks. You can execute the steps one by one to see what's going on and debug.
from kur.
It is working, thanks! I am looking forward to your docs too.
I ran bash steps.sh, and though it works, it seems very slow: each epoch is estimated to take over 30 minutes.
from kur.
In fact, I trained for 30 minutes and could only finish half of an epoch.
To make it run faster, I want to sample a small subset of the data, so I added a provider with num_batches to the train and validate sections, as below:
train:
  data:
    - jsonl: ../data/train.jsonl
  provider:
    num_batches: 2
  epochs: 1
  weights:
    initial: inital.w.kur
    best: best.w.kur
    last: last.w.kur
  log: log
validate:
  data:
    - jsonl: ../data/validate.jsonl
  provider:
    num_batches: 1
  weights:
    initial: inital.w.kur
    best: best.w.kur
    last: last.w.kur
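My understanding is that num_batches simply caps how many batches the provider yields per epoch. A rough sketch of the idea in plain Python (illustrative names only, not Kur's actual provider API):

```python
def limited_batches(samples, batch_size, num_batches=None):
    """Yield at most num_batches batches of size batch_size from samples."""
    yielded = 0
    for start in range(0, len(samples), batch_size):
        if num_batches is not None and yielded >= num_batches:
            break
        yield samples[start:start + batch_size]
        yielded += 1

# With num_batches=2, each epoch touches only 2 batches,
# no matter how large the full dataset is.
batches = list(limited_batches(list(range(1000)), batch_size=32, num_batches=2))
print(len(batches))  # 2
```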
However, loading the log still takes a long time. Is that normal, and why? Is there a way to train on a small subset of the data so experiments run quickly?
Thanks
from kur.
The log file is probably just large-ish. It shouldn't take long to load, though. Try deleting the log. If you don't want a log, remove log: log entirely, or try logging less data with:
log:
  path: log
  keep_batch: no
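For a sense of scale: per-batch logging writes one record per batch, while keep_batch: no keeps only per-epoch records, so the log stays tiny. A back-of-the-envelope illustration (the numbers are made up for illustration, not measured from this example):

```python
# Hypothetical sizes to illustrate log growth; not Kur measurements.
batches_per_epoch = 2000
epochs = 10
bytes_per_record = 100

per_batch_log = batches_per_epoch * epochs * bytes_per_record  # one record per batch
per_epoch_log = epochs * bytes_per_record                      # one record per epoch

print(per_batch_log // per_epoch_log)  # per-batch log is 2000x larger
```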
from kur.
Hi @noajshu, when I train with the default Kurfile on a Mac, it took me 30 minutes to finish only 50% of the first epoch. Does that mean training this example is expected to take about 5 hours?
If so, must I use AWS or a Mac with a GPU to try this example?
Is there a way to run this example on a Mac CPU within a reasonable time?
Thanks a lot!
from kur.
Yep, it's going to take a long time on CPU. If you go into make_data.py and change dev = True, then recreate the data and train the model, it will go much faster. This reduces the amount of data you train on by 10x. Your performance may be lower, but you should still get OK results (and sensible text if you use this in generative mode).
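Conceptually, a dev switch like that amounts to subsampling the source data before writing it out. A sketch of the idea (hypothetical names, not the exact code in make_data.py):

```python
dev = True  # flip to True for a quick, small dataset

def subsample(records, factor=10):
    """Keep every factor-th record, shrinking the dataset ~10x."""
    return records[::factor]

records = [{"text": f"sample {i}"} for i in range(1000)]
if dev:
    records = subsample(records)

print(len(records))  # 100 records instead of 1000
```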
from kur.
@noajshu Thank you very much! This example is great, and I really want to see it in Kur.
- With your solution above, make_data.py only takes a few seconds, and kur -v train kurfile.yaml takes less than 4 minutes, compared to the default setting's estimated 5 hours of training.
- The previous 30-minute loading time is gone too.
Now I would like to understand what made loading take 30 minutes previously. Was it the large dataset, so that the smaller dataset reduced the loading time?
Thanks!
from kur.