Comments (3)
To answer your first question: yes, that's how the two sets are split up. It's perfectly reasonable to override train_test_split with your own method in a subclass and do whatever you want. I'm not sure I understand the reasoning behind training on all folds, though. So you don't want to use a validation set? I guess that's something we could support ourselves. Maybe you can try and see what changes to the code are required to make this happen (say, when eval_size=None). (Let's open a more specific issue or pull request for this one.)
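The suggested override can be sketched as a plain function. Note the assumptions: the name train_test_split and the eval_size parameter are taken from the comment above, and in nolearn this logic would live as a method on a NeuralNet subclass whose exact signature may differ by version.

```python
# Sketch only: train_test_split and eval_size come from the comment
# above; the real nolearn method signature may differ by version.
def train_test_split(X, y, eval_size):
    if eval_size is None:
        # No validation set: train on everything, return empty slices.
        return X, X[:0], y, y[:0]
    n_val = int(len(X) * eval_size)
    return X[:-n_val], X[-n_val:], y[:-n_val], y[-n_val:]
```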
Regarding the second question: just take a look at the train_history_ attribute; it's all in there.
from nolearn.
Ok - Daniel
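For reference, train_history_ keeps a per-epoch record. The sketch below mimics its assumed structure, a list with one dict per epoch; the key names used here are an assumption, so check them against your nolearn version.

```python
# Mock of the assumed train_history_ structure: one dict per epoch.
# Key names ('epoch', 'train_loss', 'valid_loss') are assumptions.
train_history_ = [
    {'epoch': 1, 'train_loss': 0.92, 'valid_loss': 0.95},
    {'epoch': 2, 'train_loss': 0.71, 'valid_loss': 0.78},
    {'epoch': 3, 'train_loss': 0.63, 'valid_loss': 0.81},
]

# Pull out the validation curve and the best epoch.
valid_losses = [h['valid_loss'] for h in train_history_]
best = min(train_history_, key=lambda h: h['valid_loss'])
print(best['epoch'], min(valid_losses))
```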
When you use KFold, every iteration of the fold generator gives you a different set of train indices and validation indices. I'm sure you're with me, but just to emphasize: say I'm doing 3-fold CV on a 9-element array. The train indices from the first iteration might be 1-6, with 7, 8, 9 as validation; in the second iteration the train indices could be 2-7, with 1, 8, 9 as validation, and so on. If I take only the indices from the first iteration, some samples are never trained on.

So the generic approach is to train and validate over all the KFold iterations for each model (i.e., each parameter setting), store the validation results from every fold, and compare models using statistical measures over the per-fold validation errors. If you look at only one fold, you might see a validation error of 7% for one model and 6% for another and choose the second. But that comparison is based on a single subset of the training instances, so it's not quite sound. Better to do, say, 10-fold CV, collect the 10 validation errors, and then compare the distributions of validation errors across the different models.
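The fold rotation described above can be sketched like this: a minimal, contiguous-fold version of what sklearn's KFold does (0-based indices here, unlike the 1-based example above).

```python
def kfold_indices(n, k):
    """Yield (train, val) index lists for k contiguous folds over n items."""
    fold_size = n // k
    for i in range(k):
        val = list(range(i * fold_size, (i + 1) * fold_size))
        train = [j for j in range(n) if j not in val]
        yield train, val

# 3-fold CV on 9 elements: each index lands in the validation set
# exactly once, so every sample is both trained on and validated.
for train, val in kfold_indices(9, 3):
    print(train, val)
```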
I will check the train_history_ attribute, but I guess it won't have the kind of history I just mentioned.
I see what you mean. For a proper cross-validation, I think you want to use scikit-learn's utilities. They will give you a test set that the network never sees, plus a train and validation set that the network uses for training and perhaps early stopping. So the right thing to do is to evaluate the network on a held-out test set that was used neither for training nor for validation. That way you'll be able to train however many networks you want using cross-validation.

Regarding "validation losses in an epoch (across all the folds)": this isn't something that the network can do currently. NeuralNet trains one set of parameters; if you want to train multiple networks, say to do cross-validation, the right thing to do is to train multiple NeuralNets (again, check the scikit-learn utilities for that). The train_history_ attribute only has validation losses for the single validation set that the single net uses.
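A hedged sketch of that workflow with scikit-learn's model-selection utilities. Assumptions: LogisticRegression stands in for a nolearn NeuralNet (which follows the scikit-learn estimator interface, so it could be swapped in), and the modern sklearn.model_selection module is used rather than the older sklearn.cross_validation.

```python
# Sketch: held-out test set + per-fold validation scores, with
# LogisticRegression standing in for a nolearn NeuralNet.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=200, random_state=0)

# Hold out a test set that no network ever sees during model selection.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# One validation score per fold: a distribution to compare models on,
# not a single number.
scores = cross_val_score(LogisticRegression(), X_dev, y_dev,
                         cv=KFold(n_splits=5))
print(scores.mean(), scores.std())

# Final, unbiased estimate on the held-out test set.
model = LogisticRegression().fit(X_dev, y_dev)
print(model.score(X_test, y_test))
```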