
Comments (3)

dnouri commented on July 20, 2024

To answer your first question: Yes, that's how the two sets are split up. It's perfectly reasonable to override train_test_split with your own method in a subclass and do whatever you want. I'm not sure I understand the reasoning behind training on all folds, though. So you don't want to use a validation set? I guess that's something we could support ourselves. Maybe you can try and see what changes to the code are required to make this happen (say, when eval_size=None). (Let's open a more specific issue or pull request for this one.)
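For illustration, here is a minimal sketch of the kind of split helper such an override might implement, including the proposed eval_size=None behavior. This is not nolearn's actual code; the function name and signature are only assumptions mirroring the discussion above:

```python
# Illustrative sketch (not nolearn's actual implementation): a
# train/validation split where eval_size=None means "train on
# everything, use no validation set", as proposed above.
def train_test_split(X, y, eval_size=0.2):
    if eval_size is None:
        # No validation set: the whole data goes into training.
        return X, X[:0], y, y[:0]
    n_valid = int(len(X) * eval_size)
    n_train = len(X) - n_valid
    return X[:n_train], X[n_train:], y[:n_train], y[n_train:]
```

A subclass would plug a method with this behavior in place of the default split.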

Regarding the second question: just take a look at the train_history_ attribute; it's all in there.
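For illustration, train_history_ is roughly a list with one dict per epoch. The exact keys depend on the nolearn version, so the names below (epoch, train_loss, valid_loss) are assumptions to verify against your installation; the history here is mocked up:

```python
# Mocked-up history in the shape train_history_ is expected to have:
# one dict per training epoch (key names are assumptions).
train_history_ = [
    {"epoch": 1, "train_loss": 0.90, "valid_loss": 0.95},
    {"epoch": 2, "train_loss": 0.70, "valid_loss": 0.80},
    {"epoch": 3, "train_loss": 0.60, "valid_loss": 0.78},
]

# Pull out the validation-loss curve and find the best epoch.
valid_losses = [h["valid_loss"] for h in train_history_]
best = min(train_history_, key=lambda h: h["valid_loss"])
print(valid_losses)   # [0.95, 0.8, 0.78]
print(best["epoch"])  # 3
```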

from nolearn.

run2 commented on July 20, 2024

Ok, Daniel -

When you use KFold, every iteration of the fold generator gives you a different set of train indices and validation indices. I'm sure you're with me, but just to emphasize: if I'm doing 3-fold CV on a 9-element array, the first iteration's train indices may be 1-6 with 7-9 as validation; the second iteration's train indices can be 2-7 with 1, 8, 9 as validation; and so on. If I take only the indices from the first iteration, some samples will never be trained on. So the generic approach is to train and validate on all iterations of the KFold for each model (i.e. each parameter setting), store the validation results from all the iterations, and compare models using statistical measures over the validation errors spread across the folds. Say you use only one iteration of KFold: you might get a 7% validation error on one model and 6% on another, and choose the second. But that only compares validation on one particular subset of the training instances, so it's not quite a fair comparison. Better to do, say, 10-fold CV, collect the 10 validation errors from the 10 iterations, and then compare the distribution of validation errors across the different models.
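The fold rotation described above can be sketched without any libraries (sklearn's KFold yields the equivalent index pairs):

```python
# Hand-rolled 3-fold split over 9 samples, mirroring the example above:
# each iteration holds out a different third as the validation set.
def kfold_indices(n, n_folds):
    fold_size = n // n_folds
    for k in range(n_folds):
        valid = list(range(k * fold_size, (k + 1) * fold_size))
        train = [i for i in range(n) if i not in valid]
        yield train, valid

for train_idx, valid_idx in kfold_indices(9, 3):
    print(train_idx, valid_idx)
# Every index lands in a validation set exactly once, so scoring a
# model on all folds eventually validates on every training instance.
```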

I will check the train_history_ attribute - but I guess it will not have the kind of history I just mentioned.


dnouri commented on July 20, 2024

I see what you mean. For a proper cross-validation I think you want to use sklearn's utilities for that. They will give you a test set that the network will never see, and a train and validation set that the network uses for training and possibly early stopping. The right thing to do is to evaluate the network on a held-out test set that was used neither for training nor for validation. That way you'll be able to train however many networks you want using cross-validation.
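A sketch of that protocol in plain Python (standing in for sklearn's train_test_split/KFold utilities): carve off the test set first, cross-validate on the remainder, and touch the test set only once at the end.

```python
# Protocol sketch: held-out test set + cross-validation on the rest.
data = list(range(20))   # stand-in for (X, y) samples

# 1) Hold out a test set the models never see during training/validation.
test = data[:4]
trainval = data[4:]

# 2) Cross-validate on the remaining samples only.
n_folds = 4
fold_size = len(trainval) // n_folds
folds = [trainval[k * fold_size:(k + 1) * fold_size] for k in range(n_folds)]

for k in range(n_folds):
    valid = folds[k]
    train = [x for j, f in enumerate(folds) if j != k for x in f]
    # ... fit a fresh network on `train`, score it on `valid` ...

# 3) Only the final, chosen model is evaluated on `test`.
assert not set(test) & set(trainval)
```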

Regarding "validation losses in an epoch (across all the folds)": this isn't something that the network can do currently. NeuralNet trains one set of parameters; if you want to train multiple networks, say to do cross-validation, the right thing to do is to train multiple NeuralNets (again, check the scikit-learn utilities for that). The train_history_ attribute only has validation losses for the single validation set that the single net uses.
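To compare models the way described above, one would collect one validation error per fold per model and compare the distributions, e.g. mean and spread. Here is a minimal stdlib sketch with hypothetical per-fold errors; with scikit-learn, cross_val_score returns such a per-fold score array for any estimator:

```python
from statistics import mean, stdev

# Hypothetical per-fold validation errors for two models (10-fold CV).
model_a = [0.07, 0.08, 0.06, 0.07, 0.09, 0.07, 0.08, 0.06, 0.07, 0.08]
model_b = [0.06, 0.10, 0.04, 0.09, 0.05, 0.11, 0.05, 0.06, 0.10, 0.04]

for name, errs in [("A", model_a), ("B", model_b)]:
    print(name, round(mean(errs), 3), "+/-", round(stdev(errs), 3))
# Model B has a slightly lower mean error but a much wider spread
# across folds -- something a single train/validation split can
# never reveal.
```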

