Comments (3)

mb706 commented on June 12, 2024

I think I misunderstood your question at first and typed out a response that does not answer it. However, I am posting it as a reference for people who come across this and wonder why the dimension of $.result is so much larger in the second code snippet.


What is happening here is that resample() runs the Graph twice, once for training and once for predicting / inference on the "test" set. Since .result always contains the result from the last invocation of the Graph, you are seeing the classbalancing output of the $predict() call. Note that, during prediction, PipeOpClassBalancing does not modify the data at all, since the whole pipeline is required to make one prediction for each input sample. Removing samples during prediction would break this.
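To illustrate the train/predict asymmetry, here is a minimal sketch (the task and the classbalancing parameter values are placeholders, not taken from your code):

library(mlr3)
library(mlr3pipelines)

task <- tsk("german_credit")  # any imbalanced classification task
pop <- po("classbalancing", ratio = 1, reference = "minor")

# $train() removes rows to balance the class distribution:
pop$train(list(task))[[1]]$nrow   # fewer rows than task$nrow

# $predict() passes the data through unchanged, since every
# input sample must receive a prediction:
pop$predict(list(task))[[1]]$nrow  # same as task$nrow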

If you want to see the state of the Graph after training but before prediction in a resampling iteration, you can currently

debugonce(mlr3:::workhorse)

and then run your resample() call, stepping until just past the line that starts with learner = learner_train(learner$clone(), .... Here you will notice that learner$graph$pipeops$classbalancing$.result$output has around 491 rows (0.7 * 702), with some random variation, since the exact count depends on the number of minor class samples that make it into the training set.
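Put together, the workaround looks roughly like this (a sketch; task_lr and graph_learner stand in for the objects from your snippet, and it assumes the Graph was built with keep_results enabled so that $.result is populated):

debugonce(mlr3:::workhorse)
resample(task_lr, graph_learner, rsmp("holdout", ratio = 0.7))
# In the browser, step with 'n' until just past the line beginning
#   learner = learner_train(learner$clone(), ...
# then inspect the intermediate training output:
learner$graph$pipeops$classbalancing$.result$output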

I see that this workaround is a bit tedious, and I will think about making this more convenient in #730.

mb706 commented on June 12, 2024

To get at your question,

Is there a way to ensure the downsampling is done before resampling? One of the purposes of also doing the resampling is to consider the effect of the downsampling.

You could call the downsampling PipeOp manually, e.g. using

task_lr_down <- po("classbalancing", ...)$train(list(task_lr))[[1]]

and then call resample() with that. We are thinking about making this invocation more convenient in the future.
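For example, a sketch with placeholder learner, resampling, and classbalancing parameters:

library(mlr3learners)  # for lrn("classif.log_reg")

task_lr_down <- po("classbalancing", ratio = 1, reference = "minor")$train(list(task_lr))[[1]]
rr <- resample(task_lr_down, lrn("classif.log_reg"), rsmp("cv", folds = 5))
rr$aggregate(msr("classif.acc"))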

However, make sure you know what you are doing here, methodology-wise! (I am not sure how aware you are of the following, so apologies if you already know this.) You use resampling to estimate the performance of a machine learning training/inference workflow (consisting of preprocessing such as subsampling or feature encoding, and finally fitting and inference of the "Learner" model) on a given dataset. Subsampling the data and then running resample() would therefore answer the question "what if I ran my method in a world where the data was more evenly balanced", not "how does subsampling influence the performance of my method".

E.g. if you measure accuracy and use lrn("classif.featureless") (majority prediction), you will get an accuracy of (I think) around 0.999954 using your resample() call above with the extremely imbalanced dataset. If you create task_lr_down and resample on that ("downsampling before resampling"), your resulting accuracy will be around 0.833. I would say the difference between these numbers is not the "effect of downsampling", but the "effect of having more or less imbalanced data" (simulated through downsampling).

Methodologically, things can get much worse when you do other preprocessing operations "before" resampling (e.g. imputation), since these risk leaking information about your test set into the training set. That can give you a severely optimistic bias.
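A sketch of that featureless comparison (the exact numbers will vary with your data and the resampling used; task_lr_down is the downsampled task from above):

lrn_fl <- lrn("classif.featureless")  # predicts the majority class

# On the original, extremely imbalanced task: accuracy near 1
resample(task_lr, lrn_fl, rsmp("cv", folds = 3))$aggregate(msr("classif.acc"))

# On the downsampled task: accuracy near the (new) majority share
resample(task_lr_down, lrn_fl, rsmp("cv", folds = 3))$aggregate(msr("classif.acc"))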

jpconnel commented on June 12, 2024

Thank you for the detailed description of what is going on and the meaning of .result - that makes a lot more sense now!

I hadn't thought about performing the pipe operation beforehand on the task - that is a great solution. In the meantime I had found a workaround by performing the downsampling outside mlr3 and generating a ResampleResult using as_result_data(), but the solution described above is much cleaner.

And thank you for the comments and suggestions regarding methodology - as you showed, this is very important to keep in mind throughout, especially with imbalanced datasets.
