Comments (3)
I think I misunderstood your question at first and typed out a response that does not answer it. However, I am going to post it as a reference for people who may come across this and wonder why the dimension of `$.result` is so much larger in the second code snippet.
What is happening here is that `resample()` runs the Graph twice: once for training, and once for prediction / inference on the "test" set. Since `$.result` always contains the result of the last invocation of the Graph, you are seeing the `classbalancing` output of the `$predict()` call. Note that, during prediction, `PipeOpClassBalancing` does not modify the data at all, since the whole pipeline is required to make one prediction for each input sample; removing samples during prediction would break this.
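For illustration, here is a minimal sketch of that train-then-predict overwriting behaviour. The `german_credit` task and `classif.rpart` learner are stand-ins I picked, not the original question's setup, and `keep_results` is what makes the Graph store intermediate results in `$.result` in the first place:

```r
library(mlr3)
library(mlr3pipelines)

# stand-in pipeline: class balancing followed by a decision tree
graph <- po("classbalancing", ratio = 1) %>>% lrn("classif.rpart")
graph$keep_results <- TRUE  # store each PipeOp's output in $.result

task <- tsk("german_credit")

graph$train(task)
# after training: the rebalanced training output
graph$pipeops$classbalancing$.result$output

graph$predict(task)
# after prediction: $.result is overwritten with the *unmodified* input,
# since PipeOpClassBalancing is a no-op during prediction
graph$pipeops$classbalancing$.result$output
```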
If you want to see the state of the Graph after training but before prediction in a resampling iteration, you can currently call `debugonce(mlr3:::workhorse)`, then run your `resample()` call and step until just after the line `learner = learner_train(learner$clone(), ...`. There you will notice that `learner$graph$pipeops$classbalancing$.result$output` has around 491 rows (0.7 * 702), with some random variation, since this depends on the number of minority class samples that make it into the training set.
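A rough sketch of that debug session (the GraphLearner, task, and holdout resampling here are placeholder assumptions, not the code from the original question):

```r
library(mlr3)
library(mlr3pipelines)

# placeholder pipeline; substitute your own GraphLearner and task
glrn <- GraphLearner$new(po("classbalancing", ratio = 1) %>>% lrn("classif.rpart"))
task <- tsk("german_credit")
# depending on your mlr3pipelines version, $.result may only be stored
# when keep_results is enabled on the Graph:
glrn$graph$keep_results <- TRUE

debugonce(mlr3:::workhorse)
resample(task, glrn, rsmp("holdout"))

# inside the browser, step with `n` until just after
#   learner = learner_train(learner$clone(), ...)
# has run, then inspect the training-time output of the balancing step:
#   learner$graph$pipeops$classbalancing$.result$output
```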
I see that this workaround is a bit tedious, and I will think about making this more convenient in #730.
To get at your question:

> Is there a way to ensure the downsampling is done before resampling? One of the purposes of also doing the resampling is to consider the effect of the downsampling.
You could call the downsampling PipeOp manually, e.g. using

```r
task_lr_down <- po("classbalancing", ...)$train(list(task_lr))[[1]]
```

and then call `resample()` with that. We are thinking about making this invocation more convenient in the future.
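Spelled out as a runnable sketch (the `german_credit` task, the rpart learner, and `ratio = 1` are illustrative assumptions standing in for your actual setup):

```r
library(mlr3)
library(mlr3pipelines)

task_lr <- tsk("german_credit")  # stand-in for your task

# train the balancing PipeOp once, outside of any resampling
task_lr_down <- po("classbalancing", ratio = 1)$train(list(task_lr))[[1]]

# resample on the already-rebalanced task
rr <- resample(task_lr_down, lrn("classif.rpart"), rsmp("cv", folds = 5))
rr$aggregate(msr("classif.acc"))
```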
However, make sure you know what you are doing here, methodology-wise! (I am not sure how aware you are of the following, so if you already know this, sorry for wasting your time.) You use resampling to estimate the performance of a machine learning training-inference workflow (consisting of preprocessing like subsampling, feature encoding etc., and finally fitting + inference of the "Learner" model) on a given dataset. Subsampling the data and then running `resample()` would therefore answer the question "what if I ran my method in a world where the data was more evenly balanced", not "how does subsampling influence the performance of my method".

E.g. if you measure accuracy and use `lrn("classif.featureless")` (majority prediction), you will get an accuracy of (I think) around 0.999954 using your `resample()` call above with the extremely imbalanced dataset. If you create `task_lr_down` and resample on that ("downsampling before resampling"), your resulting accuracy will be around 0.833 (a sketch of this comparison follows below). I would call the difference between these numbers not the "effect of downsampling", but the "effect of having more or less imbalanced data" (simulated through downsampling).

Methodologically, things can get much worse when you do other preprocessing operations "before" resampling (e.g. imputation), since these risk leaking information about your test set into the training set. That can give you a severely optimistic bias.
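A hedged sketch of the featureless comparison mentioned above. `german_credit` is only mildly imbalanced, so the numbers differ from those quoted, but the direction of the effect is the same:

```r
library(mlr3)
library(mlr3pipelines)

task <- tsk("german_credit")  # ~70% majority class; your task is far more extreme
learner <- lrn("classif.featureless")
resampling <- rsmp("cv", folds = 3)

# (a) resampling the original task:
# featureless accuracy ~ majority class frequency (~0.70 here)
rr_orig <- resample(task, learner, resampling)
rr_orig$aggregate(msr("classif.acc"))

# (b) "downsampling before resampling": with ratio = 1 and default settings
# all classes are balanced to the mean class count, so featureless
# accuracy drops to ~0.5 (your reference/adjust settings may differ)
task_down <- po("classbalancing", ratio = 1)$train(list(task))[[1]]
rr_down <- resample(task_down, learner, resampling)
rr_down$aggregate(msr("classif.acc"))
```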
Thank you for the detailed description of what is going on and the meaning of `$.result` - that makes a lot more sense now!

I hadn't thought about performing the pipe operation beforehand on the task - that is a great solution. In the meantime I found a workaround by performing the downsampling outside mlr3 and generating a ResampleResult using `as_result_data()`, but the solution described above is much cleaner.

And thank you for the comments and suggestions regarding methodology - especially with imbalanced datasets this is (as you showed) very important to keep in mind throughout.