R/vimp: inference on algorithm-agnostic variable importance


Software authors: Brian Williamson, Jean Feng, and Charlie Wolock

Methodology authors: Brian Williamson, Peter Gilbert, Noah Simon, Marco Carone, Jean Feng

Python package: https://github.com/bdwilliamson/vimpy

Introduction

In predictive modeling applications, it is often of interest to determine the relative contribution of subsets of features in explaining an outcome; this is often called variable importance. It is useful to consider variable importance as a function of the unknown, underlying data-generating mechanism rather than the specific predictive algorithm used to fit the data. This package provides functions that, given fitted values from predictive algorithms, compute algorithm-agnostic estimates of population variable importance, along with asymptotically valid confidence intervals for the true importance and hypothesis tests of the null hypothesis of zero importance.

Specifically, vimp supports the following types of variable importance: the difference in population classification accuracy, the difference in population area under the receiver operating characteristic curve (AUC), the difference in population deviance, and the difference in population R-squared.

More detail may be found in our papers on R-squared-based variable importance, general variable importance, and general Shapley-based variable importance.

These methods work on both low-dimensional and high-dimensional data.
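As a rough sketch of how a measure is selected: each measure has a wrapper function (e.g., vimp_rsquared, vimp_auc, vimp_accuracy, vimp_deviance), and the general vim() function takes a type argument naming the measure. The snippet below is illustrative only; the data are made up, and the exact arguments should be checked against the package documentation.

library("SuperLearner")
library("vimp")

# simulate a binary outcome that depends only on x1
set.seed(1234)
n <- 500
x <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
y <- rbinom(n, 1, plogis(x$x1))

# AUC-based importance of x1; other options for "type" include
# "accuracy", "deviance", and "r_squared"
est <- vim(Y = y, X = x, indx = 1, type = "auc", run_regression = TRUE,
           SL.library = c("SL.mean", "SL.glm"), family = binomial())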

Issues

If you encounter any bugs or have any specific feature requests, please file an issue.

R installation

You may install a stable release of vimp from CRAN via install.packages("vimp"). You may also install a stable release of vimp from GitHub via devtools by running the following code (replace v2.1.0 with the tag for the specific release you wish to install):

## install.packages("devtools") # only run this line if necessary
devtools::install_github(repo = "bdwilliamson/vimp@v2.1.0")

You may install a development release of vimp from GitHub via devtools by running the following code:

## install.packages("devtools") # only run this line if necessary
devtools::install_github(repo = "bdwilliamson/vimp")
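
Either way, you can check the installed version from within R:

packageVersion("vimp")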

Example

This example shows how to use vimp in a simple setting with simulated data, using SuperLearner to estimate the conditional mean functions and specifying the importance measure of interest as the R-squared-based measure. For more examples and detailed explanation, please see the vignette.

# load required functions and libraries
library("SuperLearner")
library("vimp")
library("xgboost")
library("glmnet")
library("randomForest")

# -------------------------------------------------------------
# problem setup
# -------------------------------------------------------------
# set a seed before generating data so the example is reproducible
set.seed(20231213)
# set up the data
n <- 100
p <- 2
s <- 1 # desire importance for X_1
x <- as.data.frame(replicate(p, runif(n, -1, 1)))
y <- (x[, 1])^2 * (x[, 1] + 7 / 5) + (25 / 9) * (x[, 2])^2 + rnorm(n, 0, 1)

# -------------------------------------------------------------
# get variable importance!
# -------------------------------------------------------------
# set up the learner library, consisting of the mean, boosted trees,
# elastic net, and random forest
learner.lib <- c("SL.mean", "SL.xgboost", "SL.glmnet", "SL.randomForest")
# get the variable importance estimate, SE, and CI;
# we use only 2 cross-validation folds (V = 2) so the example runs quickly;
# in practice, you should use more
est <- vimp_rsquared(Y = y, X = x, indx = s, SL.library = learner.lib, V = 2)
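
The returned object has a print method, and its components can be inspected directly (a sketch, assuming the standard vim object fields):

print(est)  # point estimate, SE, CI, and hypothesis test result
est$est     # estimated R-squared-based importance of X_1
est$ci      # confidence interval for the importance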

Citation

After using the vimp package, please cite the following (for R-squared-based variable importance):

  @article{williamson2020,
    author={Williamson, BD and Gilbert, PB and Carone, M and Simon, N},
    title={Nonparametric variable importance assessment using machine learning techniques},
    journal={Biometrics},
    year={2020},
    doi={10.1111/biom.13392}
  }

or the following (for general variable importance parameters):

  @article{williamson2021,
    author={Williamson, BD and Gilbert, PB and Simon, NR and Carone, M},
    title={A general framework for inference on algorithm-agnostic variable importance},
    journal={Journal of the American Statistical Association},
    year={2021},
    doi={10.1080/01621459.2021.2003200}
  }

or the following (for Shapley-based variable importance):

  @inproceedings{williamson2020shapley,
    author={Williamson, BD and Feng, J},
    title={Efficient nonparametric statistical inference on population feature importance using {S}hapley values},
    year={2020},
    booktitle={Proceedings of the 37th International Conference on Machine Learning},
    series={Proceedings of Machine Learning Research},
    volume={119},
    pages={10282--10291},
    url={http://proceedings.mlr.press/v119/williamson20a.html}
  }

License

The contents of this repository are distributed under the MIT license. See below for details:

MIT License

Copyright (c) 2018-present Brian D. Williamson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Logo

The logo was created using hexSticker and lisa. Many thanks to the maintainers of these packages and the Color Lisa team.


vimp's Issues

Debug option for sp_vim

Option to print status to console during fitting (so that a user can track progress).

The easiest fix is a bunch of print statements (see the sketch after this list):

  • flag "verbose"
  • print start of each model fit (and what I'm fitting)
  • print when I'm starting WLS
  • pass "verbose" option to SuperLearner control

More ambitious: a status bar
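
A minimal sketch of the print-statement approach (the helper and verbose argument below are hypothetical, not sp_vim's actual interface):

log_progress <- function(msg, verbose = FALSE) {
  # print a status message only when verbose printing is requested
  if (verbose) {
    message("sp_vim: ", msg)
  }
}

log_progress("fitting model for feature subset {1, 3}", verbose = TRUE)
log_progress("starting WLS step", verbose = TRUE)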

Population weights in VIMP

Hi all,

I'm using this package for a project of my own, and I really appreciate you working on it. My question is whether VIMP supports population weights.

For an example, suppose that I take a stratified sample from some population of units (where I know the true sampling probabilities). It would be very straightforward to estimate the variable importance in the sample based on these data. Of course, I actually care about the population-level variable importance, for which the sample-level estimate would not be fully informative. If I were estimating a mean (or a linear model), I would simply add weights equal to the inverse of my sampling probabilities and obtain unbiased estimates of my population quantities. Is this supported by VIMP out of the box? Is the method even amenable to this?
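
As a toy illustration of the weighting I have in mind (made-up data, nothing vimp-specific):

# stratified sampling with known probabilities; weighting by the inverse
# sampling probability recovers the population mean
set.seed(1)
pop <- rnorm(1e5, mean = 2)
samp_prob <- ifelse(pop > 2, 0.8, 0.2)  # over-sample the upper stratum
sampled <- runif(length(pop)) < samp_prob
mean(pop[sampled])                                       # biased upward
weighted.mean(pop[sampled], w = 1 / samp_prob[sampled])  # approximately 2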

I was wondering whether the ipc_weights argument to cv_vim might be used for this purpose, but it seems like this is specific to coarsening (and that the argument may not even be used if C is always equal to one).

Thank you for your time and effort on VIMP!

Allow for pre-computed IPW weights

This adds functionality requested in #14, where population weights (or pre-estimated weights) are available. Involves adding a new argument to vim and cv_vim and updating calls to, e.g., est_predictiveness_cv.

`cv_vim` arguments `cross_fitted_f1` and `cross_fitted_f2` are swapped

Hi again, I have a further issue I ran into while working with VIMP :)

The first seems to be a straightforward typo somewhere in the cv_vim function, in which the arguments cross_fitted_f1 and cross_fitted_f2 are swapped. The second may just be a bug in my thinking about your method.

For both, here is a repro on simulated data:

library(vimp)
library(purrr)
library(dplyr)
library(estimatr)

set.seed(100)
n <- 1000
df <- dplyr::tibble(
    id = 1:n,
    x1 = rnorm(n),
    x2 = rnorm(n),
    x3 = runif(n),
    y = x1 + 0.25 * x3 + rnorm(n),
    split_id = sample(10, n, replace = TRUE)
)

x_df <- select(df, x1, x2, x3)

full_lm <- lm_robust(y ~ x1 + x2 + x3, df)
print(summary(full_lm))
Call:
lm_robust(formula = y ~ x1 + x2 + x3, data = df)

Standard error type:  HC2 

Coefficients:
            Estimate Std. Error t value   Pr(>|t|) CI Lower CI Upper  DF
(Intercept) -0.08031    0.06218 -1.2914  1.969e-01 -0.20233  0.04172 996
x1           1.02250    0.03096 33.0289 3.661e-162  0.96175  1.08325 996
x2          -0.01543    0.03340 -0.4619  6.442e-01 -0.08098  0.05012 996
x3           0.37734    0.10817  3.4882  5.075e-04  0.16506  0.58961 996

Multiple R-squared:  0.5237 ,   Adjusted R-squared:  0.5223 
F-statistic: 363.7 on 3 and 996 DF,  p-value: < 2.2e-16

As expected, x1 and x3 are nonzero and x2 is null. The data is very informative about these relationships.

validRows <- purrr::map(unique(df$split_id), ~which(.x == df$split_id))
cv_ctl <- SuperLearner::SuperLearner.CV.control(V = length(validRows), validRows = validRows)

full_fit <- SuperLearner::CV.SuperLearner(
    Y = df$y,
    X = x_df,
    SL.library = c("SL.glm"),
    cvControl = cv_ctl
)

df$full_preds <- full_fit$SL.predict

df %>%
group_by(split_id) %>%
summarize(rsq = 1 - mean((y - full_preds)^2) / var(y)) -> rsq_full

results <- list()
for (cov in names(x_df)) {
    idx <- which(names(x_df) == cov)
    red_fit <- SuperLearner::CV.SuperLearner(
        Y = full_fit$SL.predict,
        X = x_df[, -idx, drop = FALSE],
        SL.library = c("SL.glm"),
        cvControl = cv_ctl
    )

    df$red_preds <- red_fit$SL.predict

    df %>% 
    group_by(split_id) %>%
    summarize(rsq = 1 - mean((y - red_preds)^2) / var(y)) -> rsq_reduced

    result <- vimp::cv_vim(
        Y = df$y,
        type = "r_squared",
        cross_fitted_f1 = full_fit$SL.predict,
        cross_fitted_f2 = red_fit$SL.predict,
        SL.library = c("SL.glm"),
        cross_fitting_folds = df$split_id,
        run_regression = FALSE,
        scale_est = TRUE,
        sample_splitting = FALSE
    )
    vimp_rough <- inner_join(rsq_full, rsq_reduced, by = "split_id") %>% summarize(rsq=mean(rsq.x - rsq.y))
    results[[cov]] <- mutate(result$mat, term = cov, rough_vimp = unlist(vimp_rough))
}

bind_rows(!!!results)

This gives the result:

# A tibble: 3 × 9
  s        est      se     cil     ciu test   p_value term  rough_vimp
  <chr>  <dbl>   <dbl>   <dbl>   <dbl> <lgl>    <dbl> <chr>      <dbl>
1 1     0.184  0.0275  0.130   0.238   TRUE  1.20e-11 x1      0.507   
2 1     0      0.00115 0       0.00225 FALSE 1.00e+ 0 x2     -0.000690
3 1     0.0202 0.00532 0.00978 0.0306  TRUE  7.22e- 5 x3      0.00317

That is, x1 and x3 are estimated to have zero importance.

I get estimates that are appropriately non-zero for x1 and x3 when I swap the arguments cross_fitted_f1 and cross_fitted_f2 in the call to cv_vim and rerun the same code:

# A tibble: 3 × 9
  s        est      se     cil     ciu test   p_value term  rough_vimp
  <chr>  <dbl>   <dbl>   <dbl>   <dbl> <lgl>    <dbl> <chr>      <dbl>
1 1     0.184  0.0275  0.130   0.238   TRUE  1.20e-11 x1      0.507   
2 1     0      0.00115 0       0.00225 FALSE 1.00e+ 0 x2     -0.000690
3 1     0.0202 0.00532 0.00978 0.0306  TRUE  7.22e- 5 x3      0.00317

This seems like a simple case of having the two arguments swapped somewhere in your code (this can also be verified by setting scale_est = FALSE). I imagine this popped up at some point as you were adding functionality. Perhaps this would be worth a test-case to make sure this doesn't happen in the future?

I'm additionally a bit confused about why the "rough_vimp" measure doesn't more closely align with the results from cv_vim. My reading of your papers suggests that when I'm not doing sample splitting, the estimator is (essentially) the average of the within-split differences in R^2, computed like this. I'm specifically thinking about Algorithm 2 in your Biometrics paper and in your preprint (https://arxiv.org/abs/2004.03683). This also seems to align with your more specific discussion of Example 1 in Section 9 of the preprint. Am I missing something obvious here?

vimp_regression fails when all covariates are specified in "indx" and run_regression = TRUE

When all covariates are specified in "indx" (i.e., when importance of the full group of measured covariates is desired), vimp_regression with run_regression = TRUE fails.

This is essentially because an internal call to SuperLearner::SuperLearner tries to run a regression with no covariates.

For now, a quick (and equivalent) fix is to run the regressions outside of vimp_regression, and use vimp_regression with run_regression = FALSE.
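
A rough sketch of that workaround, reusing y, x, and learner.lib from the example above (the f1/f2 arguments are an assumption; check the documentation for your installed version of vimp):

library("SuperLearner")
library("vimp")
# fit the full regression manually
full_fit <- SuperLearner(Y = y, X = x, SL.library = learner.lib)
f1 <- full_fit$SL.predict
# with every covariate removed, the reduced "regression" is just the mean
f2 <- rep(mean(f1), length(f1))
est <- vimp_regression(Y = y, f1 = f1, f2 = f2, indx = seq_len(ncol(x)),
                       run_regression = FALSE)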
