Local function approximation (LFA) framework

This repository contains code to reproduce results in our NeurIPS 2022 publication "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations".

Summary

Under the local function approximation (LFA) framework, an explanation is a simple model that approximates a complex model over a local neighborhood with respect to a loss function. The LFA framework unifies eight diverse, popular post hoc explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, SmoothGrad, Gradient x Input, and Integrated Gradients). Using the LFA framework, we show that no single explanation method can perform optimally over every local neighborhood, calling for a principled approach to selecting among methods. To this end, we set forth a guiding principle: a method is effective if it performs faithful local function approximation. Using the LFA framework, we determine the conditions under which each existing explanation method is effective. If no existing method is effective in a given situation, the LFA framework also provides a way to design novel methods (by specifying an appropriate model class, local neighborhood, and loss function) that are tailored to that situation and satisfy the guiding principle.
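For intuition, below is a minimal, self-contained sketch of local function approximation: fitting a linear model to a black-box model over a Gaussian neighborhood under a squared-error loss (in the spirit of LIME/C-LIME). It is illustrative only and is not the implementation in the lfa folder.

```python
# Illustrative sketch of local function approximation (not the repository's implementation):
# fit a simple (linear) model to a complex model f over a local neighborhood of x0
# by minimizing a squared-error loss on perturbed samples.
import numpy as np

def local_linear_approximation(f, x0, n_perturbs=1000, sigma=0.1, rng=None):
    rng = np.random.default_rng(rng)
    # Local neighborhood: Gaussian perturbations around the point being explained.
    X = x0 + sigma * rng.standard_normal((n_perturbs, x0.shape[0]))
    y = f(X)
    # Simple model class: linear (with intercept). Loss function: squared error.
    X_aug = np.hstack([X, np.ones((n_perturbs, 1))])
    coefs, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    weights, intercept = coefs[:-1], coefs[-1]
    return weights, intercept  # the weights serve as feature attributions

# Example: explain a toy nonlinear "black box" at a point of interest.
f = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
weights, intercept = local_linear_approximation(f, np.array([0.5, -1.0]))
```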

Usage

To reproduce the results, navigate into the repository and follow the steps below.

1. Generate explanations for individual model predictions

  • Run $ python experiments/generate_explanations.py.
  • Explanations are generated for the individual predictions of each model (four regression models and four classification models) using each explanation method (LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients). Each explanation is computed in two ways: with the existing approach (implemented by the Captum library) and with the LFA framework (implemented in the lfa folder); a minimal Captum sketch is shown after this list.
  • Explanations use 1000 perturbations per data point. If running on a local machine, use a smaller number of perturbations by changing line 165 (n_perturbs_list = [1000]) in experiments/generate_explanations.py.
  • Explanations are saved in experiments/results.
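
For reference, the snippet below is a minimal, hypothetical sketch of how one such explanation (Integrated Gradients, via the "existing approach") can be computed with Captum. The model and input are placeholders and do not correspond to the models or data used by experiments/generate_explanations.py.

```python
# Illustrative only: computing an Integrated Gradients explanation for a single
# prediction with Captum. The model and input below are placeholders.
import torch
from captum.attr import IntegratedGradients

model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x = torch.randn(1, 10)  # one data point with 10 features

ig = IntegratedGradients(model)
attributions = ig.attribute(x, baselines=torch.zeros_like(x))
print(attributions)  # one attribution score per input feature
```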

2. Analyze explanations

  • Run $ python analysis/analyze_explanations.py.
  • Figures are saved in analysis/figures. These are the figures that appear in the paper.

Citation

@inproceedings{lfa2022,
    title={Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations},
    author={Han, Tessa and Srinivas, Suraj and Lakkaraju, Himabindu},
    booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
    year={2022}
}
