Materials for my talk at Datathon 2019 (Minsk)
keywords: black box, interpretability, LIME, SHAP, ELI5, InterpretML, local explanation, feature importance
We discuss the problem of interpreting “black boxes” — models that are commonly used to solve a wide range of applied problems. We cover why this topic matters, what methods and tools exist, how to apply them to supervised learning problems (customer churn prediction), and the strengths and weaknesses of these approaches.
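As a taste of the feature-importance ideas covered in the talk, here is a minimal sketch of permutation importance in pure Python. The "black box" below is a hypothetical hand-coded scoring rule standing in for a trained churn model; feature names, weights, and data are all invented for illustration, not taken from the talk materials.

```python
import random

# Hypothetical "black box": a fixed scoring rule standing in for a trained
# churn model (assumption for this sketch: thresholded weighted sum).
def black_box_predict(row):
    # feature 0 matters a lot, feature 1 a little, feature 2 not at all
    return 1 if 2.0 * row[0] + 0.5 * row[1] > 1.0 else 0

random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(500)]
y = [black_box_predict(row) for row in X]

def accuracy(model, X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature):
    """Importance = drop in accuracy after shuffling one feature column."""
    base = accuracy(model, X, y)
    shuffled = [row[:] for row in X]          # copy rows, keep X intact
    col = [row[feature] for row in shuffled]
    random.shuffle(col)                        # break the feature/target link
    for row, v in zip(shuffled, col):
        row[feature] = v
    return base - accuracy(model, shuffled, y)

for f in range(3):
    print(f"feature {f}: importance = {permutation_importance(black_box_predict, X, y, f):.3f}")
```

Shuffling an influential feature degrades accuracy, so its importance comes out positive, while the unused feature scores zero. Libraries such as SHAP and ELI5 implement far more refined versions of this idea.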
Slides: PDF
Code: Notebook | HTML (with outputs)
If you use conda:
- create and activate your working environment:

  ```shell
  conda create --name myenv python
  conda activate myenv
  conda install nb_conda                      # to use the env with Jupyter
  conda install -c conda-forge ipywidgets     # add widgets
  ```
- install the required packages:

  ```shell
  pip install -r requirements.txt
  ```