A central place to test model-agnostic, local interpretable explanation methods, taken from the following libraries:
- LIME (for LIME explanations),
- Anchor (for Anchor explanations),
- DeepExplain (for gradient-based methods)
It also gives an empirical notion of explanation robustness (currently maintained for image data only).
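For intuition, LIME-style methods explain one prediction by fitting a simple, locally weighted linear surrogate around it. Below is a minimal numpy-only sketch of that idea; the function name, the binary on/off perturbation, and the proximity-weighting choices are illustrative assumptions, not the LIME library's actual API.

```python
import numpy as np

def lime_style_explanation(predict_fn, instance, n_samples=500, seed=0):
    """Toy LIME-style local surrogate: perturb features on/off, fit a
    weighted linear model to the black-box outputs, return its weights
    as per-feature importances. Purely illustrative."""
    rng = np.random.default_rng(seed)
    d = instance.shape[0]
    # Binary masks: 1 keeps a feature, 0 zeroes it out (a common perturbation choice).
    masks = rng.integers(0, 2, size=(n_samples, d))
    perturbed = masks * instance  # broadcast: zero out masked features
    preds = np.array([predict_fn(x) for x in perturbed])
    # Proximity weights: samples closer to the original instance count more.
    distances = np.sqrt(((masks - 1) ** 2).sum(axis=1))
    weights = np.exp(-(distances ** 2) / (0.75 * d))
    # Weighted least squares for the linear surrogate coefficients.
    X = np.hstack([masks, np.ones((n_samples, 1))])  # add intercept column
    w = np.sqrt(weights)[:, None]
    coef, *_ = np.linalg.lstsq(w * X, np.sqrt(weights) * preds, rcond=None)
    return coef[:-1]  # drop the intercept
```

On a black box that is already linear, e.g. `f(x) = 3*x[0] - 2*x[1]`, the surrogate recovers the coefficients exactly, which is a quick sanity check for the sketch.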
The dependencies are those of the above libraries.
imagenet.py (gives the explanations and robustness of the different methods on the ImageNet dataset, using InceptionV3)
mnist.py (gives the explanations and robustness of the different methods on the MNIST dataset, using the NN from sample_nn.py)
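One common way to make the robustness notion empirical is to perturb the input slightly and measure how much the explanation's top-k features change. The sketch below shows that idea with a top-k Jaccard overlap; the function names and the metric choice are assumptions for illustration, not necessarily what these scripts compute.

```python
import numpy as np

def topk_overlap(attr_a, attr_b, k=10):
    """Jaccard overlap of the top-k most important features (by magnitude)
    of two attribution maps; 1.0 means identical top-k sets."""
    top_a = set(np.argsort(np.abs(attr_a).ravel())[-k:])
    top_b = set(np.argsort(np.abs(attr_b).ravel())[-k:])
    return len(top_a & top_b) / len(top_a | top_b)

def empirical_robustness(explain_fn, image, noise_scale=0.01,
                         n_trials=5, k=10, seed=0):
    """Mean top-k overlap between the explanation of `image` and the
    explanations of slightly noised copies: 1.0 is perfectly stable,
    values near 0.0 indicate an unstable explanation."""
    rng = np.random.default_rng(seed)
    base = explain_fn(image)
    scores = []
    for _ in range(n_trials):
        noisy = image + rng.normal(0.0, noise_scale, size=image.shape)
        scores.append(topk_overlap(base, explain_fn(noisy), k=k))
    return float(np.mean(scores))
```

Any of the explanation methods above can be plugged in as `explain_fn` (a callable mapping an image to an attribution map), which makes the robustness scores comparable across methods.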