The presentation draws on the work proposed in the 2017 paper "Interpretable Explanations of Black Boxes by Meaningful Perturbation" by Ruth Fong & Andrea Vedaldi.
Repo for Paper: https://github.com/ruthcfong/perturb_explanations
- 📄 Simonyan, K., Vedaldi, A., & Zisserman, A. (2014). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. ArXiv:1312.6034 [Cs]. http://arxiv.org/abs/1312.6034
- 📺 How Deep Neural Networks Work
- 📄 Activation Functions in Neural Networks
- 📄 What is Meta-Learning in Machine Learning
- 📄 Understanding Neural Networks: From Activation Function To Back Propagation
- 📄 Understanding Neural Networks
- 📺 Introduction to Optimization: Gradient Based Algorithms
- 📄 Molnar, Christoph. "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable", 2019, Chapter 10. https://christophm.github.io/interpretable-ml-book/
- 📄 CNN Heat Maps: Gradients vs. DeconvNets vs. Guided Backpropagation
Acknowledgment: thanks to @jacobgil for the PyTorch implementation of this paper's framework.
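For intuition, the core idea of the paper is to learn a mask m that blends the input with a perturbed (e.g. blurred) version, minimising the classifier's score plus a sparsity penalty on the deleted region: min_m f_c(x ⊙ m + x_blur ⊙ (1 − m)) + λ‖1 − m‖₁. The toy sketch below is NOT the paper's or @jacobgil's code; it uses a hypothetical linear "classifier" and hand-derived gradients purely to illustrate the optimisation, under the assumption that the blurred image carries no class evidence:

```python
import numpy as np

# Toy illustration of meaningful perturbation: learn a mask m in [0, 1]
# that deletes (blurs away) the pixels the classifier depends on.
x = np.ones(16)                       # stand-in "image" as a flat vector
x_blur = np.zeros_like(x)             # stand-in for a heavily blurred image
w = np.zeros(16)
w[3] = 5.0                            # toy linear classifier: only pixel 3 matters

lam = 0.05                            # weight of the sparsity penalty on (1 - m)
lr = 0.1                              # gradient-descent step size
m = np.ones(16)                       # start from the fully preserved image

for _ in range(200):
    # objective: w . (m * x + (1 - m) * x_blur) + lam * ||1 - m||_1
    grad_score = w * (x - x_blur)     # gradient of the score term w.r.t. m
    grad_reg = -lam * np.sign(1 - m)  # gradient of the sparsity term w.r.t. m
    m = np.clip(m - lr * (grad_score + grad_reg), 0.0, 1.0)

# The mask deletes exactly the pixel the classifier relies on,
# which is what makes the learned mask interpretable as a saliency map.
print(np.argmin(m))  # → 3
```

In the real method the classifier is a CNN, gradients come from autograd, and the paper adds a total-variation term and random jitter to keep the mask smooth and avoid adversarial artifacts.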