The notebook hosted in this repository summarizes the findings of one of the first papers to leverage deep neural network (DNN) explainability techniques for pruning and quantization.
The paper introduces a pipeline for identifying expendable parameters within a DNN by estimating how strongly each parameter influences the network's decisions. This approach marks a notable advance in efficient neural network design, shifting the focus toward understanding the network's behavior.
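The paper's criterion builds on Layer-wise Relevance Propagation (LRP): each unit is scored by the relevance attributed to it, and low-relevance units are pruned first. As a rough illustration of that idea (a toy NumPy sketch with hypothetical layer sizes and the LRP-ε rule, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network (hypothetical sizes, for illustration only)
W1 = rng.normal(size=(4, 6))   # input (4) -> hidden (6)
W2 = rng.normal(size=(6, 3))   # hidden (6) -> output (3)

def forward(x):
    h = np.maximum(x @ W1, 0.0)          # hidden activations
    return h, h @ W2                     # logits

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs a (LRP-epsilon rule)."""
    z = a @ W
    z = z + eps * np.sign(z + 1e-12)     # stabilizer avoids division by zero
    s = R_out / z
    return a * (s @ W.T)

# Accumulate per-hidden-unit relevance over a small batch of random inputs
X = rng.normal(size=(8, 4))
scores = np.zeros(6)
for x in X:
    h, logits = forward(x)
    R_out = np.zeros(3)
    R_out[np.argmax(logits)] = logits.max()  # start relevance at the predicted class
    scores += np.abs(lrp_epsilon(h, W2, R_out))

# Prune the k hidden units with the lowest total relevance
k = 2
pruned = np.argsort(scores)[:k]
W1[:, pruned] = 0.0
W2[pruned, :] = 0.0
```

In practice the paper applies such relevance scores to full convolutional networks and re-trains after pruning; the sketch only shows the scoring-then-ranking step on a toy dense layer.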
The notebook is structured to guide readers through the key concepts and methodology of the explainability method used, before turning to its application to network optimization. Its purpose is to offer a comprehensive overview without delving into executable code, focusing instead on the theoretical and conceptual underpinnings of the research.
Yeom, S.-K., Seegerer, P., Lapuschkin, S., Binder, A., Wiedemann, S., Müller, K.-R., & Samek, W. (2021). Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning. Pattern Recognition, 115, 107899. https://doi.org/10.1016/j.patcog.2021.107899. Preprint available at https://arxiv.org/abs/1912.08881.