Pruning by explaining: A novel criterion for deep neural network pruning

https://doi.org/10.1016/j.patcog.2021.107899
Open access under a Creative Commons license

Highlights

  • A novel criterion for efficiently pruning convolutional neural networks is introduced, inspired by explaining nonlinear classification decisions in terms of input variables.

  • The method builds on a technique from neural network interpretability: Layer-wise Relevance Propagation (LRP).

  • This is the first report to link the previously disconnected lines of interpretability and model compression research.

  • The method is tested on two popular convolutional neural network families and a broad range of benchmark datasets under two different scenarios.

Abstract

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming not to sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: the most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive with or better than state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which the data of the target task is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost on the order of gradient computation and is comparatively simple to apply without the need for tuning hyperparameters for pruning.
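
To make the abstract's criterion concrete, below is a minimal, self-contained PyTorch sketch of LRP-style filter pruning. It is an illustration under simplifying assumptions, not the authors' implementation: it exploits the fact that for bias-free ReLU networks, LRP-0 relevance coincides with activation times gradient, and it merely masks (zeroes) the least relevant filters rather than structurally removing them. The TinyCNN architecture, the function names, and the pruning fraction are hypothetical.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Small bias-free ReLU CNN used only to demonstrate the sketch."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1, bias=False), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1, bias=False), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes, bias=False)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def filter_relevances(model, x):
    """Per-filter relevance: activation * gradient of the predicted class
    score, summed over batch and spatial dimensions (equal to LRP-0 for
    bias-free ReLU networks)."""
    acts, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            def hook(module, inp, out, store=acts):
                out.retain_grad()   # keep .grad on this non-leaf tensor
                store.append(out)
            hooks.append(m.register_forward_hook(hook))
    logits = model(x)
    score = logits.gather(1, logits.argmax(1, keepdim=True)).sum()
    model.zero_grad()
    score.backward()                # fills a.grad for each stored activation
    for h in hooks:
        h.remove()
    return [(a * a.grad).sum(dim=(0, 2, 3)) for a in acts]

def prune_least_relevant(model, relevances, frac=0.25):
    """Zero the weights of the lowest-relevance filters in each conv layer;
    masking stands in for the structural removal used in practice."""
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    for conv, rel in zip(convs, relevances):   # assumes matching order
        k = int(frac * rel.numel())
        idx = rel.argsort()[:k]                # least relevant filters
        with torch.no_grad():
            conv.weight[idx] = 0.0

model = TinyCNN().eval()
x = torch.randn(8, 3, 32, 32)                  # stand-in for scarce task data
prune_least_relevant(model, filter_relevances(model, x), frac=0.25)

In the paper's iterative setting, this score-and-prune step would be repeated, optionally with fine-tuning in between; the cost of one step is essentially a forward and a backward pass, consistent with the abstract's claim of gradient-order computational cost.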

Keywords

Pruning
Layer-wise relevance propagation (LRP)
Convolutional neural network (CNN)
Interpretation of models
Explainable AI (XAI)

Seul-Ki Yeom received a Ph.D. degree in Brain-Computer Interfacing from Korea University in 2018. From 2018 to 2020, he was associated with the Machine Learning Group at Technische Universität Berlin. Since 2020, he has been a Senior Research Engineer at Nota.ai. His research interests include brain-computer interfaces, machine learning, and model compression.

Philipp Seegerer received an M.Sc. degree in Medical Image and Data Processing from Friedrich-Alexander-Universität Erlangen-Nürnberg in 2017. He is currently a Doctoral Researcher in the Machine Learning Group at Technische Universität Berlin and has been associated with Aignostics as a Machine Learning Engineer since 2019. His research interests are machine learning and medical image and data analysis, in particular computational pathology.

Sebastian Lapuschkin received an M.Sc. degree in Computer Science in 2013 and a Ph.D. degree from Technische Universität Berlin in 2018. He is currently the Head of the Explainable AI Group at the Fraunhofer Heinrich Hertz Institute. His research interests are explainability and efficiency in computer vision, machine learning, and data analysis.

Alexander Binder obtained a Dr. rer. nat. degree from Technical University Berlin in 2013. He is currently an Associate Professor in the Institute of Informatics at the University of Oslo. He was an Assistant Professor at SUTD from 2015 to 2020. His research interests include computer vision, machine learning, explaining non-linear predictions, and medical applications.

Simon Wiedemann received an M.Sc. degree in Applied Mathematics from Technische Universität Berlin in 2018. He is currently a Research Associate in the Department of Artificial Intelligence at the Fraunhofer Heinrich Hertz Institute. His research interests include information theory and efficient machine learning, in particular compression and efficient inference and training of neural networks.

Klaus-Robert Müller (Ph.D. 1992) has been a Professor of computer science at TU Berlin since 2006 and is co-director of the Berlin Big Data Center. He won the 1999 Olympus Prize of the German Pattern Recognition Society, the 2006 SEL Alcatel Communication Award, and the 2014 Science Prize of Berlin. Since 2012, he has been an elected member of the German National Academy of Sciences Leopoldina.

Wojciech Samek received a Diploma degree in Computer Science from Humboldt University Berlin in 2010 and a Ph.D. degree in Machine Learning from Technische Universität Berlin in 2014. He currently directs the Department of Artificial Intelligence at the Fraunhofer Heinrich Hertz Institute. His research interests include neural networks, interpretability, and federated learning.