Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-03-04, DOI: arxiv-2103.03014
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus

Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks. Starting from a pre-trained network, the process is as follows: remove redundant parameters, retrain, and repeat while maintaining the same test accuracy. The result is a model that is a fraction of the size of the original with comparable predictive performance (test accuracy). Here, we reassess and evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well across a wide spectrum of "harder" metrics such as generalization to out-of-distribution data and resilience to noise. Across evaluations on varying architectures and data sets, we find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks. These results call into question the extent of genuine overparameterization in deep learning and raise concerns about the practicability of deploying pruned networks, specifically in the context of safety-critical systems, unless they are widely evaluated beyond test accuracy to reliably predict their performance. Our code is available at https://github.com/lucaslie/torchprune.
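The prune-retrain loop the abstract describes is easy to state concretely. Below is a minimal sketch in PyTorch using the framework's built-in L1 magnitude pruning; it is not the authors' torchprune implementation (see the repository linked above for that), and the helpers `train_one_epoch` and `evaluate_test_accuracy` are hypothetical placeholders.

```python
# Minimal sketch of the iterative prune-retrain loop, assuming hypothetical
# helpers `train_one_epoch(model)` and `evaluate_test_accuracy(model)`.
# Not the authors' torchprune implementation.
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_prune(model, train_one_epoch, evaluate_test_accuracy,
                    step=0.2, tolerance=0.005, retrain_epochs=5):
    """Each round prunes `step` of the remaining weights per layer and
    retrains; the loop stops once test accuracy falls more than
    `tolerance` below the unpruned baseline -- the terminating condition
    whose sufficiency the paper questions."""
    baseline = evaluate_test_accuracy(model)
    prunable = [m for m in model.modules()
                if isinstance(m, (nn.Conv2d, nn.Linear))]
    while True:
        for module in prunable:
            # Zero out the smallest-magnitude remaining weights.
            prune.l1_unstructured(module, name="weight", amount=step)
        for _ in range(retrain_epochs):  # retrain to recover accuracy
            train_one_epoch(model)
        if evaluate_test_accuracy(model) < baseline - tolerance:
            break  # test accuracy alone decides when to stop
    return model
```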

Updated: 2021-03-05