Learning lightweight super-resolution networks with weight pruning
Neural Networks (IF 6.0) Pub Date: 2021-08-13, DOI: 10.1016/j.neunet.2021.08.002
Xinrui Jiang 1, Nannan Wang 1, Jingwei Xin 1, Xiaobo Xia 2, Xi Yang 1, Xinbo Gao 3

Single image super-resolution (SISR) has achieved significant performance improvements thanks to deep convolutional neural networks (CNNs). However, deep learning-based methods are computationally intensive and memory-demanding, which limits their practical deployment, especially on mobile devices. Focusing on this issue, in this paper we present a novel approach to compressing SR networks by weight pruning. To achieve this goal, we first explore a progressive optimization method that gradually zeroes out redundant parameters. Then, we construct a sparse-aware attention module by exploring an attention strategy well suited to pruning. Finally, we propose an information multi-slicing network that extracts and integrates multi-scale features at a granular level to obtain a more lightweight and accurate SR network. Extensive experiments show that the pruning method can reduce model size without a noticeable drop in performance, making it possible to apply state-of-the-art SR models in real-world applications. Furthermore, our pruned versions achieve better accuracy and visual quality than state-of-the-art methods.
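The "progressive optimization" idea — ramping the sparsity target over several steps rather than pruning all at once — can be sketched as follows. This is a minimal illustration assuming simple magnitude-based pruning with a linear sparsity schedule; the function name, schedule, and parameters are hypothetical and not taken from the paper, which would additionally fine-tune the network between pruning steps.

```python
import numpy as np

def progressive_prune(weights, final_sparsity=0.8, steps=4):
    """Gradually zero out the smallest-magnitude weights over several steps.

    Hypothetical sketch: the sparsity target ramps up linearly each step
    instead of pruning to the final level in one shot. In a real pipeline,
    each pruning step would be followed by fine-tuning the surviving weights.
    """
    w = weights.copy()
    for step in range(1, steps + 1):
        sparsity = final_sparsity * step / steps       # ramped target
        k = int(round(sparsity * w.size))              # number of weights to zero
        if k == 0:
            continue
        # k-th smallest absolute value becomes the pruning threshold
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0                # zero out redundant weights
        # ... fine-tune remaining weights here before the next step ...
    return w
```

For example, pruning a 10-element weight vector to 80% sparsity over 4 steps removes 2, then 4, then 6, then 8 of the smallest-magnitude entries, leaving only the two largest weights nonzero. Gradual pruning gives the (fine-tuned) network a chance to redistribute capacity into the surviving weights, which is why it typically loses less accuracy than one-shot pruning at the same sparsity.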




Updated: 2021-08-24