A general model compression method for image restoration network
Signal Processing: Image Communication (IF 3.5), Pub Date: 2021-01-09, DOI: 10.1016/j.image.2021.116134
Jie Xiao, Zhi Jin, Huanrong Zhang

Convolutional neural networks have achieved prominent performance in image restoration, at the cost of massive numbers of network parameters and computations. Several model compression methods have been proposed; however, most are designed for high-level vision tasks, which inherently tolerate some information loss. To make image restoration networks, which target low-level vision tasks, more compact and efficient while preserving comparably good performance, we propose a general model compression method. More specifically, deformable convolution kernels and standard convolution factorization are proposed to compress the network. Then, symmetric dilated convolutions and an attention mechanism are employed to compensate for the performance loss induced by the compression. The process can be regarded as a micro-to-macro network rebuilding. Extensive experiments on three typical image restoration tasks demonstrate that the proposed method attains up to an 8× network compression ratio while achieving comparable or even better performance than the original network.
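To illustrate why convolution factorization compresses a network, the sketch below compares parameter counts for a standard convolution against a depthwise-separable factorization (one common form of standard convolution factorization; the paper's exact scheme may differ, and the layer sizes here are illustrative assumptions, not taken from the paper).

```python
# Hedged sketch: parameter counts for a standard conv layer vs. a
# depthwise-separable factorization. Channel/kernel sizes are assumptions
# chosen for illustration only.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # A k x k kernel spans all input channels for every output channel.
    return c_in * c_out * k * k

def factorized_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise k x k conv (one k x k filter per input channel),
    # followed by a 1x1 pointwise conv that mixes channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 64, 3
std = standard_conv_params(c_in, c_out, k)    # 36864
fac = factorized_conv_params(c_in, c_out, k)  # 4672
print(f"standard: {std}, factorized: {fac}, ratio: {std / fac:.1f}x")
```

For these sizes the factorization alone yields roughly a 7.9× reduction in that layer's parameters, on the same order as the paper's reported up-to-8× network compression ratio (the whole-network ratio also depends on the other components, so the match is coincidental here).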




Updated: 2021-01-11