Lightweight multi-scale residual networks with attention for image super-resolution
Knowledge-Based Systems ( IF 8.8 ) Pub Date : 2020-06-10 , DOI: 10.1016/j.knosys.2020.106103
Huan Liu , Feilong Cao , Chenglin Wen , Qinghua Zhang

In recent years, constructing deep convolutional neural networks (CNNs) for single-image super-resolution (SISR) tasks has made significant progress. Despite their high performance, many CNNs are of limited practical use owing to their heavy computational requirements. This paper proposes a lightweight network for SISR, called the attention-based multi-scale residual network (AMSRN). Specifically, residual atrous spatial pyramid pooling (ASPP) blocks and spatial and channel-wise attention residual (SCAR) blocks are stacked alternately to form the main framework of the network. The residual ASPP block uses parallel dilated convolutions with different dilation rates to capture multi-scale features. The SCAR block adds channel attention (CA) and spatial attention (SA) mechanisms to a double-layer convolutional residual block. In addition, group convolution is introduced in the SCAR block to further reduce the number of parameters while preventing over-fitting. Moreover, a multi-scale feature attention module is designed to provide instructive multi-scale attention information for shallow features. In particular, we propose a novel upscale module that adopts dual paths to upscale the features, jointly using sub-pixel convolution and nearest-interpolation layers instead of a deconvolution layer or a sub-pixel convolution layer alone. The experimental results demonstrate that our method achieves performance comparable to state-of-the-art methods, both quantitatively and qualitatively.
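The dual-path upscale module described above can be sketched in NumPy. This is a minimal illustration, not the paper's exact implementation: one path applies a (hypothetical) 1×1 convolution to expand the channels by a factor of r² and then rearranges them via the sub-pixel (pixel-shuffle) operation, while the other path upscales the input features by nearest-neighbour interpolation; the two outputs are fused here by simple addition, which is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Interleave the r x r sub-channels into the spatial dimensions.
    x = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

def nearest_upscale(x, r):
    """Nearest-neighbour interpolation: repeat each pixel r times per axis."""
    return x.repeat(r, axis=1).repeat(r, axis=2)

def dual_path_upscale(feat, r, conv_w):
    """Dual-path upscale sketch: sub-pixel path + nearest-interpolation path,
    fused by element-wise addition (fusion rule assumed)."""
    # Hypothetical 1x1 convolution expanding C -> C*r^2 channels.
    expanded = np.einsum('oc,chw->ohw', conv_w, feat)
    return pixel_shuffle(expanded, r) + nearest_upscale(feat, r)

c, h, w, r = 4, 6, 6, 2
feat = rng.standard_normal((c, h, w))
conv_w = rng.standard_normal((c * r * r, c))  # random stand-in weights
out = dual_path_upscale(feat, r, conv_w)
print(out.shape)  # (4, 12, 12)
```

Compared with a deconvolution layer, this arrangement avoids checkerboard artifacts from the sub-pixel path while the interpolation path preserves low-frequency content at negligible parameter cost.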




Updated: 2020-06-10