Kernel Attention Network for Single Image Super-Resolution
ACM Transactions on Multimedia Computing, Communications, and Applications (IF 5.1). Pub Date: 2020-07-06. DOI: 10.1145/3398685
Dongyang Zhang, Jie Shao, Heng Tao Shen

Recently, attention mechanisms have become an increasingly popular addition to convolutional neural networks (CNNs), and some representative attention mechanisms, namely channel attention (CA) and spatial attention (SA), have been widely applied to single image super-resolution (SISR) tasks. However, existing architectures apply these attention mechanisms to SISR directly, with little consideration of the intrinsic characteristics of the task, resulting in weaker representational power. In this article, we propose a novel kernel attention module (KAM) for SISR, which enables the network to adjust its receptive field size to match the various scales of the input by dynamically selecting the appropriate kernel. Based on this, we stack multiple kernel attention modules with group and residual connections to constitute a novel architecture for SISR, which enables our network to learn more discriminative representations by filtering information under different receptive fields. Our network is therefore more sensitive to multi-scale features, which allows a single network to handle the multi-scale SR task with predefined upscaling modules. In addition, other attention mechanisms used in super-resolution are investigated and illustrated in detail in this article. Thanks to the kernel attention mechanism, extensive benchmark evaluations show that our method outperforms other state-of-the-art methods.
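To make the idea concrete, below is a minimal PyTorch sketch of a selective-kernel-style attention block in the spirit of the KAM described above: parallel convolution branches with different kernel sizes are fused by a learned softmax gate, letting the block adapt its effective receptive field to the input. The class name KernelAttention and the parameters kernel_sizes and reduction are illustrative assumptions for this sketch, not the authors' published implementation.

import torch
import torch.nn as nn

class KernelAttention(nn.Module):
    """Hypothetical sketch of a kernel attention block (not the paper's code).

    Each branch convolves the input with a different kernel size; a small
    gating network produces per-channel softmax weights over the branches,
    so the block dynamically selects the appropriate receptive field.
    """
    def __init__(self, channels, kernel_sizes=(3, 5), reduction=16):
        super().__init__()
        self.num_branches = len(kernel_sizes)
        # Parallel branches with different receptive fields (odd kernel sizes).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes
        )
        hidden = max(channels // reduction, 4)
        # Gating network: global pooling -> bottleneck -> branch logits.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels * self.num_branches, 1),
        )

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        fused = feats.sum(dim=1)                                   # (B, C, H, W)
        attn = self.gate(fused)                                    # (B, K*C, 1, 1)
        attn = attn.view(attn.size(0), self.num_branches, -1, 1, 1).softmax(dim=1)
        out = (feats * attn).sum(dim=1)                            # weighted branch fusion
        return x + out  # residual connection, as described in the abstract

# Example: x = torch.randn(1, 64, 48, 48); y = KernelAttention(64)(x)  # y.shape == x.shape

A stack of such blocks, organized with group and residual connections as the abstract describes, would form the backbone of the network, with predefined upscaling modules appended for each SR scale.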

Updated: 2020-07-06