SARN: spatial-wise attention residual network for image super-resolution
The Visual Computer (IF 3.5), Pub Date: 2020-07-10, DOI: 10.1007/s00371-020-01903-8
Wenling Shi, Huiqian Du, Wenbo Mei, Zhifeng Ma

Recent research suggests that attention mechanisms can improve the performance of deep learning-based single image super-resolution (SISR) methods. In this work, we propose a deep spatial-wise attention residual network (SARN) for SISR. Specifically, we propose a novel spatial attention block (SAB) that rescales pixel-wise features by explicitly modeling the interdependencies between pixels on each feature map, encoding where (i.e., which spatial pixels in the feature map) visual attention is located. A modified patch-based non-local block can be inserted into the SAB to capture long-distance spatial contextual information and relax the local-neighborhood constraint. Furthermore, we design a bottleneck spatial attention module that widens the network so that more information can pass through. Meanwhile, we adopt local and global residual connections so that the network focuses on learning valuable high-frequency information. Extensive experiments show the superiority of the proposed SARN over state-of-the-art methods on benchmark datasets in both accuracy and visual quality.
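The core idea of spatial-wise attention — producing a per-pixel mask that rescales every position of a feature map — can be sketched as follows. This is a minimal, hypothetical illustration in NumPy, not the paper's actual SAB: the paper models pixel interdependencies with learned layers (and optionally the patch-based non-local block), whereas here the mask is derived from simple channel-pooled statistics passed through a sigmoid.

```python
import numpy as np

def spatial_attention(feat):
    """Rescale a (C, H, W) feature map with a spatial attention mask.

    Hypothetical stand-in for the paper's SAB: the per-pixel score is
    computed from channel-wise average and max pooling (a learned
    convolution would normally replace this fusion step), then squashed
    to (0, 1) with a sigmoid and broadcast over all channels.
    """
    avg = feat.mean(axis=0)               # (H, W): average over channels
    mx = feat.max(axis=0)                 # (H, W): max over channels
    score = avg + mx                      # simple fusion of the two statistics
    mask = 1.0 / (1.0 + np.exp(-score))   # sigmoid -> attention weights in (0, 1)
    return feat * mask                    # same-shaped output, rescaled per pixel

feat = np.random.randn(8, 16, 16)         # toy feature map: 8 channels, 16x16
out = spatial_attention(feat)
assert out.shape == feat.shape
```

Because the mask lies in (0, 1), attended output magnitudes never exceed the input's; positions the mask deems important are preserved while others are attenuated, which is the "where to attend" behavior the abstract describes.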

Updated: 2020-07-10