Reinforced Swin-Convs Transformer for Simultaneous Underwater Sensing Scene Image Enhancement and Super-resolution
IEEE Transactions on Geoscience and Remote Sensing (IF 7.5) Pub Date: 2022-09-08, DOI: 10.1109/tgrs.2022.3205061
Tingdi Ren, Haiyong Xu, Gangyi Jiang, Mei Yu, Xuan Zhang, Biao Wang, Ting Luo

Underwater image enhancement (UIE) technology aims to restore underwater images degraded by light absorption and scattering. At the same time, the growing demand for reconstructing higher-resolution images from low-resolution inputs in the underwater domain cannot be overlooked. To address both problems, a novel U-Net-based reinforced Swin-Convs Transformer for simultaneous enhancement and super-resolution (URSCT-SESR) method is proposed. Specifically, to remedy the deficiency of a U-Net built on pure convolutions, the Swin Transformer is embedded into the U-Net to improve its ability to capture global dependencies. Then, since the Swin Transformer is limited in capturing local attention, convolutions are reintroduced to recover more local detail. To this end, convolutions are fused with the core attention mechanism to build a reinforced Swin-Convs Transformer block (RSCTB), which captures more local attention and reinforces the channel and spatial attention of the Swin Transformer. Finally, experimental results on available datasets demonstrate that the proposed URSCT-SESR achieves state-of-the-art performance compared with other methods in terms of both subjective and objective evaluations. The code is publicly available at https://github.com/TingdiRen/URSCT-SESR.
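
The authors' implementation is available at the GitHub link above. As a rough illustrative sketch only (not the authors' code), the PyTorch snippet below shows the general idea the abstract describes: window-based self-attention for global dependencies fused with a parallel convolution branch for local detail, followed by channel and spatial attention gates inside a residual block. All class names, layer choices, and hyperparameters here are assumptions made for illustration.

# Hypothetical sketch (not the authors' code): a simplified block fusing
# window-based self-attention (Swin-style, no shifted windows) with a
# parallel depthwise-convolution branch, then channel and spatial attention.
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention computed inside non-overlapping windows."""
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = self.window_size
        # partition into (B * num_windows, w*w, C) token sequences
        x = x.reshape(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        x, _ = self.attn(x, x, x)
        # merge windows back to (B, C, H, W)
        x = x.reshape(B, H // w, W // w, w, w, C)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return x

class SwinConvsBlock(nn.Module):
    """Window attention + convolution branch, reinforced by channel/spatial gates."""
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.norm = nn.GroupNorm(1, dim)                    # LayerNorm-like over channels
        self.win_attn = WindowAttention(dim, window_size, num_heads)
        self.conv_branch = nn.Sequential(                   # local features via convolutions
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, 1),
        )
        self.channel_attn = nn.Sequential(                  # SE-style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // 4, 1), nn.GELU(),
            nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid(),
        )
        self.spatial_attn = nn.Sequential(                  # single-channel spatial gate
            nn.Conv2d(dim, 1, 7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.norm(x)
        y = self.win_attn(y) + self.conv_branch(y)          # fuse global and local branches
        y = y * self.channel_attn(y) * self.spatial_attn(y) # reinforce with attention gates
        return x + y                                        # residual connection

if __name__ == "__main__":
    block = SwinConvsBlock(dim=32)
    out = block(torch.randn(1, 32, 64, 64))                 # H, W divisible by window size
    print(out.shape)                                        # torch.Size([1, 32, 64, 64])

In this sketch, the convolution branch runs in parallel with the window attention so that local texture cues complement the global dependencies, which is the fusion idea the abstract attributes to the RSCTB; the exact structure in the released code may differ.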

Updated: 2024-08-28