Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution
IEEE Transactions on Image Processing ( IF 10.8 ) Pub Date : 2021-02-20 , DOI: 10.1109/tip.2021.3058764
Xin Deng , Yutong Zhang , Mai Xu , Shuhang Gu , Yiping Duan

Nowadays, people are accustomed to taking photos to record their daily life; however, these photos are often inconsistent with the real natural scenes. The two main differences are that photos tend to have low dynamic range (LDR) and low resolution (LR), due to the inherent imaging limitations of cameras. Multi-exposure image fusion (MEF) and image super-resolution (SR) are two widely used techniques for addressing these two issues. However, they are usually treated as independent research problems. In this paper, we propose a deep Coupled Feedback Network (CF-Net) to achieve MEF and SR simultaneously. Given a pair of extremely over-exposed and under-exposed low-resolution LDR images, our CF-Net generates an image with both high dynamic range (HDR) and high resolution. Specifically, CF-Net is composed of two coupled recursive sub-networks, which take the LR over-exposed and under-exposed images as inputs, respectively. Each sub-network consists of one feature extraction block (FEB), one super-resolution block (SRB), and several coupled feedback blocks (CFBs). The FEB and SRB extract high-level features from the input LDR image that support resolution enhancement. Each CFB is placed after the SRB, and its role is to absorb the learned features from the SRBs of both sub-networks so that it can produce a high-resolution HDR image. A series of CFBs progressively refines the fused high-resolution HDR image. Extensive experimental results show that CF-Net drastically outperforms other state-of-the-art methods in terms of both SR accuracy and fusion performance. The source code is available at https://github.com/ytZhang99/CF-Net.
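The data flow described above can be sketched in a few lines. This is a minimal, framework-free illustration of the structure only — two coupled sub-networks whose FEB and SRB outputs are repeatedly fused by a chain of CFBs; the toy "layers" below (simple list arithmetic on 1-D "images") are hypothetical stand-ins, not the authors' actual convolutional blocks.

```python
def feb(image):
    """Feature extraction block: map an LDR input to features (toy stand-in)."""
    return [p * 0.5 for p in image]

def srb(features, scale=2):
    """Super-resolution block: upsample features (nearest-neighbour here)."""
    up = []
    for f in features:
        up.extend([f] * scale)
    return up

def cfb(own, other):
    """Coupled feedback block: fuse features from BOTH sub-networks."""
    return [(a + b) / 2 for a, b in zip(own, other)]

def cf_net(over_exposed, under_exposed, num_cfbs=3, scale=2):
    """Run the two coupled sub-networks and return the fused HR output."""
    f_over = srb(feb(over_exposed), scale)
    f_under = srb(feb(under_exposed), scale)
    for _ in range(num_cfbs):  # a series of CFBs progressively refines the fusion
        f_over, f_under = cfb(f_over, f_under), cfb(f_under, f_over)
    # final fusion of the two refined feature streams into one HDR output
    return [(a + b) / 2 for a, b in zip(f_over, f_under)]

# Two 4-"pixel" LR exposures fused into one 8-"pixel" output (scale factor 2).
hr = cf_net([1.0, 0.9, 0.8, 1.0], [0.1, 0.2, 0.1, 0.0])
print(len(hr))  # 8
```

Note how each CFB consumes features from both branches: that cross-connection is what couples the two sub-networks, as opposed to running MEF and SR as independent pipelines.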

Updated: 2021-02-20