Dual-Attention-Guided Network for Ghost-Free High Dynamic Range Imaging
International Journal of Computer Vision (IF 11.6), Pub Date: 2021-10-28, DOI: 10.1007/s11263-021-01535-y
Qingsen Yan, Dong Gong, Javen Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid, Yanning Zhang

Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.
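The abstract outlines the dual-attention design only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, assuming a convolutional spatial mask conditioned on the reference features and squeeze-and-excitation-style channel attention; the module names, channel widths, and layer choices are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the dual-attention idea (spatial + channel attention)
# described in the abstract. All names and hyper-parameters are assumptions.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Predicts a per-pixel soft mask from reference and non-reference
    features, suppressing misaligned or saturated regions in the
    non-reference branch."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, non_ref_feat, ref_feat):
        mask = self.att(torch.cat([non_ref_feat, ref_feat], dim=1))
        return non_ref_feat * mask


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style rescaling that models inter-channel
    dependencies."""

    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat):
        return feat * self.fc(self.pool(feat))


class DualAttention(nn.Module):
    """Applies spatial attention guided by the reference exposure, then
    channel attention, to a non-reference feature map."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention(channels)

    def forward(self, non_ref_feat, ref_feat):
        return self.channel(self.spatial(non_ref_feat, ref_feat))


if __name__ == "__main__":
    ref = torch.randn(1, 64, 128, 128)      # features of the reference exposure
    non_ref = torch.randn(1, 64, 128, 128)  # features of an under/over-exposed frame
    out = DualAttention(64)(non_ref, ref)
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In a pipeline of this kind, such a block would be applied to each non-reference exposure before the attended features are merged and passed to the reconstruction network; the sketch omits the recurrent application and the dilated residual dense blocks mentioned in the abstract.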



Updated: 2021-10-29