Deep Guided Learning for Fast Multi-Exposure Image Fusion.
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2019-11-19, DOI: 10.1109/tip.2019.2952716
Kede Ma, Zhengfang Duanmu, Hanwei Zhu, Yuming Fang, Zhou Wang

We propose a fast multi-exposure image fusion (MEF) method, namely MEF-Net, for static image sequences of arbitrary spatial resolution and exposure number. We first feed a low-resolution version of the input sequence to a fully convolutional network for weight map prediction. We then jointly upsample the weight maps using a guided filter. The final image is computed by a weighted fusion. Unlike conventional MEF methods, MEF-Net is trained end-to-end by optimizing the perceptually calibrated MEF structural similarity (MEF-SSIM) index over a database of training sequences at full resolution. Across an independent set of test sequences, we find that the optimized MEF-Net achieves consistent improvement in visual quality for most sequences, and runs 10 to 1000 times faster than state-of-the-art methods. The code is made publicly available.
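The final stage of the pipeline described above — combining the exposure stack with the (upsampled) per-pixel weight maps — can be sketched in a few lines of NumPy. This is only an illustration of the weighted-fusion step, not the authors' implementation: the weight-prediction network and the guided-filter upsampling are out of scope here, and the function name and toy inputs are illustrative.

```python
import numpy as np

def weighted_fusion(sequence, weight_maps, eps=1e-8):
    """Fuse an exposure stack with per-pixel weight maps.

    sequence:    (K, H, W, 3) float array, K exposures of the same scene.
    weight_maps: (K, H, W)    float array of non-negative weights
                 (in MEF-Net these would be the upsampled network outputs).
    Returns the (H, W, 3) fused image.
    """
    w = np.clip(weight_maps, 0.0, None)
    # Normalize weights across exposures so they sum to 1 at each pixel.
    w = w / (w.sum(axis=0, keepdims=True) + eps)
    # Per-pixel weighted average over the K exposures.
    return (w[..., None] * sequence).sum(axis=0)

# Toy stack: two constant "exposures" blended with a 25/75 weight split.
seq = np.stack([np.zeros((4, 4, 3)), np.ones((4, 4, 3))])
wts = np.stack([np.full((4, 4), 1.0), np.full((4, 4), 3.0)])
fused = weighted_fusion(seq, wts)  # every pixel equals 0.75
```

Because the network only runs at low resolution and the fusion itself is a simple per-pixel weighted sum like the one above, the cost of the method is dominated by the guided-filter upsampling, which helps explain the reported speedups.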

Updated: 2020-04-22