Deep-Masking Generative Network: A Unified Framework for Background Restoration From Superimposed Images
IEEE Transactions on Image Processing (IF 10.6) Pub Date: 2021-05-05, DOI: 10.1109/tip.2021.3076589
Xin Feng, Wenjie Pei, Zihui Jia, Fanglin Chen, David Zhang, Guangming Lu

Restoring the clean background from superimposed images containing a noise layer is the common crux of a classical category of image restoration tasks such as image reflection removal, image deraining and image dehazing. These tasks are typically formulated and tackled individually due to the diverse and complicated appearance patterns of the noise layers within the image. In this work we present the Deep-Masking Generative Network (DMGN), a unified framework for background restoration from superimposed images that is able to cope with different types of noise. Our proposed DMGN follows a coarse-to-fine generative process: a coarse background image and a noise image are first generated in parallel, then the noise image is further leveraged to refine the background image and achieve a higher-quality result. In particular, we design a novel Residual Deep-Masking Cell as the core operating unit of our DMGN to enhance effective information and suppress negative information during image generation by learning a gating mask that controls the information flow. By iteratively employing this Residual Deep-Masking Cell, our proposed DMGN is able to progressively generate both a high-quality background image and a noise image. Furthermore, we propose a two-pronged strategy that effectively leverages the generated noise image as contrasting cues to facilitate the refinement of the background image. Extensive experiments across three typical image background restoration tasks, including image reflection removal, image rain streak removal and image dehazing, show that our DMGN consistently outperforms state-of-the-art methods specifically designed for each individual task.
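
The Residual Deep-Masking Cell is only described at a high level in the abstract (a residual unit whose output is modulated by a learned gating mask). The sketch below illustrates that general idea; the channel count, kernel sizes, and the exact place where the mask is applied are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a residual unit gated by a learned soft mask, in the
# spirit of the Residual Deep-Masking Cell. All architectural details here
# (64 channels, 3x3 convolutions, sigmoid mask) are assumptions.
import torch
import torch.nn as nn

class ResidualDeepMaskingCell(nn.Module):
    """Residual block whose update is modulated by a learned gating mask.

    The mask takes values in [0, 1] and decides, per location and channel,
    how much of the newly computed features is passed through, suppressing
    information that is unhelpful for the layer being generated
    (background or noise).
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Gating branch: predicts a soft mask from the same input.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)
        m = self.mask(x)      # soft gate in [0, 1]
        return x + m * f      # residual update, gated by the mask


if __name__ == "__main__":
    cell = ResidualDeepMaskingCell(channels=64)
    feat = torch.randn(1, 64, 128, 128)
    # Cells can be applied iteratively to progressively refine the features.
    for _ in range(4):
        feat = cell(feat)
    print(feat.shape)  # torch.Size([1, 64, 128, 128])
```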

Updated: 2021-05-11