Recurrent Context Aggregation Network for Single Image Dehazing
IEEE Signal Processing Letters ( IF 3.9 ) Pub Date : 2021-02-08 , DOI: 10.1109/lsp.2021.3056961
Chen Wang , Runqing Chen , Yang Lu , Yan Yan , Hanzi Wang

Existing learning-based dehazing methods are prone to excessive dehazing and fail on dense haze, mainly because the global features of hazy images are not fully utilized and the local features are not sufficiently discriminative. In this letter, we propose a Recurrent Context Aggregation Network (RCAN) to effectively dehaze images and restore color fidelity. In RCAN, an efficient and generic module, called the Context Aggregation Block (CAB), is designed to improve the feature representation by taking advantage of both global and local features, which are complementary for robust dehazing: local features can capture different levels of haze, while global features can focus on the textures and object edges of the whole image. In addition, RCAN adopts a deep recurrent mechanism to improve the dehazing performance without introducing additional network parameters. Extensive experimental results on both synthetic and real-world datasets show that the proposed RCAN performs better than other state-of-the-art dehazing methods.
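The two architectural ideas in the abstract, fusing a global descriptor with local per-pixel features, and reapplying the same block recurrently so that depth grows without new parameters, can be illustrated with a minimal numpy sketch. All function and variable names here are hypothetical: the paper's actual CAB and RCAN designs are not specified in this abstract, so this only shows the general pattern under stated assumptions.

```python
import numpy as np

def context_aggregation_block(feat, w_local, w_global):
    """Hypothetical sketch of a context-aggregation step: fuse local
    per-pixel features with a global descriptor (spatial average).
    feat: (C, H, W) feature map; w_local, w_global: (C, C) mixing weights."""
    local = np.einsum('oc,chw->ohw', w_local, feat)   # local branch (1x1 conv analogue)
    g = feat.mean(axis=(1, 2))                        # global average pooling -> (C,)
    glob = (w_global @ g)[:, None, None]              # broadcast global context over H, W
    return np.maximum(local + glob, 0.0)              # ReLU after fusing both branches

def recurrent_refine(feat, w_local, w_global, steps=3):
    """Apply the SAME block repeatedly with shared weights, mirroring the
    idea of a deep recurrent mechanism that adds no extra parameters."""
    for _ in range(steps):
        feat = context_aggregation_block(feat, w_local, w_global)
    return feat

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))          # toy (C, H, W) feature map
w_l = 0.1 * rng.standard_normal((8, 8))
w_g = 0.1 * rng.standard_normal((8, 8))
y = recurrent_refine(x, w_l, w_g, steps=3)
print(y.shape)                               # spatial size is preserved
```

Note that however many recurrent steps are run, the parameter count stays fixed at the two weight matrices, which is the property the abstract highlights.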

Updated: 2021-03-05