Semantically-guided low-light image enhancement
Pattern Recognition Letters ( IF 5.1 ) Pub Date : 2020-08-01 , DOI: 10.1016/j.patrec.2020.07.041
Junyi Xie , Hao Bian , Yuanhang Wu , Yu Zhao , Linmin Shan , Shijie Hao

Recently, extensive research efforts have been devoted to low-light image enhancement, and many novel models have been proposed, such as those based on the Retinex theory, multiple-exposure fusion, and deep neural networks. However, current models do not directly consider semantic information in the modeling process. As a result, they tend to introduce artifacts such as amplified noise and unnatural visual appearances. To address this issue, we propose a fusion-based low-light enhancement model that explicitly incorporates scene semantics into the enhancement process. In constructing the fusion map, image regions belonging to a specific semantic category are first extracted via semantic segmentation. They are then combined and refined jointly with an illumination-aware map estimated from the scene illumination. Guided by the semantic information, our model can selectively enhance parts of the dark regions, thereby generating enhanced results with more natural appearances and fewer artifacts. In experiments, we first validate our model with several empirical studies, including parameter sensitivity and segmentation error tolerance. We then compare our model with several state-of-the-art low-light enhancement methods, which further demonstrates its effectiveness and advantages.
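The abstract does not give the paper's exact formulation, but the general idea of a semantically guided fusion map can be sketched as follows. This is a minimal illustration, not the authors' method: the Gaussian illumination weighting, the 0.8 suppression factor, and the function name `fuse_enhancement` are all assumptions made for demonstration. A binary semantic mask (e.g. a "sky" region that should not be over-brightened) attenuates an illumination-aware weight map, which then blends a pre-enhanced image with the original.

```python
import numpy as np

def fuse_enhancement(img, enhanced, semantic_mask, sigma=0.5):
    """Blend an enhanced image with the original via a fusion weight map.

    img, enhanced : float arrays in [0, 1], shape (H, W, 3)
    semantic_mask : binary array (H, W); 1 marks the semantic region
                    to protect from over-enhancement (illustrative choice)
    """
    # Illumination-aware map: use the per-pixel maximum channel as a
    # simple illumination proxy; darker pixels receive larger weights.
    illumination = img.max(axis=2)
    weight = np.exp(-(illumination ** 2) / (2 * sigma ** 2))

    # Suppress the enhancement weight inside the protected semantic
    # region (factor 0.8 is an arbitrary illustrative value).
    weight = weight * (1.0 - 0.8 * semantic_mask)

    # Per-pixel convex combination of the enhanced and original images.
    w = weight[..., None]
    return w * enhanced + (1.0 - w) * img
```

In this sketch, a dark pixel outside the masked region is pulled strongly toward the enhanced image, while the same dark pixel inside the masked region stays closer to the original, which is the qualitative behavior the abstract describes.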




Updated: 2020-08-05