Pattern Recognition Letters (IF 3.255), Pub Date: 2020-08-01, DOI: 10.1016/j.patrec.2020.07.041. Junyi Xie, Hao Bian, Yuanhang Wu, Yu Zhao, Linmin Shan, Shijie Hao
Recently, extensive research efforts have been made on low-light image enhancement. Many novel models have been proposed, such as those based on the Retinex theory, multiple-exposure fusion, and deep neural networks. However, current models do not directly incorporate semantic information into the modeling process. As a result, they tend to introduce artifacts such as amplified noise and unnatural visual appearances. To address this issue, we propose a fusion-based low-light enhancement model that explicitly harnesses scene semantics in the enhancement process. In constructing the fusion map, the image regions belonging to a specific semantic category are first extracted via semantic segmentation. They are then combined and refined jointly with an illumination-aware map estimated from the scene illumination. Guided by the semantic information, our model is able to selectively enhance only part of the dark regions, thereby generating enhanced results with more natural appearances and fewer artifacts. In experiments, we first validate our model with empirical studies, including parameter sensitivity and segmentation error tolerance. We then compare our model with several state-of-the-art low-light enhancement methods, which further demonstrates its effectiveness and advantages.
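The abstract outlines a pipeline in which a semantic segmentation mask is combined with an illumination-aware map to form a fusion map that guides enhancement. The paper's exact formulas are not given here, so the following is only a minimal NumPy sketch of that idea under stated assumptions: illumination is approximated by the per-pixel RGB maximum (a common Retinex-style proxy), the fusion weight is the segmentation mask modulated by a darkness response, and enhancement blends the input with a gamma-brightened version. The function names (`estimate_illumination`, `semantic_fusion_map`, `enhance`) and all parameter values are illustrative, not from the paper.

```python
import numpy as np

def estimate_illumination(img):
    """Illumination-aware map: per-pixel max over RGB channels (assumed proxy)."""
    return img.max(axis=2)

def semantic_fusion_map(semantic_mask, illumination, sigma=0.3):
    """Combine a semantic region mask with the illumination map.

    Pixels that are both inside the semantic region and dark (low
    illumination) receive a high fusion weight; all other pixels get zero.
    """
    darkness = np.exp(-(illumination ** 2) / (2.0 * sigma ** 2))  # high where dark
    return semantic_mask.astype(float) * darkness

def enhance(img, fusion_map, gamma=0.5):
    """Blend the input with a gamma-brightened version, weighted per pixel."""
    bright = img ** gamma  # img assumed in [0, 1]; gamma < 1 brightens
    w = fusion_map[..., None]  # broadcast weight over the channel axis
    return (1.0 - w) * img + w * bright

# Synthetic demo: a dark image whose left half carries the semantic label.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.2, size=(8, 8, 3))
mask = np.zeros((8, 8), dtype=bool)
mask[:, :4] = True

illum = estimate_illumination(img)
fmap = semantic_fusion_map(mask, illum)
out = enhance(img, fmap)
```

Because the fusion weight is zero outside the semantic mask, pixels there pass through unchanged, while dark pixels inside the mask are brightened. This mirrors the claim that semantic guidance lets the model intentionally enhance only part of the dark regions rather than the whole image.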