To See in the Dark: N2DGAN for Background Modeling in Nighttime Scene
IEEE Transactions on Circuits and Systems for Video Technology ( IF 8.4 ) Pub Date : 2021-02-01 , DOI: 10.1109/tcsvt.2020.2987874
Zhenfeng Zhu , Yingying Meng , Deqiang Kong , Xingxing Zhang , Yandong Guo , Yao Zhao

Due to poor and uneven illumination, nighttime images have lower contrast and higher noise than their daytime counterparts of the same scene, which seriously limits the performance of conventional background modeling methods. For this challenging problem of background modeling in nighttime scenes, this paper proposes an innovative solution that departs completely from existing approaches. To make background modeling under nighttime scenes perform as well as in daytime conditions, we put forward a generation-based background modeling framework for foreground surveillance. With a pre-specified daytime reference image as the background frame, a GAN-based generation model, called N2DGAN, is trained to transfer each frame of a nighttime video to a virtual daytime image that shares the same scene with the reference image except for the foreground region. Specifically, to balance the preservation of the background scene against the foreground object(s) when generating the virtual daytime image, we present a two-pathway generation model in which global and local sub-networks are combined under spatial and temporal consistency constraints. For the sequence of generated virtual daytime images, a multi-scale Bayes model is further proposed to characterize the temporal variation of the background. We evaluate on collected datasets with manually labeled ground truth, which provides a valuable resource for the related research community. The results reported in both the main paper and the supplementary material demonstrate the efficacy of the proposed approach.
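The abstract's final stage models the temporal variation of the background over the generated virtual daytime frames. As a rough illustration of that idea, the sketch below implements a classic single-scale per-pixel Gaussian background model (online mean/variance update, foreground flagged by squared deviation against a variance-scaled threshold). This is a simplified stand-in, not the paper's multi-scale Bayes model or the N2DGAN pipeline; the class name, learning rate, and threshold are illustrative assumptions.

```python
import numpy as np

class PixelGaussianBackground:
    """Per-pixel Gaussian background model (simplified stand-in for the
    paper's multi-scale Bayes model; parameters are illustrative)."""

    def __init__(self, first_frame, lr=0.05, k=2.5):
        f = first_frame.astype(np.float64)
        self.mean = f.copy()
        self.var = np.full_like(f, 15.0 ** 2)  # generous initial variance
        self.lr = lr   # exponential update rate
        self.k = k     # foreground threshold in standard deviations

    def apply(self, frame):
        f = frame.astype(np.float64)
        d2 = (f - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var     # foreground mask
        bg = ~fg
        # update statistics only where the pixel matches the background
        self.mean[bg] += self.lr * (f - self.mean)[bg]
        self.var[bg] += self.lr * (d2 - self.var)[bg]
        return fg

# Usage on synthetic data: a static background plus a bright blob that
# appears halfway through, standing in for a foreground object.
rng = np.random.default_rng(0)
frames = []
for t in range(20):
    frame = 100 + rng.normal(0, 2, size=(32, 32))
    if t >= 10:
        frame[8:12, 8:12] += 80   # synthetic foreground object
    frames.append(frame)

model = PixelGaussianBackground(frames[0])
masks = [model.apply(f) for f in frames[1:]]
# masks[-1] flags the blob region as foreground, the rest as background
```

In the paper's framework this detection step would run on the N2DGAN-generated virtual daytime frames rather than the raw nighttime video, which is what makes a daytime-style background model applicable at night.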
