Cascaded Feature-Mask Fusion for Foreground Segmentation
IEEE Open Journal of Intelligent Transportation Systems, Pub Date: 2022-04-25, DOI: 10.1109/ojits.2022.3170075
Chuanyun Xu, Huan Liu, Tenghui Li, Yang Zhang, Tian Li, Gang Li

Foreground segmentation aims to extract moving objects from the background in a robust manner under various challenging scenarios. Deep learning-based methods have achieved remarkable improvements in this field: they produce semantically correct predictions from the rich semantic features they extract, yet perform poorly on the segmentation of edge details. The main reason is that the high-level features extracted by deep networks lose the high-frequency information required for successful edge segmentation. On this basis, we propose a novel segmentation network with a cascade architecture that refines segmentation results step by step by introducing detailed information into high-level features. At each step, the network corrects and optimizes the segmentation maps so that more accurate segmentation results are obtained. We evaluate our approach on the challenging CDnet2014 dataset and achieve an F-measure of 0.9868, outperforming previous methods such as FgSegNet_v2, FgSegNet, BSPVGan, Cascade CNN, IUTIS-5, WeSamBE, DeepBS, and GMM-Stauffer.
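
The abstract gives no implementation details, but the cascaded feature-mask fusion idea (repeatedly fusing a coarse mask with progressively higher-resolution features and re-predicting it) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' network: the stage structure, channel sizes, the fusion operator (concatenation followed by a convolution), and the residual mask correction are all assumptions made for illustration.

# Minimal sketch of a cascaded mask-refinement scheme, assuming a PyTorch setup.
# All module names, channel sizes, and the fusion operator are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionRefineStage(nn.Module):
    """One refinement step: fuse a coarse mask with detail features and re-predict it."""

    def __init__(self, detail_channels: int, hidden_channels: int = 64):
        super().__init__()
        # +1 input channel for the coarse mask concatenated with the detail features.
        self.fuse = nn.Sequential(
            nn.Conv2d(detail_channels + 1, hidden_channels, 3, padding=1),
            nn.BatchNorm2d(hidden_channels),
            nn.ReLU(inplace=True),
        )
        self.predict = nn.Conv2d(hidden_channels, 1, 1)

    def forward(self, detail_feat: torch.Tensor, coarse_mask: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse mask logits to the resolution of the detail features.
        mask = F.interpolate(coarse_mask, size=detail_feat.shape[-2:],
                             mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([detail_feat, mask], dim=1))
        # Residual correction of the mask logits at this resolution.
        return mask + self.predict(fused)


class CascadedMaskRefiner(nn.Module):
    """Refine an initial mask step by step with higher-resolution encoder features."""

    def __init__(self, feature_channels=(256, 128, 64)):
        super().__init__()
        self.stages = nn.ModuleList(FusionRefineStage(c) for c in feature_channels)

    def forward(self, initial_mask: torch.Tensor, detail_features) -> torch.Tensor:
        mask = initial_mask
        for stage, feat in zip(self.stages, detail_features):
            mask = stage(feat, mask)
        return torch.sigmoid(mask)


if __name__ == "__main__":
    # Toy shapes: encoder features ordered from deep (low resolution) to shallow (high resolution).
    feats = [torch.randn(1, 256, 30, 40),
             torch.randn(1, 128, 60, 80),
             torch.randn(1, 64, 120, 160)]
    coarse = torch.randn(1, 1, 15, 20)            # coarse mask logits from the deepest layer
    refined = CascadedMaskRefiner()(coarse, feats)
    print(refined.shape)                          # torch.Size([1, 1, 120, 160])

In this sketch each stage performs the "recorrect and optimize" step described in the abstract as a residual update of the mask logits, so the final prediction inherits semantic correctness from the coarse mask while recovering edge detail from the shallower features.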

Updated: 2022-04-25