Cross-scene foreground segmentation with supervised and unsupervised model communication
Pattern Recognition ( IF 8 ) Pub Date : 2021-04-24 , DOI: 10.1016/j.patcog.2021.107995
Dong Liang , Bin Kang , Xinyu Liu , Pan Gao , Xiaoyang Tan , Shun’ichi Kaneko

In this paper, we investigate cross-scene video foreground segmentation via supervised and unsupervised model communication. Traditional unsupervised background subtraction methods often face the challenging problem of updating the statistical background model online. In contrast, supervised foreground segmentation methods, such as those based on deep learning, rely on large amounts of training data, which limits their cross-scene performance. Our method leverages segmented masks from a cross-scene trained deep model (spatio-temporal attention model (STAM), pyramid scene parsing network (PSPNet), or DeepLabV3+) to seed online updates for the statistical background model (CPB), thereby refining the foreground segmentation. More flexible than methods that require scene-specific training and more data-efficient than unsupervised models, our method outperforms state-of-the-art approaches on CDNet2014, WallFlower, and LIMU in our experiments. The proposed framework can be integrated into a video surveillance system in plug-and-play form to realize cross-scene foreground segmentation.
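The communication loop the abstract describes — a pretrained deep model's mask seeding online updates of a statistical background model — can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual STAM/CPB implementation: the class names, the brightness-threshold stand-in for the deep segmenter, and the running-average stand-in for CPB are all assumptions made to keep the example self-contained.

```python
# Hedged sketch of supervised/unsupervised model communication.
# DeepSegmenter and StatisticalBackgroundModel are hypothetical stand-ins
# for the paper's deep model (STAM / PSPNet / DeepLabV3+) and CPB.

class DeepSegmenter:
    """Stub for a cross-scene trained deep model. Here it simply flags
    bright pixels as foreground so the sketch runs without a network."""
    def predict_mask(self, frame):
        return [[1 if px > 128 else 0 for px in row] for row in frame]


class StatisticalBackgroundModel:
    """Toy per-pixel running-average background model standing in for CPB.
    The deep model's mask decides which pixels are safe to update online."""
    def __init__(self, first_frame, alpha=0.1, thresh=30):
        self.bg = [list(row) for row in first_frame]
        self.alpha = alpha
        self.thresh = thresh

    def update(self, frame, deep_mask):
        # Update background statistics only where the deep model is
        # confident the pixel is background (mask == 0) — the "seeding".
        for i, row in enumerate(frame):
            for j, px in enumerate(row):
                if deep_mask[i][j] == 0:
                    self.bg[i][j] = ((1 - self.alpha) * self.bg[i][j]
                                     + self.alpha * px)

    def segment(self, frame):
        # Foreground where the frame deviates strongly from the background.
        return [[1 if abs(px - self.bg[i][j]) > self.thresh else 0
                 for j, px in enumerate(row)]
                for i, row in enumerate(frame)]


# One iteration of the loop on a tiny synthetic grayscale frame.
frame0 = [[10, 10, 10], [10, 10, 10]]    # empty scene
frame1 = [[10, 200, 10], [10, 200, 10]]  # a bright object appears
deep = DeepSegmenter()
bg_model = StatisticalBackgroundModel(frame0)
mask = deep.predict_mask(frame1)   # supervised cue
bg_model.update(frame1, mask)      # seeded online update of the bg model
fg = bg_model.segment(frame1)      # refined foreground mask
```

Because the background model only ingests pixels the deep model marks as background, foreground objects do not contaminate the online statistics — this is the plug-and-play refinement the abstract refers to.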




Updated: 2021-05-05