Deep Adversarial Network for Scene Independent Moving Object Segmentation
IEEE Signal Processing Letters ( IF 3.9 ) Pub Date : 2021-02-12 , DOI: 10.1109/lsp.2021.3059195
Prashant W. Patil , Akshay Dudhane , Subrahmanyam Murala , Anil Balaji Gonde

Current prevailing algorithms either depend heavily on additional pre-trained modules built for other applications, require complicated training procedures, or neglect the inter-frame spatio-temporal structural dependencies. Moreover, how well existing methods generalize to completely unseen data is difficult to assess. In particular, outdoor videos suffer from adverse atmospheric conditions such as poor visibility and inclement weather. In this letter, a novel end-to-end multi-scale temporal edge aggregation (MTPA) network with adversarial learning is proposed for both scene-dependent and scene-independent object segmentation. The MTPA module extracts comprehensive spatio-temporal features from the current and reference frames, and these features guide the corresponding decoder stages through skip connections. To obtain accurate and temporally consistent foreground object(s), the previous frame's output is fed back, at the matching scale, together with the corresponding MTPA features at each decoder input. The proposed method is evaluated on the CDnet-2014 and LASIESTA video datasets and outperforms existing state-of-the-art methods in both scene-dependent and scene-independent analysis.
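The wiring the abstract describes — multi-scale features fused from the current and reference frames, skip connections into a coarse-to-fine decoder, and scale-matched feedback of the previous frame's output — can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the learned MTPA aggregation is replaced by a simple temporal difference, pooling/upsampling by block averaging and nearest-neighbour repetition, and fusion by plain averaging, purely to show the data flow.

```python
def avg_pool(frame, factor):
    """Downsample a 2D grid (list of lists) by block-averaging."""
    h, w = len(frame), len(frame[0])
    return [
        [sum(frame[i + di][j + dj] for di in range(factor) for dj in range(factor))
         / (factor * factor)
         for j in range(0, w, factor)]
        for i in range(0, h, factor)
    ]

def upsample(grid, factor):
    """Nearest-neighbour upsampling back to a finer scale."""
    out = []
    for row in grid:
        fine = [v for v in row for _ in range(factor)]
        out.extend([fine[:] for _ in range(factor)])
    return out

def multi_scale_features(current, reference, scales=(1, 2, 4)):
    """Per scale, fuse current- and reference-frame grids.

    The element-wise absolute temporal difference used here is an
    illustrative stand-in for the learned spatio-temporal aggregation.
    """
    feats = {}
    for s in scales:
        c, r = avg_pool(current, s), avg_pool(reference, s)
        feats[s] = [[abs(a - b) for a, b in zip(cr, rr)] for cr, rr in zip(c, r)]
    return feats

def decode(feats, prev_output, scales=(4, 2, 1)):
    """Coarse-to-fine decoder: at each scale, combine the skip feature with
    the previous frame's output (downsampled to that scale), then merge the
    upsampled estimates and threshold to a binary foreground mask."""
    est = None
    for s in scales:
        fb = avg_pool(prev_output, s)          # scale-matched feedback
        fused = [[(a + b) / 2 for a, b in zip(r1, r2)]
                 for r1, r2 in zip(feats[s], fb)]
        up = upsample(fused, s)                # back to full resolution
        est = up if est is None else [[(a + b) / 2 for a, b in zip(r1, r2)]
                                      for r1, r2 in zip(est, up)]
    return [[1 if v > 0.25 else 0 for v in row] for row in est]

# Toy example: a 2x2 bright object moves two columns between frames.
current   = [[0.0] * 8 for _ in range(8)]
reference = [[0.0] * 8 for _ in range(8)]
for i in (2, 3):
    for j in (2, 3):
        current[i][j] = 1.0
    for j in (4, 5):
        reference[i][j] = 1.0
prev_output = [[0.0] * 8 for _ in range(8)]

mask = decode(multi_scale_features(current, reference), prev_output)
```

The resulting mask flags the cells touched by the motion (both the object's old and new positions, since a plain temporal difference cannot tell them apart); the learned network in the letter resolves this ambiguity through its trained features and adversarial supervision.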

Last updated: 2021-03-16