Unifying Temporal Context and Multi-feature with Update-Pacing Framework for Visual Tracking
IEEE Transactions on Circuits and Systems for Video Technology (IF 8.3). Pub Date: 2020-04-01. DOI: 10.1109/tcsvt.2019.2902883
Yuefang Gao, Zexi Hu, Henry Wing Fung Yeung, Yuk Ying Chung, Xuhong Tian, Liang Lin

Model drift is one of the knotty problems that seriously restrict the accuracy of discriminative trackers in visual tracking. Most existing works focus on improving the robustness of the target appearance model. However, they remain prone to model drift because of inappropriate model updates during tracking-by-detection. In this paper, we propose a novel update-pacing framework to suppress model drift in visual tracking. Specifically, the proposed framework first initializes an ensemble of trackers, each of which updates its model at a different interval. Once the forward tracking trajectory of each tracker is determined, a backward trajectory is generated by the current model and compared against the forward one, and the tracker with the smallest deviation score is selected as the most robust tracker for the remaining frames. By performing such self-examination on trajectory pairs, the framework effectively preserves the temporal-context consistency of sequential frames and avoids learning corrupted information. To further improve performance, a multi-feature extension framework is also proposed to incorporate multiple features into the tracker ensemble. Extensive experimental results on large-scale object tracking benchmarks demonstrate that the proposed framework significantly increases the accuracy and robustness of the underlying base trackers, such as DSST, Struck, KCF, and CT, and achieves superior performance compared with state-of-the-art methods that do not use deep models.
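The self-examination step described in the abstract lends itself to a short sketch. The Python below is a minimal illustration of the update-pacing idea under stated assumptions: `PacedTracker`, `detect`, and `update_model` are hypothetical names for a base tracker's detection and model-update steps (not the authors' actual API), and the deviation score is taken here as the mean center distance between forward and backward trajectories, one plausible choice that may differ from the paper's exact measure.

```python
import numpy as np

class PacedTracker:
    """Wraps a base tracker (e.g. DSST, KCF) and updates its appearance
    model only every `update_interval` frames -- the 'update pacing' idea.
    The base tracker is assumed to expose detect() and update_model()."""

    def __init__(self, base_tracker, update_interval):
        self.base = base_tracker
        self.update_interval = update_interval

    def track_forward(self, frames, init_box):
        """Run tracking-by-detection over a frame segment, updating the
        model only on paced frames; returns the forward trajectory."""
        trajectory = [init_box]
        for t, frame in enumerate(frames[1:], start=1):
            box = self.base.detect(frame, trajectory[-1])
            trajectory.append(box)
            if t % self.update_interval == 0:
                self.base.update_model(frame, box)  # paced update
        return trajectory

    def track_backward(self, frames, last_box):
        """Re-track the same segment in reverse with the current model,
        without further updates, to obtain the backward trajectory."""
        trajectory = [last_box]
        for frame in reversed(frames[:-1]):
            trajectory.append(self.base.detect(frame, trajectory[-1]))
        return trajectory[::-1]  # re-align to frame order


def deviation_score(forward, backward):
    """Mean center distance between forward and backward trajectories of
    (x, y, w, h) boxes; a small score suggests temporally consistent,
    uncorrupted tracking on this segment."""
    f = np.asarray([(x + w / 2, y + h / 2) for x, y, w, h in forward])
    b = np.asarray([(x + w / 2, y + h / 2) for x, y, w, h in backward])
    return float(np.linalg.norm(f - b, axis=1).mean())


def select_robust_tracker(trackers, frames, init_box):
    """Self-examination: pick the paced tracker whose forward and
    backward trajectories deviate least on the current segment."""
    scores = []
    for trk in trackers:
        fwd = trk.track_forward(frames, init_box)
        bwd = trk.track_backward(frames, fwd[-1])
        scores.append(deviation_score(fwd, bwd))
    return trackers[int(np.argmin(scores))]
```

In this sketch the ensemble would be built by wrapping one base tracker per pacing interval (for instance, intervals of 1, 5, and 10 frames); the specific intervals used in the paper are not given in the abstract and are an assumption here.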

Updated: 2020-04-01