Two motion models for improving video object tracking performance
Computer Vision and Image Understanding (IF 4.5), Pub Date: 2020-03-31, DOI: 10.1016/j.cviu.2020.102951
Ji Qiu, Lide Wang, Yu Hen Hu, Yin Wang

Two motion models are proposed to enhance the performance of video object tracking (VOT) algorithms. The first is a random walk model that captures the randomness of motion patterns. The second is a data-adaptive vector auto-regressive (VAR) model that exploits more regular motion patterns. The performance of these models is evaluated empirically on real-world datasets. Three real-time, publicly available visual object trackers are modified with each of the two models: the normalized cross-correlation (NCC) tracker, the New Scale Adaptive with Multiple Features (NSAMF) tracker, and the correlation filter neural network (CFNet) tracker. The tracking performance is then compared against that of the original formulations. Both prior-information models are observed to improve the performance of all three trackers, validating the hypothesis that, when training videos are available, the prior information embodied in the motion models can improve tracking performance.
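To make the two priors concrete, the following is a minimal NumPy sketch of how a random-walk prediction and a least-squares VAR fit over past object positions could look. The noise level `sigma`, the model `order`, and all function names are illustrative assumptions, not the authors' implementation or parameter choices.

```python
import numpy as np

def random_walk_predict(prev_pos, sigma=2.0, rng=None):
    # Random-walk prior: next position = current position + zero-mean
    # Gaussian noise; sigma (in pixels) is a hypothetical parameter.
    rng = np.random.default_rng() if rng is None else rng
    return prev_pos + rng.normal(0.0, sigma, size=prev_pos.shape)

def fit_var(positions, order=2):
    # Least-squares fit of a VAR(order) model to a T x 2 sequence of
    # object centres; returns a (2*order) x 2 coefficient matrix.
    X, Y = [], []
    for t in range(order, len(positions)):
        X.append(np.concatenate([positions[t - k] for k in range(1, order + 1)]))
        Y.append(positions[t])
    coeffs, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(Y), rcond=None)
    return coeffs

def var_predict(positions, coeffs, order=2):
    # Predict the next centre from the last `order` observed centres.
    x = np.concatenate([positions[-k] for k in range(1, order + 1)])
    return x @ coeffs

# Toy usage: a synthetic straight-line track of 2-D centres.
track = np.cumsum(np.ones((20, 2)), axis=0)
coeffs = fit_var(track, order=2)
print(var_predict(track, coeffs, order=2))   # approx. [21., 21.]
print(random_walk_predict(track[-1]))        # noisy step from the last centre
```

In a tracker such as NCC, NSAMF, or CFNet, a prediction of this kind would typically be used to place the search window for the next frame; how the paper integrates the two priors into each tracker is detailed in the full text.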


