End-to-end DeepNCC framework for robust visual tracking
Journal of Visual Communication and Image Representation (IF 2.6), Pub Date: 2020-03-30, DOI: 10.1016/j.jvcir.2020.102800
Kaiheng Dai, Yuehuan Wang

In this paper, we propose an NCC-based deep framework for object tracking that can be well initialized with the limited target samples available in the first frame. The proposed framework consists of a pretrained model, online feature fine-tuning layers, and the tracking process. The pretrained model provides rich feature representations, while the online fine-tuning layers select discriminative and generic features for the tracked object. We use normalized cross-correlation (NCC) as a template tracking layer to perform the tracking process. To make the learned feature representation closely coordinated with the tracked target, we jointly train the feature representation network and the tracking process. During online tracking, an adaptive template and a fixed template are fused to find the optimal tracking result. Scale estimation and a high-confidence model update scheme are integrated into the framework to adapt to changes in target appearance. Extensive experiments demonstrate that the proposed tracker achieves superior performance compared with other state-of-the-art trackers.
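The abstract does not spell out the authors' formulation, so the sketch below is only a minimal NumPy illustration of the matching step it describes: a dense normalized cross-correlation response map computed over a single-channel feature map, with the scores of a fixed (first-frame) template and an adaptive template blended by a weight. The function names, the blend weight `alpha`, and the toy data are assumptions made for illustration, not the DeepNCC implementation.

```python
import numpy as np

def ncc_response(search, template):
    """Dense normalized cross-correlation of a template over a search region.

    search:   2-D array (H, W) of single-channel features or intensities
    template: 2-D array (h, w) with h <= H and w <= W
    Returns a response map of shape (H - h + 1, W - w + 1), values in [-1, 1].
    """
    h, w = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t) + 1e-8

    H, W = search.shape
    out = np.zeros((H - h + 1, W - w + 1), dtype=np.float64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = search[i:i + h, j:j + w]
            p = patch - patch.mean()
            # Zero-mean, norm-normalized inner product (classic NCC score).
            out[i, j] = float((p * t).sum() / (np.linalg.norm(p) * t_norm + 1e-8))
    return out

def fused_response(search, fixed_template, adaptive_template, alpha=0.6):
    """Blend the response maps of a fixed (first-frame) and an adaptive template.

    The weight alpha is an illustrative assumption, not a value from the paper.
    """
    r_fixed = ncc_response(search, fixed_template)
    r_adapt = ncc_response(search, adaptive_template)
    return alpha * r_fixed + (1.0 - alpha) * r_adapt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    search = rng.standard_normal((64, 64))
    tmpl = search[20:36, 24:40].copy()        # template cropped at row 20, col 24
    resp = fused_response(search, tmpl, tmpl)
    peak = np.unravel_index(resp.argmax(), resp.shape)
    print(tuple(int(v) for v in peak))        # expected: (20, 24), the template's location
```

In the framework itself the same matching is performed as a differentiable layer on learned deep features and trained end to end with the feature network; the scalar blend above only mirrors the idea of fusing a stable template with an adaptively updated one.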



Updated: 2020-03-30