Target-Distractor Aware Deep Tracking With Discriminative Enhancement Learning Loss
IEEE Transactions on Circuits and Systems for Video Technology ( IF 8.4 ) Pub Date : 2022-04-07 , DOI: 10.1109/tcsvt.2022.3165536
Huanlong Zhang , Liyun Cheng , Tianzhu Zhang , Yanfeng Wang , W.J. Zhang , Jie Zhang

Numerous tracking approaches attempt to improve target representation through target-aware or distractor-aware learning. However, an unbalanced treatment of target and distractor information makes it difficult for these methods to benefit from both aspects at the same time. In this paper, we propose a target-distractor aware model with a discriminative enhancement learning loss to learn a target representation that better distinguishes the target in complex scenes. First, to enlarge the gap between the target and distractors, we design a discriminative enhancement learning loss. By highlighting hard negatives that are similar to the target and shrinking easy negatives that are pure background, the features sensitive to the target or distractor representation can be mined more conveniently. On this basis, we further propose a target-distractor aware model. Unlike existing methods that favor either the target or the distractor, we construct a target-specific feature space by activating features that are target-sensitive and distractor-silent. Therefore, the appearance model can not only represent the target well but also suppress background distractors. Finally, the target-distractor aware representation model is integrated with a Siamese matching network to achieve robust and real-time visual tracking. Extensive experiments on eight tracking benchmarks show that the proposed algorithm achieves favorable performance.
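
The abstract describes two ingredients: a loss that re-weights hard negatives (distractor-like background) against easy negatives (pure background), and a feature-space construction that keeps channels which respond to the target but stay silent on distractors. The following is a minimal PyTorch sketch of how such components could be implemented; the specific weighting scheme, the function names (discriminative_enhancement_loss, target_sensitive_channels), and the activation-contrast channel criterion are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: shapes, weighting, and channel-selection rule
# are assumptions based on the abstract, not the paper's actual method.
import torch
import torch.nn.functional as F


def discriminative_enhancement_loss(scores, labels, gamma=2.0):
    """Response-map loss that up-weights hard negatives (background locations
    scoring close to the target) and down-weights easy, low-scoring negatives.

    scores: (N,) raw similarity scores; labels: (N,) in {0, 1}.
    """
    probs = torch.sigmoid(scores)
    bce = F.binary_cross_entropy_with_logits(scores, labels, reduction="none")
    # Negatives with a high predicted probability are distractor-like, so they
    # receive a large weight (probs ** gamma); positives keep weight 1.
    weights = torch.where(labels > 0.5, torch.ones_like(probs), probs ** gamma)
    return (weights * bce).mean()


def target_sensitive_channels(feat, target_mask, top_k=64):
    """Select channels whose mean activation is high inside the target region
    and low outside it, i.e. target-sensitive and distractor-silent.

    feat: (C, H, W) backbone features; target_mask: (H, W) binary mask.
    """
    inside = (feat * target_mask).sum(dim=(1, 2)) / target_mask.sum().clamp(min=1)
    outside = (feat * (1 - target_mask)).sum(dim=(1, 2)) / (1 - target_mask).sum().clamp(min=1)
    contrast = inside - outside  # high = fires on target, quiet on background
    idx = torch.topk(contrast, k=min(top_k, feat.size(0))).indices
    return idx


if __name__ == "__main__":
    # Toy usage: random scores/labels and a synthetic feature map with a
    # centered target mask.
    scores = torch.randn(256)
    labels = (torch.rand(256) > 0.9).float()
    print("loss:", discriminative_enhancement_loss(scores, labels).item())

    feat = torch.rand(512, 22, 22)
    mask = torch.zeros(22, 22)
    mask[8:14, 8:14] = 1.0
    print("selected channels:", target_sensitive_channels(feat, mask)[:5])
```

In this sketch, the selected channel indices would be used to project backbone features into a target-specific subspace before Siamese matching; how the paper actually combines the two components is not specified in the abstract.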