Fast Scale Invariant Tracker and Re-identification for First-Person Social Videos
IETE Journal of Research (IF 1.3), Pub Date: 2020-03-10, DOI: 10.1080/03772063.2020.1729258
Jyoti Nigam, Renu M. Rameshan

ABSTRACT

We address the problem of pedestrian tracking in videos of crowded scenes captured from a first-person viewpoint. The constant motion of both the camera and the pedestrians makes this task challenging. The prime challenges are the natural head motion of the wearer and the loss and reappearance of the target in a later frame due to frequent changes in the field of view. We propose that using optical flow information specific to first-person vision, together with modifications to the update process and the search region of the trackers, helps identify a lost target in a later frame. This process is termed re-identification in this paper. The specific trackers modified are MEEM and STRUCK. In addition to re-identification, we achieve scale-invariant tracking (up to 50% scale variation) and a speed-up by a factor of 2. We name our tracker EgoTracker, since it utilizes information specific to egocentric vision.
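
The abstract describes the mechanism only at a high level: optical flow specific to first-person video (largely induced by the wearer's head motion) is combined with a modified update step and an adjusted search region to re-identify a target that has left the field of view. The sketch below is a minimal, hypothetical Python/OpenCV illustration of that idea, not the authors' implementation (which modifies MEEM and STRUCK); the function names, the region-enlargement factor, the thresholds, and the template-matching re-detector are all assumptions.

```python
# Minimal sketch of optical-flow-guided re-identification for a first-person
# tracker. All names, parameters, and the template-matching re-detector are
# illustrative assumptions, not the method of the paper.

import cv2
import numpy as np


def camera_motion(prev_gray, curr_gray):
    """Estimate global (head/camera) motion as the median of dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.median(flow.reshape(-1, 2), axis=0)  # (dx, dy)


def expanded_search_region(last_box, shift, frame_shape, scale=2.0):
    """Shift the last known box by the estimated camera motion and enlarge it."""
    x, y, w, h = last_box
    cx, cy = x + w / 2.0 + shift[0], y + h / 2.0 + shift[1]
    new_w, new_h = int(w * scale), int(h * scale)
    x0 = int(np.clip(cx - new_w / 2.0, 0, frame_shape[1] - 1))
    y0 = int(np.clip(cy - new_h / 2.0, 0, frame_shape[0] - 1))
    return (x0, y0,
            min(new_w, frame_shape[1] - x0),
            min(new_h, frame_shape[0] - y0))


def reidentify(curr_gray, template, region, score_thresh=0.6):
    """Toy re-detector: normalized cross-correlation inside the search region."""
    x, y, w, h = region
    roi = curr_gray[y:y + h, x:x + w]
    if roi.shape[0] < template.shape[0] or roi.shape[1] < template.shape[1]:
        return None  # search region too small
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    if best < score_thresh:
        return None  # target still not visible
    return (x + loc[0], y + loc[1], template.shape[1], template.shape[0])


# Hypothetical usage per frame, once the base tracker (MEEM/STRUCK in the paper)
# reports that the target is lost:
#   shift  = camera_motion(prev_gray, curr_gray)
#   region = expanded_search_region(last_box, shift, curr_gray.shape)
#   box    = reidentify(curr_gray, template, region)
```

In a complete tracker the toy re-detector above would presumably be replaced by the discriminative model of MEEM or STRUCK itself, evaluated over the shifted and enlarged search region.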




Updated: 2020-03-10