Occlusion detection and drift-avoidance framework for 2D visual object tracking
Signal Processing: Image Communication (IF 3.4), Pub Date: 2020-10-03, DOI: 10.1016/j.image.2020.116011
Iason Karakostas, Vasileios Mygdalis, Anastasios Tefas, Ioannis Pitas

This paper presents a long-term 2D tracking framework for the coverage of live outdoor (e.g., sports) events that is suitable for embedded system applications (e.g., Unmanned Aerial Vehicles). This application scenario requires 2D target (e.g., athlete, ball, bicycle, boat) tracking for visually assisting the UAV pilot (or cameraman) in maintaining proper target framing, or even for actual 3D target following/localization when the drone flies autonomously. In these cases, the tracked/followed target may disappear from the UAV camera field of view due to fast 3D target motion, illumination changes, or visual occlusion of the target by obstacles, even if the UAV itself continues following it (autonomously, by exploiting alternative target localization sensors, or through pilot maneuvering). Therefore, the 2D tracker should be able to recover from such situations, and the proposed framework solves exactly this problem. Target occlusions are detected from the 2D tracker responses. Depending on the occlusion severity, the framework decides either not to update the tracking model or to employ target re-detection in a broader search window. As a result, the framework allows target tracking to continue once the target re-appears in the video stream, without tracker re-initialization.
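The decision logic described above (detect occlusion from the tracker responses, then either freeze the model update or re-detect in a broader window) can be sketched roughly as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: it assumes a correlation-filter-style tracker whose response-map confidence is measured by the peak-to-sidelobe ratio (PSR), and the thresholds, search scale, and the tracker's compute_response/locate/update_model/redetect methods are hypothetical placeholders.

```python
import numpy as np

# Assumed PSR thresholds; the abstract gives no concrete values, these are illustrative only.
PSR_PARTIAL = 7.0   # below this: treat as partial occlusion, freeze the model update
PSR_FULL = 3.0      # below this: treat as heavy/full occlusion, trigger re-detection


def peak_to_sidelobe_ratio(response):
    """Confidence of a 2D tracker response map (higher means a more reliable peak)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False  # exclude the peak neighbourhood
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)


def track_step(tracker, frame, prev_bbox):
    """One frame of tracking with occlusion detection and drift avoidance."""
    response = tracker.compute_response(frame, prev_bbox)   # hypothetical tracker API
    confidence = peak_to_sidelobe_ratio(response)

    if confidence >= PSR_PARTIAL:
        # Reliable response: accept the new position and update the appearance model.
        bbox = tracker.locate(response, prev_bbox)           # hypothetical tracker API
        tracker.update_model(frame, bbox)
        return bbox
    if confidence >= PSR_FULL:
        # Partial occlusion: report a position but do NOT update the model,
        # so the occluder is not learned into the target appearance (drift avoidance).
        return tracker.locate(response, prev_bbox)
    # Heavy/full occlusion: search a broader window around the last known position
    # and resume model updates only after the target is confidently re-detected.
    return tracker.redetect(frame, prev_bbox, search_scale=3.0)  # hypothetical tracker API
```

The key design point is that low-confidence frames never overwrite the appearance model, which is what keeps the tracker from drifting onto the occluder and allows it to re-acquire the target without re-initialization.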




Updated: 2020-10-12