Learning reliable-spatial and spatial-variation regularization correlation filters for visual tracking
Image and Vision Computing (IF 4.2), Pub Date: 2020-01-10, DOI: 10.1016/j.imavis.2020.103869
Hengcheng Fu, Yihong Zhang, Wuneng Zhou, Xiaofeng Wang, Huanlong Zhang

Single-object tracking is a significant and challenging computer vision problem. Recently, discriminative correlation filters (DCF) have shown excellent performance. However, a theoretical defect remains: the boundary effect, caused by the periodic assumption on training samples, greatly limits tracking performance. The spatially regularized DCF (SRDCF) introduces a spatial regularization that penalizes the filter coefficients according to their spatial location, which substantially improves tracking performance. However, this simple regularization strategy applies unequal penalties to the filter coefficients within the target area, which causes the filter to learn a distorted object appearance model. In this paper, a novel spatial regularization strategy is proposed that uses a reliability map to approximate the target area and keeps the penalty coefficients of the relevant region consistent. In addition, we introduce a spatial-variation regularization component, the second-order difference of the filter, which smooths changes of the filter coefficients and prevents the filter from over-fitting the current frame. Furthermore, an efficient optimization algorithm based on the alternating direction method of multipliers (ADMM) is developed. Comprehensive experiments are performed on three benchmark datasets, OTB-2013, OTB-2015 and TempleColor-128, where our algorithm achieves more favorable performance than several state-of-the-art methods. Compared with SRDCF, our approach obtains absolute gains of 6.6% and 5.1% in mean distance precision on OTB-2013 and OTB-2015, respectively. Our approach runs in real time on a CPU.
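The abstract does not give the objective function, so the NumPy sketch below shows only one plausible reading of the two regularizers it describes: a reliability-derived spatial weight that applies a small, uniform penalty inside the approximate target area and a large penalty outside it, plus a second-order-difference term that discourages abrupt spatial variation of the filter coefficients. The function names, the binary-box approximation of the reliability map, and the purely spatial interpretation of the second-order difference are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch (not the authors' code) of the two regularization terms
# described in the abstract, for a single-channel correlation filter f.
import numpy as np

def reliability_spatial_weight(shape, target_box, inside=1e-3, outside=1e3):
    # Penalty map w: small, consistent penalty over the (approximate) target
    # area, large penalty elsewhere. The reliability map is crudely
    # approximated here by a binary box mask (assumption).
    w = np.full(shape, outside, dtype=np.float64)
    y0, x0, h, bw = target_box            # top-left corner and box size (assumed format)
    w[y0:y0 + h, x0:x0 + bw] = inside     # uniform penalty inside the target region
    return w

def regularization_energy(f, w, mu=1.0):
    # Spatial term ||w * f||^2 plus spatial-variation term mu * ||D2 f||^2,
    # where D2 is a second-order difference along each spatial axis.
    spatial = np.sum((w * f) ** 2)
    d2y = f[2:, :] - 2.0 * f[1:-1, :] + f[:-2, :]   # second-order difference over rows
    d2x = f[:, 2:] - 2.0 * f[:, 1:-1] + f[:, :-2]   # second-order difference over columns
    variation = mu * (np.sum(d2y ** 2) + np.sum(d2x ** 2))
    return spatial + variation

# Example: a 50x50 filter with a 20x20 target region centred in the frame.
f = np.random.randn(50, 50) * 0.01
w = reliability_spatial_weight(f.shape, (15, 15, 20, 20))
print(regularization_energy(f, w, mu=0.5))

In the full method such terms would sit alongside the DCF data-fitting term and be minimized jointly, for example with an ADMM scheme as the abstract states.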



Updated: 2020-01-10