Robust monocular 3D object pose tracking for large visual range variation in robotic manipulation via scale-adaptive region-based method
International Journal of Advanced Robotic Systems ( IF 2.1 ) Pub Date : 2022-02-16 , DOI: 10.1177/17298806221076978
Jiexin Zhou, Zi Wang, Yunna Bao, Qiufu Wang, Xiaoliang Sun, Qifeng Yu

Many robot manipulation processes involve large visual range variation between the hand-eye camera and the object, which in turn causes large-span scale changes of the object in the image sequence captured by the camera. To guide the manipulator accurately, the relative 6-degree-of-freedom (6D) pose between the object and the manipulator is continuously required throughout the process. This large-span scale change in the image sequence often causes existing 6D pose tracking methods to fail. To tackle this problem, this article proposes a novel scale-adaptive region-based monocular pose tracking method. First, the impact of object scale on the convergence performance of the local region-based pose tracker is systematically tested and analyzed. Then, a universal region radius calculation model, which maps object scale to region radius, is built from the statistical analysis results. Finally, we develop a novel scale-adaptive localized region-based pose tracking model by merging the scale-adaptive radius selection mechanism into the local region-based method. The proposed method adjusts the local region size according to the scale of the object projection and thereby achieves robust pose tracking. Experimental results on synthetic and real image sequences indicate that the proposed method outperforms the traditional localized region-based method in manipulator operation scenarios involving large visual range variation.
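The core mechanism the abstract describes, adjusting the local region radius from the scale of the object's projection, can be illustrated with a small sketch. The paper does not publish its radius calculation model here, so the linear mapping, the coefficient `k`, and the clamping bounds below are hypothetical placeholders; only the overall idea (estimate projected scale, then pick a region radius from it) follows the text.

```python
import numpy as np

def projection_scale(silhouette_points: np.ndarray) -> float:
    """Approximate the object's scale in the image as the radius (in
    pixels) of the bounding circle of its projected silhouette."""
    center = silhouette_points.mean(axis=0)
    return float(np.linalg.norm(silhouette_points - center, axis=1).max())

def adaptive_region_radius(scale_px: float,
                           k: float = 0.15,
                           r_min: float = 4.0,
                           r_max: float = 40.0) -> float:
    """Hypothetical scale-adaptive radius model: grow the local region
    radius linearly with the projected object scale, clamped to a
    workable range so regions never collapse or dominate the image."""
    return float(np.clip(k * scale_px, r_min, r_max))

# At each frame, the tracker would recompute the radius from the current
# projection, so regions shrink as the manipulator approaches the object.
silhouette = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])
radius = adaptive_region_radius(projection_scale(silhouette))
```

A fixed-radius tracker would instead use a constant here, which is exactly what the abstract reports as failure-prone under large visual range variation.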




Updated: 2022-02-16