Vision-based Measurement and Prediction of Object Trajectory for Robotic Manipulation in Dynamic and Uncertain Scenarios†
IEEE Transactions on Instrumentation and Measurement (IF 5.6) Pub Date: 2020-11-01, DOI: 10.1109/tim.2020.2994602
Chongkun Xia , Ching-Yen Weng , Yunzhou Zhang , I-Ming Chen

Vision-based measurement and prediction (VMP) is a very important and challenging part of autonomous robotic manipulation, especially in dynamic and uncertain scenarios. However, due to the limitations of visual measurement in such environments, such as occlusion, lighting, and hardware constraints, it is not easy to acquire accurate object positions as observations. Moreover, manipulating a dynamic object with unknown or uncertain motion rules usually requires an accurate prediction of the motion trajectory at the desired moment, which dramatically increases the difficulty. To address this problem, we propose a time-granularity-based vision prediction framework whose core is an integrated prediction model built on multiple long short-term memory (LSTM) neural networks. First, we use a vision sensor to acquire raw measurements and apply preprocessing (e.g., data completion, error compensation, and filtering) to turn the raw measurements into standard trajectory data. Then, we devise a novel integration strategy based on time granularity boost (TG-Boost) to select appropriate base predictors, and we further use the historical trajectory data to construct a high-precision prediction model. Finally, we validate the proposed methodology in simulation and in a series of dynamic manipulation experiments. The results show that our method outperforms state-of-the-art prediction algorithms in terms of prediction accuracy, success rate, and robustness.
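The abstract does not give implementation details for the preprocessing step or for TG-Boost, so the following is only a minimal, hypothetical sketch of the pipeline it describes: raw measurements are completed and filtered into a trajectory, several LSTM base predictors are trained over different time granularities (interpreted here as resampling strides), and their outputs are fused with inverse-error weights as a crude stand-in for the paper's boosting-based selection. All function names and parameters below are illustrative assumptions, not the authors' code.

```python
# Minimal, hypothetical sketch of the abstract's pipeline (NOT the authors'
# code): preprocess raw visual measurements, train LSTM base predictors at
# several time granularities, and fuse them with inverse-error weights as a
# crude stand-in for TG-Boost, whose actual rule is not given in the abstract.
import numpy as np
import torch
import torch.nn as nn

def preprocess(raw, kernel=5):
    """Linear interpolation over dropouts (data completion) followed by a
    moving-average filter; stand-ins for the preprocessing steps named in
    the abstract."""
    raw = np.asarray(raw, dtype=np.float64)
    idx = np.arange(len(raw))
    mask = ~np.isnan(raw)
    filled = np.interp(idx, idx[mask], raw[mask])
    pad = kernel // 2
    padded = np.pad(filled, pad, mode="edge")
    return np.convolve(padded, np.ones(kernel) / kernel, mode="valid")

class LSTMPredictor(nn.Module):
    """One base predictor: maps a window of past positions to the next."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def make_windows(traj, window, stride):
    """Slice a trajectory into (window, next-value) pairs; `stride` is a
    simple interpretation of the time granularity."""
    xs, ys = [], []
    for start in range(len(traj) - window * stride):
        idx = start + np.arange(window + 1) * stride
        xs.append(traj[idx[:-1]])
        ys.append(traj[idx[-1]])
    X = torch.tensor(np.array(xs), dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(np.array(ys), dtype=torch.float32).unsqueeze(-1)
    return X, y

def train_and_weight(traj, granularities=(1, 2, 4), window=10, epochs=200):
    """Train one LSTM per granularity; weight each by its inverse training
    error (a proxy for selecting/weighting base predictors)."""
    models, weights = [], []
    for g in granularities:
        X, y = make_windows(traj, window, g)
        model, loss_fn = LSTMPredictor(), nn.MSELoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()
        models.append(model)
        weights.append(1.0 / (loss.item() + 1e-8))
    return models, np.array(weights) / sum(weights)

def predict_next(models, w, traj, window=10, granularities=(1, 2, 4)):
    """Weighted ensemble forecast of the next position."""
    preds = []
    for model, g in zip(models, granularities):
        idx = len(traj) - 1 - np.arange(window)[::-1] * g
        x = torch.tensor(traj[idx], dtype=torch.float32).reshape(1, window, 1)
        with torch.no_grad():
            preds.append(model(x).item())
    return float(np.dot(w, preds))

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 400)
    raw = np.sin(t) + 0.02 * np.random.randn(len(t))
    raw[::37] = np.nan                          # simulate occlusion dropouts
    traj = preprocess(raw)
    models, w = train_and_weight(traj)
    print("predicted next position:", predict_next(models, w, traj))
```

The inverse-error weighting above is only one plausible reading of "select appropriate base predictors"; the actual TG-Boost integration strategy is defined in the paper itself.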

Updated: 2020-11-01