Image-based control of delta parallel robots via enhanced LCM-CSM to track moving objects
Industrial Robot (IF 1.9) Pub Date: 2020-04-20, DOI: 10.1108/ir-09-2019-0197
J. Guillermo Lopez-Lara , Mauro Eduardo Maya , Alejandro González , Antonio Cardenas , Liliana Felix

Purpose

The purpose of this paper is to present a new vision-based control method, which enables delta-type parallel robots to track and manipulate objects moving along arbitrary trajectories. This constitutes an enhanced variant of the linear camera model-camera space manipulation (LCM-CSM) method.

Design/methodology/approach

After obtaining the LCM-CSM view parameters, a moving target's position and velocity are estimated in camera space using a Kalman filter. The robot is then commanded to reach the target. The proposed control strategy has been experimentally validated using a PARALLIX LKF-2040, an academic delta-type parallel platform, and seven different target trajectories for which the positioning errors were recorded.
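The camera-space estimation step described above can be sketched as a standard constant-velocity Kalman filter acting on noisy pixel measurements. The following is a minimal illustration only, not the authors' implementation: the state layout, noise covariances, and simulated target are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's code): a constant-velocity Kalman
# filter estimating a target's 2-D camera-space position and velocity
# from noisy pixel measurements. All matrix values are assumptions.
import numpy as np

class CameraSpaceKF:
    def __init__(self, dt, meas_noise=1.0, proc_noise=1e-2):
        # State x = [u, v, du, dv]: pixel position and pixel velocity.
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0        # large initial uncertainty
        self.F = np.eye(4)                # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))         # only position is measured
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * proc_noise   # process noise covariance
        self.R = np.eye(2) * meas_noise   # measurement noise covariance

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the pixel measurement z = [u, v].
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

# Track a synthetic target moving at a constant pixel velocity.
rng = np.random.default_rng(0)
dt = 0.05
kf = CameraSpaceKF(dt=dt)
true_vel = np.array([40.0, -25.0])        # px/s, assumed for the demo
for k in range(200):
    pos = true_vel * (k * dt)
    kf.step(pos + rng.normal(0.0, 1.0, 2))  # noisy pixel measurement
```

After a few dozen updates the velocity components of the state converge toward the true pixel velocity, which is what lets the robot be commanded toward the target's predicted, rather than last observed, position.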

Findings

For objects moved manually along a sawtooth, zigzag or increasing-spiral trajectory with changing velocities, a maximum positioning error of 4.31 mm was found, whereas for objects moving on a conveyor belt at constant velocities ranging from 7 to 12 cm/s, average errors between 2.2 and 2.75 mm were obtained. For static objects, an average error of 1.48 mm was found. Without vision-based control, the experimental platform used has a static positioning accuracy of 3.17 mm.

Practical implications

The LCM-CSM method has a low computational cost and does not require camera calibration or the computation of Jacobians. The new variant of LCM-CSM takes advantage of these characteristics and applies them to the vision-based control of parallel robots interacting with moving objects.

Originality/value

A new variant of the LCM-CSM method, traditionally used only for static positioning of a robot’s end-effector, was applied to parallel robots enabling the manipulation of objects moving along unknown trajectories.




Updated: 2020-04-20