A GPU-accelerated model-based tracker for untethered submillimeter grippers
Robotics and Autonomous Systems (IF 4.3) Pub Date: 2018-05-01, DOI: 10.1016/j.robot.2017.11.003
Stefano Scheggi 1, ChangKyu Yoon 2, Arijit Ghosh 2, David H. Gracias 2,3, Sarthak Misra 1,4

Miniaturized grippers that possess an untethered structure are suitable for a wide range of tasks, ranging from micromanipulation and microassembly to minimally invasive surgical interventions. In order to robustly perform such tasks, it is critical to properly estimate their overall configuration. Previous studies on tracking and control of miniaturized agents mainly estimated their 2D pixel position, mostly using cameras and optical images as a feedback modality. This paper presents a novel solution to the problem of estimating and tracking the 3D position, orientation, and tip configuration of submillimeter grippers from marker-less visual observations. We cast this as an optimization problem, which is solved using a variant of the Particle Swarm Optimization algorithm. The proposed approach has been implemented on a Graphics Processing Unit (GPU), which allows a user to track the submillimeter agents online. The proposed approach has been evaluated on several image sequences obtained from a camera and on B-mode ultrasound images obtained from an ultrasound probe. The sequences show the grippers moving, rotating, opening/closing, and grasping biological material. Qualitative results obtained using both hydrogel (soft) and metallic (hard) grippers with different shapes and sizes ranging from 750 μm to 4 mm (tip to tip) demonstrate the capability of the proposed method to track the agent in all the video sequences. Quantitative results obtained by processing synthetic data reveal a tracking position error of 25 ± 7 μm and an orientation error of 1.7 ± 1.3 degrees. We believe that the proposed technique can be applied to different stimuli-responsive miniaturized agents, allowing the user to estimate the full configuration of complex agents from marker-less visual observations.
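The core of the method is a stochastic search over the gripper's state (3D position, orientation, and tip configuration) that minimizes an image-matching cost. The sketch below shows a minimal Particle Swarm Optimization loop of the kind the abstract describes; the 7-dimensional state layout, the hyperparameters, and the toy quadratic cost standing in for the paper's image-likelihood term are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def pso_track_pose(cost, dim, bounds, n_particles=64, n_iters=100, seed=0):
    """Minimal PSO sketch: minimize `cost` over a `dim`-dimensional state.

    In a model-based tracker, `cost` would render the gripper model at a
    hypothesized state and compare it against the camera/ultrasound frame.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # pose hypotheses
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # per-particle best
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()           # global best
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, float(pbest_cost.min())

# Toy stand-in for the image-matching cost: squared distance of a 7-D state
# (3-D position, 3 orientation angles, tip-opening angle) to a known target.
true_state = np.array([0.1, -0.2, 0.3, 0.5, -0.1, 0.2, 0.4])
cost = lambda s: float(np.sum((s - true_state) ** 2))
est, best = pso_track_pose(cost, dim=7, bounds=(-1.0, 1.0))
```

In the paper, the per-particle cost evaluations are the expensive step (each requires projecting the 3D gripper model into the image), which is why they parallelize them on a GPU; the swarm update itself is cheap.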
