Mixed reality-integrated 3D/2D vision mapping for intuitive teleoperation of mobile manipulator
Robotics and Computer-Integrated Manufacturing (IF 10.4), Pub Date: 2022-04-07, DOI: 10.1016/j.rcim.2022.102332
Yunpeng Su, Xiaoqi Chen, Tony Zhou, Christopher Pretty, Geoffrey Chase

Depth cues are crucial to enhance user perception and spatial awareness of the remote environment when remotely guiding complex robotic systems. A mixed reality (MR) integrated 3D/2D vision and motion mapping framework for immersive and intuitive telemanipulation of a complex mobile manipulator is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception by merging multiple 3D/2D views of the remote environment via an MR subspace. The mobile manipulator platform consists of a 6-degree-of-freedom (DOF) industrial manipulator, a 3D-printed parallel gripper, and a mobile base, and can be controlled through a velocity-based imitative motion mapping approach by non-skilled operators who are physically separated from the robot workspace. This work evaluates the impact of the depth perception and immersion provided by the integrated 3D/2D vision and motion mapping schemes on teleoperation efficiency and user experience in an MR environment. In particular, the MR-enhanced systems maintain spatial awareness and perceptual salience of the remote scene in 3D, facilitating intuitive mixed reality human-robot interaction (MR-HRI). This study compared two MR-integrated 3D/2D vision and motion mapping schemes against a typical 2D Baseline visual display method through pick-and-place, assembly, and dexterous manufacturing tasks. The two MR-integrated teleoperation schemes reduced overall task completion times by 34% and 17%, respectively, compared to the MR-2D Baseline, while minimizing training effort and cognitive workload.
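The velocity-based imitative motion mapping mentioned above can be pictured as converting the operator's tracked hand motion into clamped Cartesian velocity commands for the manipulator's end effector. The sketch below is a minimal illustration of that idea only; the function name, gain, deadband, and speed-limit values are illustrative assumptions and are not taken from the paper.

import numpy as np

def map_hand_to_ee_velocity(controller_pos, prev_controller_pos, dt,
                            gain=1.0, deadband=0.005, v_max=0.25):
    # Hypothetical sketch of a velocity-based imitative motion mapping:
    # the operator's hand displacement over one control cycle is turned into
    # a velocity, small jitter is suppressed with a deadband, and the result
    # is scaled by a gain and saturated to a safe Cartesian speed limit.
    # Parameter names and values are illustrative assumptions only.
    v_hand = (np.asarray(controller_pos, dtype=float)
              - np.asarray(prev_controller_pos, dtype=float)) / dt
    if np.linalg.norm(v_hand) < deadband:
        return np.zeros(3)                  # ignore hand tremor / sensor noise
    v_cmd = gain * v_hand
    speed = np.linalg.norm(v_cmd)
    if speed > v_max:
        v_cmd *= v_max / speed              # clamp to the robot's speed limit
    return v_cmd                            # Cartesian velocity command (m/s)

if __name__ == "__main__":
    # One 50 Hz control cycle in which the tracked controller moved 4 mm along +x.
    v = map_hand_to_ee_velocity([0.504, 0.20, 0.30], [0.500, 0.20, 0.30], dt=0.02)
    print(v)  # velocity command that would be streamed to the manipulator

In a real system such a command would still have to be converted to joint velocities (e.g. through the manipulator's Jacobian) and streamed at the control rate; that step is omitted from this sketch.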




Updated: 2022-04-07