A three-dimensional mapping and virtual reality-based human–robot interaction for collaborative space exploration
International Journal of Advanced Robotic Systems ( IF 2.3 ) Pub Date : 2020-05-01 , DOI: 10.1177/1729881420925293
Junhao Xiao 1, 2 , Pan Wang 3 , Huimin Lu 1 , Hui Zhang 1

Human–robot interaction is a vital part of human–robot collaborative space exploration: it bridges the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which limits situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a more natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, implemented and verified for rescue robotics. On the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then converted into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is employed to transmit the three-dimensional NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less unstructured regions, the operator can specify a path or even a target point, after which the robot follows the path or navigates to the target point autonomously. In other words, these two modes rely more on the robot's autonomy. By virtue of virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored. In this way, the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot can be well integrated as a whole.
Although the method is proposed for rescue robots, it can also be used in other out-of-sight teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture and military operations.
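The abstract describes converting a dense point cloud into a 3D-NDT map whose cells are rendered as parameterized ellipsoids. The paper's actual implementation and parameters are not given here; the sketch below illustrates the standard NDT idea the abstract refers to: each voxel stores the mean and covariance of its points, and the covariance's eigendecomposition yields the ellipsoid's orientation and semi-axes. The function name, the minimum-point threshold, and the 2-sigma scaling are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): one voxel of a 3D-NDT map.
# An NDT cell is the Gaussian fitted to the points falling in that voxel;
# its covariance eigendecomposition parameterizes a renderable ellipsoid.
import numpy as np

def ndt_cell(points, min_points=5):
    """Return (mean, semi_axes, axis_directions) for one voxel's points,
    or None if the voxel is too sparse to fit a distribution.
    min_points is an assumed sparsity threshold."""
    points = np.asarray(points, dtype=float)
    if points.shape[0] < min_points:
        return None
    mean = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)        # 3x3 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # principal axes, ascending
    # Ellipsoid semi-axes: here 2-sigma along each principal direction.
    semi_axes = 2.0 * np.sqrt(np.maximum(eigvals, 0.0))
    return mean, semi_axes, eigvecs

# Toy usage: points spread along the x-axis yield an ellipsoid that is
# elongated in x and flat in z.
pts = np.array([[x, 0.01 * x, 0.0] for x in np.linspace(0.0, 1.0, 20)])
cell = ndt_cell(pts)
```

Transmitting only cells whose Gaussian parameters have changed since the last update is one natural way to realize the incremental map transfer the abstract mentions, since each cell is a compact, fixed-size summary of many raw points.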

Updated: 2020-05-01