Multimodal data fusion framework enhanced robot-assisted minimally invasive surgery
Transactions of the Institute of Measurement and Control (IF 1.7). Pub Date: 2021-01-18. DOI: 10.1177/0142331220984350
Wen Qi, Hang Su, Ke Fan, Ziyang Chen, Jiehao Li, Xuanyi Zhou, Yingbai Hu, Longbin Zhang, Giancarlo Ferrigno, Elena De Momi

The widespread adoption of robot-assisted minimally invasive surgery (RAMIS) promotes human-machine interaction (HMI). Recognizing the surgeon's various behaviors, including hand gestures and whole-body activities, can enhance RAMIS procedures performed with a redundant robot, bridging intelligent robot control and activity recognition strategies in the operating room. In this paper, to improve recognition in dynamic situations, we propose a multimodal data fusion framework that combines multiple sources of information to enhance accuracy. First, a multi-sensor hardware architecture is designed to capture heterogeneous data from several devices, including a depth camera and a smartphone. Furthermore, the robot control mechanism can switch automatically across different surgical tasks. Experimental results demonstrate the efficiency of the multimodal framework for RAMIS by comparing it against a single-sensor system. Implementation on a KUKA LWR4+ in a surgical robot environment indicates that surgical robot systems can work alongside medical staff in the future.
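The abstract does not detail the fusion method, but the described pipeline (combine depth-camera and smartphone data, recognize the activity, then switch the robot control mode) can be sketched as a minimal early-fusion example. The feature dimensions, activity labels, and control-mode names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical feature sizes; the paper does not specify them.
DEPTH_FEATS = 6  # e.g. hand-joint angles estimated from the depth camera
IMU_FEATS = 3    # e.g. smartphone accelerometer axes

def fuse_features(depth_vec: np.ndarray, imu_vec: np.ndarray) -> np.ndarray:
    """Early fusion: z-score each modality separately, then concatenate,
    so that neither sensor's scale dominates the fused vector."""
    d = (depth_vec - depth_vec.mean()) / (depth_vec.std() + 1e-8)
    i = (imu_vec - imu_vec.mean()) / (imu_vec.std() + 1e-8)
    return np.concatenate([d, i])

def select_control_mode(activity: str) -> str:
    """Map a recognized activity to a robot control mode (illustrative
    labels; the actual task-to-mode mapping is defined by the system)."""
    modes = {
        "hand_gesture": "teleoperation",
        "idle": "hold_position",
        "tool_exchange": "compliant",
    }
    return modes.get(activity, "hold_position")

fused = fuse_features(np.zeros(DEPTH_FEATS) + 0.5, np.ones(IMU_FEATS))
print(fused.shape)                       # fused vector feeds a classifier
print(select_control_mode("idle"))       # control mode for the activity
```

A downstream classifier (not shown) would consume the fused vector to produce the activity label; the automatic mode switching then follows from the mapping above.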




Updated: 2021-01-18