Intelligent Intraoperative Haptic-AR Navigation for COVID-19 Lung Biopsy Using Deep Hybrid Model
IEEE Transactions on Industrial Informatics (IF 12.3), Pub Date: 2021-01-19, DOI: 10.1109/tii.2021.3052788
Yonghang Tai¹, Kai Qian², Xiaoqiao Huang¹, Jun Zhang¹, Mian Ahmad Jan³, Zhengtao Yu⁴

A novel intelligent navigation technique for accurate image-guided COVID-19 lung biopsy is presented, which systematically combines augmented reality (AR), customized haptic-enabled surgical tools, and deep neural networks to achieve patient-specific surgical navigation. Clinical data from 341 COVID-19-positive patients and a negative control group of 1598 subjects were collected for model training and evaluation. Biomechanical force data from the experiments were fed into a WPD-CNN-LSTM (WCL) network to learn a patient-specific COVID-19 surgical model, and a ResNet was employed for intraoperative force classification. To heighten immersion and improve the user experience, intraoperative guidance images were combined with the haptic-AR navigational view. Furthermore, a 3-D user interface (3DUI) presenting all requisite surgical details was developed with guaranteed real-time response. Twenty-four thoracic surgeons were invited to objective and subjective experiments for performance evaluation. The root-mean-square error of the proposed WCL model is 0.0128 and its classification accuracy is 97%, demonstrating that the AR-with-deep-learning (DL) intelligent model significantly outperforms existing perception-based navigation techniques. This article presents a novel framework for interventional surgical integration in COVID-19 care and opens new research directions in combining AR, haptic rendering, and deep learning for surgical navigation.
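The WCL pipeline begins with wavelet packet decomposition (WPD) of the raw biomechanical force signal before the CNN-LSTM stages. As a minimal illustrative sketch of that first step only (the abstract does not specify the wavelet basis, decomposition depth, or preprocessing; the Haar basis, the `haar_wpd` helper, and the simulated needle-insertion force trace below are all assumptions), a numpy-only decomposition could look like this:

```python
import numpy as np

def haar_wpd(signal, levels=2):
    """One-dimensional Haar wavelet packet decomposition.

    Recursively splits every sub-band into low-pass (approximation)
    and high-pass (detail) halves, returning 2**levels coefficient
    arrays ordered from lowest to highest frequency band.
    """
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        next_bands = []
        for b in bands:
            if len(b) % 2:                       # pad to even length
                b = np.append(b, b[-1])
            approx = (b[0::2] + b[1::2]) / np.sqrt(2.0)  # low-pass
            detail = (b[0::2] - b[1::2]) / np.sqrt(2.0)  # high-pass
            next_bands.extend([approx, detail])
        bands = next_bands
    return bands

# Hypothetical example: decompose a simulated needle-insertion force
# trace (slow drift plus a vibration component) into 4 sub-bands.
t = np.linspace(0.0, 1.0, 256)
force = 0.5 * t + 0.05 * np.sin(40.0 * np.pi * t)
bands = haar_wpd(force, levels=2)
print(len(bands), [len(b) for b in bands])
```

In a WPD-CNN-LSTM arrangement, the resulting sub-band coefficient arrays would typically be stacked as input channels for the convolutional layers, whose feature maps are then unrolled over time into the LSTM for force regression.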
