Towards Augmented Reality-based Suturing in Monocular Laparoscopic Training
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-01-19, DOI: arxiv-2001.06894
Chandrakanth Jayachandran Preetha, Jonathan Kloss, Fabian Siegfried Wehrtmann, Lalith Sharan, Carolyn Fan, Beat Peter Müller-Stich, Felix Nickel, Sandy Engelhardt

Minimally Invasive Surgery (MIS) techniques have gained rapid popularity among surgeons since they offer significant clinical benefits, including reduced recovery time and diminished post-operative adverse effects. However, conventional endoscopic systems output monocular video, which compromises depth perception, spatial orientation, and field of view. Suturing is one of the most complex tasks performed under these circumstances; a key component of this task is the interplay between the needle holder and the surgical needle. Reliable real-time 3D localization of the needle and instruments could be used to augment the scene with additional parameters that describe their quantitative geometric relation, e.g. the relation between the estimated needle plane, its rotation center, and the instrument. This could contribute towards standardization and training of basic skills and operative techniques, enhance overall surgical performance, and reduce the risk of complications. This paper proposes an Augmented Reality environment with quantitative and qualitative visual representations to enhance the outcomes of laparoscopic training performed on a silicone pad. It is enabled by a multi-task supervised deep neural network that performs multi-class segmentation and depth map prediction. The scarcity of labels was overcome by creating a virtual environment that resembles the surgical training scenario and generates dense depth maps and segmentation maps. The proposed convolutional neural network was tested on real surgical training scenarios and shown to be robust to occlusion of the needle. The network achieves a Dice score of 0.67 for surgical needle segmentation, 0.81 for needle holder instrument segmentation, and a mean absolute error of 6.5 mm for depth estimation.
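To make the described setup concrete, the following is a minimal PyTorch sketch of a multi-task network of the kind the abstract outlines: a shared convolutional encoder with one head for multi-class segmentation (background, needle, needle holder) and one head for dense depth regression, evaluated with a per-class Dice score and a mean absolute depth error. This is not the authors' implementation; the layer sizes, class label ids, equal loss weighting, and the MultiTaskNet/dice_score names are illustrative assumptions.

# Minimal multi-task sketch (assumed architecture, not the paper's backbone).
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared encoder (illustrative sizes).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Segmentation head: per-pixel class logits.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )
        # Depth head: per-pixel depth (here treated as millimetres).
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.depth_head(feats)


def dice_score(pred_labels: torch.Tensor, target: torch.Tensor, cls: int) -> torch.Tensor:
    # Dice overlap for a single class, as reported for needle and instrument.
    pred = (pred_labels == cls).float()
    gt = (target == cls).float()
    inter = (pred * gt).sum()
    return (2 * inter + 1e-6) / (pred.sum() + gt.sum() + 1e-6)


# Toy forward pass on random tensors standing in for rendered training data.
net = MultiTaskNet()
images = torch.rand(2, 3, 128, 128)
seg_gt = torch.randint(0, 3, (2, 128, 128))
depth_gt = torch.rand(2, 1, 128, 128) * 100.0  # depth ground truth in mm

seg_logits, depth_pred = net(images)
# Joint multi-task loss; equal weighting of the two terms is an assumption.
loss = nn.functional.cross_entropy(seg_logits, seg_gt) + nn.functional.l1_loss(depth_pred, depth_gt)

pred_labels = seg_logits.argmax(dim=1)
needle_dice = dice_score(pred_labels, seg_gt, cls=1)   # class id 1 = needle (assumed)
depth_mae = (depth_pred - depth_gt).abs().mean()       # mean absolute depth error in mm
print(float(loss), float(needle_dice), float(depth_mae))

In practice the synthetic depth and segmentation maps rendered from the virtual training environment would take the place of the random tensors above, which is how the abstract's label-scarcity problem is addressed.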

Updated: 2020-01-22