Data-driven Holistic Framework for Automated Laparoscope Optimal View Control with Learning-based Depth Perception
arXiv - CS - Robotics Pub Date : 2020-11-23 , DOI: arxiv-2011.11241
Bin Li, Bo Lu, Yiang Lu, Qi Dou, Yun-Hui Liu

Laparoscopic Field-of-View (FOV) control is one of the most fundamental components of Minimally Invasive Surgery (MIS). However, the traditional manual-holding paradigm easily fatigues surgical assistants, and miscommunication between surgeon and assistant can further prevent the assistant from providing a high-quality FOV. To address this problem, we present a data-driven framework for automated laparoscopic optimal FOV control. We first learn, offline, a motion strategy of the laparoscope relative to the surgeon's hand-held surgical tool from our in-house surgical videos, from which we develop our control domain knowledge and an optimal view generator. To adjust the laparoscope online, we adopt a learning-based method to segment the two-dimensional (2D) position of the surgical tool, then combine this result with the dense depth estimates produced by our novel unsupervised RoboDepth model to obtain the tool's scale-aware depth from monocular camera feedback alone; the resulting real-time 3D position is fused into our control loop. To eliminate the FOV misorientation caused by the Remote Center of Motion (RCM) constraint when moving the laparoscope, we propose a novel distortion constraint based on an affine map that minimizes visual warping, and we embed a null-space controller into the framework to optimize all error terms in a unified and decoupled manner. Experiments with a Universal Robot (UR) arm and Karl Storz laparoscope/instruments demonstrate the feasibility of our domain-knowledge- and learning-enabled framework for automated camera control.
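The scale-aware depth step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a binary tool segmentation mask, an up-to-scale depth map from a monocular model, and a single pixel with known metric depth (e.g., from the trocar geometry) used to resolve the monocular scale ambiguity. All function and variable names here are hypothetical.

```python
import numpy as np

def recover_tool_position(depth_rel, mask, K, ref_depth_m, ref_pixel):
    """Recover a metric 3D tool-tip position from an up-to-scale depth map.

    depth_rel   : (H, W) relative depth from a monocular model (scale-ambiguous)
    mask        : (H, W) boolean segmentation mask of the surgical tool
    K           : (3, 3) camera intrinsic matrix
    ref_depth_m : known metric depth (meters) at ref_pixel, fixes the scale
    ref_pixel   : (u, v) pixel coordinates with known metric depth
    """
    # Resolve the monocular scale ambiguity with one metric reference.
    u0, v0 = ref_pixel
    scale = ref_depth_m / depth_rel[v0, u0]
    depth_m = depth_rel * scale

    # Take the tool tip as the lowest masked pixel (a toy heuristic;
    # any tip-localization rule could be substituted here).
    vs, us = np.nonzero(mask)
    i = np.argmax(vs)
    u, v = us[i], vs[i]

    # Back-project the tip pixel into metric camera coordinates.
    z = depth_m[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])
```

The recovered 3D point would then feed the visual-servoing loop in place of a stereo or external tracker measurement.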
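The "unified and decoupled" optimization of multiple error terms follows the spirit of standard null-space redundancy resolution, q_dot = J⁺ x_dot + (I − J⁺J) z, where a secondary objective is projected into the null space of the primary task Jacobian so it cannot disturb the primary task. A generic sketch of that formula, not the paper's specific controller:

```python
import numpy as np

def null_space_control(J, x_dot_primary, z_secondary):
    """Standard redundancy resolution for a kinematically redundant arm.

    J             : (m, n) task Jacobian of the primary task, n > m
    x_dot_primary : (m,) desired primary task-space velocity
    z_secondary   : (n,) joint-space velocity for a secondary objective
    Returns the (n,) joint velocity: the secondary term is projected
    into the null space of J, so it leaves the primary task untouched.
    """
    J_pinv = np.linalg.pinv(J)             # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J    # null-space projector of J
    return J_pinv @ x_dot_primary + N @ z_secondary
```

Because N maps any vector into the kernel of J, the product J @ (N @ z) is zero, which is the decoupling property the abstract refers to: the secondary error term (e.g., the distortion constraint) is optimized without perturbing the primary FOV-tracking error.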

Last updated: 2020-11-25