Object extraction via deep learning-based marker-free tracking framework of surgical instruments for laparoscope-holder robots.
International Journal of Computer Assisted Radiology and Surgery (IF 3) Pub Date: 2020-06-24, DOI: 10.1007/s11548-020-02214-y
Jiayi Zhang, Xin Gao

Purpose

The surgical instrument tracking framework, especially a marker-free one, is key to the visual servoing used to achieve active control of laparoscope-holder robots. This paper presents a marker-free surgical instrument tracking framework based on object extraction via deep learning (DL).

Methods

The joint of the surgical instrument was defined as the tracking point. A DL segmentation model was trained to extract the end-effector and shaft portions of the surgical instrument in real time. The extracted object was converted into a distance image by the Euclidean Distance Transform. The point with the maximal pixel value in each portion was then defined as that portion's central point, and the intersection of the line connecting the two central points with the plane joining the two portions was taken as the tracking point. Finally, the object was rapidly extracted with a masking method and the tracking point was located frame by frame in a laparoscopic video, achieving tracking of the surgical instrument. The proposed object-extraction-based marker-free tracking framework was compared with a DL-based marker-free tracking-by-detection framework.

Results

Across seven in vivo laparoscopic videos, the mean tracking success rate was 100%. The mean tracking accuracy was (3.9 ± 2.4, 4.0 ± 2.5) pixels, measured in the u and v coordinates of a frame, and the mean tracking speed was 15 fps. Compared with the reported mean tracking accuracy of a DL-based marker-free tracking-by-detection framework, the mean tracking accuracy of the proposed framework was improved by 37% and 23%, respectively.

Conclusion

Accurate and fast marker-free tracking of surgical instruments was achieved in in vivo laparoscopic videos using the proposed DL-based object-extraction tracking framework. This work provides practical guidance for applying laparoscope-holder robots in laparoscopic surgery.



