Object extraction via deep learning-based marker-free tracking framework of surgical instruments for laparoscope-holder robots

  • Original Article
  • Published:
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Surgical instrument tracking, particularly marker-free tracking, is key to the visual servoing used to achieve active control of laparoscope-holder robots. This paper presents a marker-free surgical instrument tracking framework based on object extraction via deep learning (DL).

Methods

The joint of the surgical instrument was defined as the tracking point. Using DL, a segmentation model was trained to extract the end-effector and shaft portions of the surgical instrument in real time. The extracted object was transformed into a distance image by the Euclidean distance transformation. The point with the maximal pixel value in each portion was then defined as that portion's central point, and the tracking point was determined as the intersection of the line connecting the two central points with the plane where the two portions join. Finally, the object was rapidly extracted using a masking method, and the tracking point was located frame by frame in a laparoscopic video to track the surgical instrument. The proposed object-extraction-based marker-free tracking framework was compared with a DL-based marker-free tracking-by-detection framework.
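For concreteness, the sketch below illustrates the joint-localization step under stated assumptions: binary masks for the end-effector and shaft are already available from the segmentation model, the Euclidean distance transform is computed with OpenCV, and the inter-portion boundary is found by walking along the line between the two central points. The function names and the boundary-walk heuristic are illustrative, not the authors' implementation.

```python
# Minimal sketch of tracking-point localization from two binary masks.
# Assumption: `effector_mask` and `shaft_mask` are uint8 arrays (0/255)
# produced by the segmentation model; all names here are hypothetical.
import cv2
import numpy as np

def central_point(mask):
    """Return the (u, v) pixel with maximal Euclidean distance to the
    background, i.e. the brightest pixel of the distance image."""
    dist = cv2.distanceTransform((mask > 0).astype(np.uint8), cv2.DIST_L2, 5)
    v, u = np.unravel_index(np.argmax(dist), dist.shape)  # (row, col)
    return int(u), int(v)

def tracking_point(effector_mask, shaft_mask):
    """Approximate the instrument joint as the point where the line
    connecting the two central points crosses from the end-effector
    portion into the shaft portion."""
    u0, v0 = central_point(effector_mask)
    u1, v1 = central_point(shaft_mask)
    steps = max(abs(u1 - u0), abs(v1 - v0)) + 1
    for t in np.linspace(0.0, 1.0, steps):
        u = int(round(u0 + t * (u1 - u0)))
        v = int(round(v0 + t * (v1 - v0)))
        if effector_mask[v, u] == 0 and shaft_mask[v, u] > 0:
            return u, v  # first pixel past the inter-portion boundary
    return u1, v1  # fallback: shaft central point
```

In the same spirit, the masking step could be realized by cropping each new frame to a region of interest around the previous frame's tracking point before segmentation, which is one plausible way to keep per-frame extraction fast.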

Results

In experiments on seven in vivo laparoscopic videos, the mean tracking success rate was 100%, the mean tracking accuracy was (3.9 ± 2.4, 4.0 ± 2.5) pixels in the u and v coordinates of a frame, and the mean tracking speed was 15 fps. Compared to the reported mean tracking accuracy of a DL-based marker-free tracking-by-detection framework, the mean tracking accuracy of the proposed framework improved by 37% in u and 23% in v.
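As a small illustration of how such per-axis figures can be computed, the sketch below takes per-frame predicted and ground-truth tracking points; the (N, 2) array layout is an assumption, not the authors' evaluation code.

```python
# Hedged sketch: per-axis tracking error in (u, v) pixel coordinates.
import numpy as np

def per_axis_accuracy(pred, gt):
    """pred, gt: (N, 2) arrays of per-frame (u, v) tracking points.
    Returns the mean and standard deviation of the absolute error along
    each axis, matching the (mean_u ± std_u, mean_v ± std_v) convention."""
    err = np.abs(np.asarray(pred, float) - np.asarray(gt, float))
    return err.mean(axis=0), err.std(axis=0)
```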

Conclusion

Using the proposed object extraction via DL-based marker-free tracking framework, marker-free surgical instruments could be tracked accurately and quickly in in vivo laparoscopic videos. This work provides useful guidance for applying laparoscope-holder robots in laparoscopic surgery.



Acknowledgements

This study was funded by the Science and Technology Plan Projects of Tianjin (No. 19YDYGHZ00030), Science and Technology Plan Projects of Jiangsu (No. BE2019665), Chinese Academy of Sciences-Iranian Vice Presidency for Science and Technology Silk Road Science Fund (No. GJHZ1857), and the Science and Technology Plan Projects of Jiangsu (No. BE2017671).

Author information

Corresponding author

Correspondence to Xin Gao.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

This article does not contain patient data.

Consent for publication

All authors consented to publication.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Zhang, J., Gao, X. Object extraction via deep learning-based marker-free tracking framework of surgical instruments for laparoscope-holder robots. Int J CARS 15, 1335–1345 (2020). https://doi.org/10.1007/s11548-020-02214-y

