
Estimating 3-dimensional liver motion using deep learning and 2-dimensional ultrasound images

  • Short communication

International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

The main purpose of this study is to construct a system that tracks the tumor position during radiofrequency ablation (RFA) treatment. Existing tumor tracking systems are designed to track a tumor in a two-dimensional (2D) ultrasound (US) image; as a result, they cannot accommodate the three-dimensional (3D) motion of the organs, and the ablation area may be lost. In this study, we propose a method for estimating the 3D movement of the liver as a preliminary system for tumor tracking. Additionally, in current 3D movement estimation systems, the motion of structures other than the liver during RFA can reduce the visibility of the tumor in US images; therefore, we also aim to improve the estimation of the 3D movement of the liver by improving the liver segmentation. We propose a novel approach that estimates the relative 6-axial movement (x, y, z, roll, pitch, and yaw) between the liver and the US probe in order to estimate the overall movement of the liver.
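
To make the 6-axial formulation concrete, the sketch below (not taken from the paper) shows one way per-frame relative displacements between the liver and the US probe could be accumulated into an overall liver pose. It assumes Python with NumPy and SciPy; the composition order, Euler-angle convention, and units are our own assumptions.

    # Illustrative sketch: accumulate per-frame 6-axial relative displacements
    # (x, y, z, roll, pitch, yaw) into an overall liver pose. The conventions
    # and units here are assumptions, not taken from the paper.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def accumulate_motion(frame_deltas):
        """frame_deltas: iterable of (dx, dy, dz, droll, dpitch, dyaw) per frame;
        translations in mm and rotations in degrees (assumed units)."""
        position = np.zeros(3)
        orientation = R.identity()
        for dx, dy, dz, droll, dpitch, dyaw in frame_deltas:
            delta_rot = R.from_euler("xyz", [droll, dpitch, dyaw], degrees=True)
            # Apply the incremental translation in the current orientation,
            # then compose the incremental rotation.
            position = position + orientation.apply([dx, dy, dz])
            orientation = orientation * delta_rot
        return position, orientation.as_euler("xyz", degrees=True)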

Method

We used a convolutional neural network (CNN) to estimate the 3D displacement from 2D US images. In addition, to improve the accuracy of the estimation, we introduced a segmentation map of the liver region as an additional input to the regression network. Specifically, we improved the extraction accuracy of the liver region by using a bi-directional convolutional LSTM U-Net with densely connected convolutions (BCDU-Net).
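
As a rough illustration of the kind of regression network described above, the following PyTorch-style sketch stacks two consecutive US frames together with a liver segmentation map (e.g., produced by BCDU-Net) as input channels and regresses the six displacement components. The channel layout, layer sizes, and fusion scheme are our assumptions, not the authors' exact architecture.

    # Minimal sketch of a CNN that regresses per-frame 6-axial displacement
    # from two consecutive US frames plus a liver segmentation map.
    import torch
    import torch.nn as nn

    class LiverMotionRegressor(nn.Module):
        def __init__(self, in_channels=3, num_outputs=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.regressor = nn.Linear(128, num_outputs)

        def forward(self, prev_frame, curr_frame, liver_mask):
            # Each input: (B, 1, H, W); the mask is assumed to come from BCDU-Net.
            x = torch.cat([prev_frame, curr_frame, liver_mask], dim=1)
            x = self.features(x).flatten(1)  # (B, 128)
            return self.regressor(x)         # (B, 6): x, y, z, roll, pitch, yaw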

Results

By using BCDU-Net, the accuracy of the segmentation was dramatically improved, and as a result, the accuracy of the movement estimation was also improved. The mean absolute error for the out-of-plane direction was 0.0645 mm/frame.
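
For reference, the sketch below shows how the reported metric could be computed, i.e., the mean absolute error of the per-frame out-of-plane displacement in mm/frame; the example values are made up.

    # Mean absolute error of one displacement component over all frames.
    import numpy as np

    def mean_absolute_error(predicted, ground_truth):
        predicted = np.asarray(predicted, dtype=float)
        ground_truth = np.asarray(ground_truth, dtype=float)
        return np.mean(np.abs(predicted - ground_truth))

    # Example: per-frame out-of-plane displacements in mm/frame (illustrative).
    mae = mean_absolute_error([0.05, -0.12, 0.08], [0.06, -0.10, 0.02])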

Conclusion

The experimental results demonstrate the effectiveness of our novel method, which identifies the movement of the liver using BCDU-Net and a CNN. Precise segmentation of the liver by BCDU-Net also contributes to enhancing the performance of the liver movement estimation.




Author information

Corresponding author

Correspondence to Norihiro Koizumi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical standards

This study did not involve human participants, and no animal experiments were performed by any of the authors.

Informed consent

This article does not contain patient data.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yagasaki, S., Koizumi, N., Nishiyama, Y. et al. Estimating 3-dimensional liver motion using deep learning and 2-dimensional ultrasound images. Int J CARS 15, 1989–1995 (2020). https://doi.org/10.1007/s11548-020-02265-1

