Deep learning-based liver segmentation for fusion-guided intervention.
International Journal of Computer Assisted Radiology and Surgery (IF 3), Pub Date: 2020-04-21, DOI: 10.1007/s11548-020-02147-6
Xi Fang, Sheng Xu, Bradford J. Wood, Pingkun Yan

PURPOSE Tumors often have different imaging properties, and no single imaging modality can visualize all tumors. In CT-guided needle placement procedures, image fusion (e.g., with MRI, PET, or contrast-enhanced CT) is often used for image guidance when the tumor is not directly visible on CT. To achieve image fusion, the interventional CT image needs to be registered to an imaging modality in which the tumor is visible. However, multi-modality image registration is a very challenging problem. In this work, we develop a deep learning-based liver segmentation algorithm and use the segmented surfaces to assist image fusion, with application to guided needle placement procedures for diagnosing and treating liver tumors.

METHODS The developed segmentation method integrates multi-scale input and multi-scale output features in a single network for context information abstraction. The automatic segmentation results are used to register an interventional CT with a diagnostic image. The registration helps visualize the target and guide the interventional operation.

RESULTS The developed segmentation method is highly accurate, achieving a Dice score of 96.1% on 70 CT scans provided by the LiTS challenge. The segmentation algorithm was then applied to a set of images acquired for liver tumor intervention for surface-based image fusion. The effectiveness of the proposed methods is demonstrated through a number of clinical cases.

CONCLUSION Our study shows that deep learning-based image segmentation can produce useful results that help image fusion for interventional guidance. Such a technique may lead to a number of other potential applications.
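
The METHODS section describes a network that combines multi-scale input and multi-scale output features for context abstraction. As a rough illustration of that idea only, the sketch below feeds downsampled copies of the input into deeper encoder stages and attaches a prediction head to more than one decoder scale; it is a minimal U-Net-style stand-in with assumed layer sizes, not the authors' actual architecture, and the Dice helper simply mirrors the overlap metric reported in the RESULTS.

```python
# Minimal sketch (NOT the paper's architecture): multi-scale inputs injected
# into the encoder, multi-scale (deeply supervised) outputs from the decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MultiScaleSegNet(nn.Module):
    def __init__(self, in_ch=1, base=16, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        # downsampled copies of the input image are concatenated at deeper stages
        self.enc2 = conv_block(base + in_ch, base * 2)
        self.enc3 = conv_block(base * 2 + in_ch, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # one prediction head per decoder scale (multi-scale outputs)
        self.out1 = nn.Conv2d(base, n_classes, 1)
        self.out2 = nn.Conv2d(base * 2, n_classes, 1)

    def forward(self, x):
        x_half, x_quarter = F.avg_pool2d(x, 2), F.avg_pool2d(x, 4)
        e1 = self.enc1(x)
        e2 = self.enc2(torch.cat([F.max_pool2d(e1, 2), x_half], dim=1))
        e3 = self.enc3(torch.cat([F.max_pool2d(e2, 2), x_quarter], dim=1))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # full-resolution logits plus an auxiliary half-resolution output
        return self.out1(d1), self.out2(d2)

def dice_score(pred, target, eps=1e-6):
    """Dice = 2|A∩B| / (|A| + |B|), the overlap metric quoted in the abstract."""
    pred, target = pred.float().flatten(), target.float().flatten()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    net = MultiScaleSegNet()
    logits_full, logits_half = net(torch.randn(1, 1, 128, 128))
    print(logits_full.shape, logits_half.shape)  # (1,1,128,128), (1,1,64,64)
```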
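
The abstract also mentions that the segmented surfaces drive surface-based image fusion between the interventional CT and the diagnostic image. One common way to align two segmented organ surfaces is rigid point-to-point ICP; the snippet below is a self-contained illustration of that general step on a toy point cloud, assuming surface points have already been extracted from the segmentation masks. It is not the registration method used in the paper.

```python
# Illustrative rigid point-to-point ICP on surface point clouds (toy example).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, n_iter=50, tol=1e-6):
    """Align source surface points to target surface points with point-to-point ICP."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    moved, prev_err = source.copy(), np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(moved)               # closest-point correspondences
        R, t = best_rigid_transform(moved, target[idx])
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err

if __name__ == "__main__":
    # toy data: a random "surface" and a slightly rotated, translated copy of it
    rng = np.random.default_rng(0)
    target = rng.normal(size=(500, 3))
    theta = np.deg2rad(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    source = target @ R_true.T + np.array([0.5, 0.0, 0.0])
    R, t, err = icp(source, target)
    print("mean surface distance after ICP:", err)
```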
