A Weak and Semi-supervised Segmentation Method for Prostate Cancer in TRUS Images.
Journal of Digital Imaging ( IF 4.4 ) Pub Date : 2020-02-10 , DOI: 10.1007/s10278-020-00323-3
Seokmin Han 1 , Sung Il Hwang 2 , Hak Jong Lee 3, 4

The purpose of this research is to exploit a weak and semi-supervised deep learning framework to segment prostate cancer in TRUS images, alleviating the time-consuming work of radiologists in drawing lesion boundaries and enabling the neural network to be trained on data that lack complete annotations. A histologically proven benchmark dataset of 102 case images was built, and 22 images were randomly selected for evaluation. A portion of the training images were strongly supervised, i.e., annotated pixel by pixel, and a deep learning neural network was trained on them. The remaining training images, which carry only weak supervision (just the location of the lesion), were then fed to the trained network to produce intermediate pixelwise labels for those images. Next, the neural network was retrained on all training images using the original labels together with the intermediate labels, and the training images were fed to the retrained network to produce refined labels. For each weakly supervised image, the distances from the centers of mass of the refined label and the intermediate label to the weak-supervision location were compared, and the closer one replaced the previous label; this constitutes the label update. After the label update, the test set images were fed to the retrained network for evaluation. The proposed method shows better results with weak and semi-supervised data than a method using only the small portion of strongly supervised data, although the improvement may not be as large as when a fully strongly supervised dataset is used. In terms of mean intersection over union (mIoU), the proposed method reached about 0.6 when the ratio of strongly supervised data was 40%, about a 2% decrease in performance compared to the 100% strongly supervised case. The proposed method thus appears able to help alleviate the time-consuming work of radiologists in drawing lesion boundaries, and to allow training neural networks on data that do not have complete annotations.
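The label-update rule and the mIoU metric described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`center_of_mass`, `update_label`, `miou`) and the use of Euclidean distance between a mask's center of mass and the weak-supervision point are assumptions based on the abstract's description.

```python
import numpy as np


def center_of_mass(mask: np.ndarray) -> np.ndarray:
    """Center of mass (row, col) of a binary segmentation mask."""
    coords = np.argwhere(mask > 0)
    return coords.mean(axis=0)


def update_label(intermediate: np.ndarray,
                 refined: np.ndarray,
                 weak_point: np.ndarray) -> np.ndarray:
    """Keep whichever candidate label's center of mass lies closer
    to the weakly supervised lesion location (the label update)."""
    d_int = np.linalg.norm(center_of_mass(intermediate) - weak_point)
    d_ref = np.linalg.norm(center_of_mass(refined) - weak_point)
    return refined if d_ref <= d_int else intermediate


def miou(preds, gts) -> float:
    """Mean intersection-over-union over pairs of binary masks."""
    ious = []
    for p, g in zip(preds, gts):
        inter = np.logical_and(p > 0, g > 0).sum()
        union = np.logical_or(p > 0, g > 0).sum()
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))
```

For example, if the refined label's lesion region is centered on the weakly annotated point while the intermediate label's region lies elsewhere, `update_label` returns the refined label, so the training set gradually accumulates pseudo-labels consistent with the weak supervision.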

Updated: 2020-03-07