Using deep learning to generate synthetic B-mode musculoskeletal ultrasound images.
Computer Methods and Programs in Biomedicine (IF 6.1). Pub Date: 2020-06-04. DOI: 10.1016/j.cmpb.2020.105583
Neil J Cronin, Taija Finni, Olivier Seynnes

Background and objective

Deep learning approaches are common in image processing, but often rely on supervised learning, which requires a large volume of training images, usually accompanied by hand-crafted labels. As labelled data are often not available, it would be desirable to develop methods that allow such data to be compiled automatically. In this study, we used a Generative Adversarial Network (GAN) to generate realistic B-mode musculoskeletal ultrasound images, and tested the suitability of two automated labelling approaches.

Methods

We used a model comprising two GANs, each trained to transfer an image from one domain to another. The two inputs were a set of 100 longitudinal images of the gastrocnemius medialis muscle, and a set of 100 synthetic segmentation masks, each featuring two aponeuroses and a random number of 'fascicles'. The model output a set of synthetic ultrasound images and an automated segmentation of each real input image. This automated segmentation process was the first of the two labelling approaches we assessed. The second approach involved synthesising ultrasound images and then feeding them into an ImageJ/Fiji-based automated algorithm, to determine whether it could detect the aponeuroses and muscle fascicles.
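The synthetic masks described above (two aponeuroses plus a random number of fascicle lines) can be sketched in a few lines of code. The geometry below (image size, line thickness, label values, and angle range) is illustrative only and is not taken from the paper:

```python
import numpy as np

def synthetic_mask(height=256, width=512, n_fascicles=None, rng=None):
    """Build a toy segmentation mask: two horizontal 'aponeuroses' (label 1)
    plus a random number of oblique 'fascicle' lines (label 2) between them.
    All parameters are illustrative, not the authors' actual settings."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((height, width), dtype=np.uint8)

    top, bottom = height // 5, 4 * height // 5
    mask[top:top + 3, :] = 1        # superficial aponeurosis
    mask[bottom:bottom + 3, :] = 1  # deep aponeurosis

    if n_fascicles is None:
        n_fascicles = int(rng.integers(5, 15))
    for _ in range(n_fascicles):
        x0 = int(rng.integers(0, width))
        angle = np.deg2rad(rng.uniform(10, 30))  # pennation-like angle
        # trace a straight line from the deep to the superficial aponeurosis
        for y in range(top + 3, bottom):
            x = int(x0 + (bottom - y) / np.tan(angle))
            if 0 <= x < width:
                mask[y, x] = 2
    return mask

mask = synthetic_mask(rng=np.random.default_rng(0))
```

Such masks would then serve as one of the two image domains for the GAN pair, with the real ultrasound images as the other.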

Results

Histogram distributions were similar between real and synthetic images, but synthetic images displayed less variation between samples and a narrower range. Mean entropy values were statistically similar (real: 6.97, synthetic: 7.03; p = 0.218), but the range was much narrower for synthetic images (6.91–7.11 versus 6.30–7.62). When comparing GAN-derived and manually labelled segmentations, intersection-over-union values (denoting the degree of overlap between aponeurosis labels) varied between 0.0280 and 0.612 (mean ± SD: 0.312 ± 0.159), and pennation angles were higher for the GAN-derived segmentations (25.1° vs. 19.3°; p < 0.001). For the second segmentation approach, the algorithm generally performed equally well on synthetic and real images, yielding pennation angles within the physiological range (13.8°–20°).
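The two image-level metrics reported above follow standard definitions; a minimal sketch (not the authors' code) of intersection-over-union for binary label masks and Shannon entropy of an 8-bit grey-level histogram:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union between two binary masks (1.0 if both empty)."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# Toy check: identical masks overlap fully; disjoint masks not at all.
m1 = np.zeros((4, 4), dtype=int); m1[:2] = 1
m2 = np.zeros((4, 4), dtype=int); m2[2:] = 1
```

An IoU near 1 indicates close agreement between GAN-derived and manual aponeurosis labels; the mean of 0.312 reported above therefore reflects only modest overlap.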

Conclusions

We used a GAN to generate realistic B-mode ultrasound images, and extracted muscle architectural parameters from these images automatically. This approach could enable generation of large labelled datasets for image segmentation tasks, and may also be useful for data sharing. Automatic generation and labelling of ultrasound images minimises user input and overcomes several limitations associated with manual analysis.



