Fine-Tuning U-Net for Ultrasound Image Segmentation: Different Layers, Different Outcomes
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (IF 3.0), Pub Date: 2020-08-07, DOI: 10.1109/tuffc.2020.3015081
Mina Amiri, Rupert Brooks, Hassan Rivaz

One way of addressing the scarcity and cost of data in deep learning for medical applications is transfer learning: fine-tuning a network that has already been trained on a large data set. The common practice in transfer learning is to keep the shallow layers unchanged and to modify the deeper layers according to the new data set. This approach may not work with a U-Net when moving from a different domain to ultrasound (US) images, because of their drastically different appearance. In this study, we investigated the effect of fine-tuning different sets of layers of a pretrained U-Net for US image segmentation. Two different schemes were analyzed, based on two different definitions of shallow and deep layers. We studied simulated US images as well as two human US data sets, and also included a chest X-ray data set. The results showed that choosing which layers to fine-tune is a critical task. In particular, fine-tuning the last layers of the network, which is the common practice for classification networks, is often the worst strategy. It may therefore be more appropriate to fine-tune the shallow layers rather than the deep layers in US image segmentation with a U-Net. Shallow layers learn lower-level features, which are critical in automatic segmentation of medical images. Even when a large US data set is available, we observed that fine-tuning the shallow layers is faster than fine-tuning the whole network.
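As a concrete illustration of the strategy described above, the following minimal PyTorch sketch freezes the deep (bottleneck) layers of a toy U-Net-style network and fine-tunes only the shallow encoder/decoder blocks. The toy architecture, layer names, and checkpoint path are hypothetical and are not taken from the paper; the sketch only shows how a chosen subset of layers can be excluded from the optimizer.

# Minimal sketch (PyTorch assumed; not the authors' code): selectively
# fine-tuning layers of a pretrained U-Net-style network. The toy
# architecture, layer names, and checkpoint path below are hypothetical.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net used only to illustrate layer freezing."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)            # shallow encoder block
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)     # deep layers
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)           # shallow decoder block
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        b = self.bottleneck(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(b), e1], dim=1))  # skip connection
        return self.head(d1)

model = TinyUNet()
# model.load_state_dict(torch.load("pretrained_unet.pt"))  # hypothetical checkpoint

# Freeze the deep (bottleneck) layers; fine-tune only the shallow blocks.
for name, param in model.named_parameters():
    param.requires_grad = not name.startswith("bottleneck")

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

# One illustrative training step on dummy ultrasound-sized data.
images = torch.randn(2, 1, 64, 64)                   # fake US images
masks = torch.randint(0, 2, (2, 1, 64, 64)).float()  # fake segmentation masks
loss = nn.functional.binary_cross_entropy_with_logits(model(images), masks)
loss.backward()
optimizer.step()

Inverting the test (freezing the shallow blocks and leaving the bottleneck trainable) would correspond to the conventional practice of keeping shallow layers unchanged, which the study reports is often the worse choice for US segmentation.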
