Transfer learning in medical image segmentation: New insights from analysis of the dynamics of model parameters and learned representations
Artificial Intelligence in Medicine (IF 7.5), Pub Date: 2021-04-23, DOI: 10.1016/j.artmed.2021.102078
Davood Karimi, Simon K Warfield, Ali Gholipour

We present a critical assessment of the role of transfer learning in training fully convolutional networks (FCNs) for medical image segmentation. We first show that although transfer learning reduces the training time on the target task, improvements in segmentation accuracy are highly task/data-dependent. Large improvements are observed only when the segmentation task is more challenging and the target training data is smaller. We shed light on these observations by investigating the impact of transfer learning on the evolution of model parameters and learned representations. We observe that convolutional filters change little during training and still look random at convergence. We further show that quite accurate FCNs can be built by freezing the encoder section of the network at random values and only training the decoder section. At least for medical image segmentation, this finding challenges the common belief that the encoder section needs to learn data/task-specific representations. We examine the evolution of FCN representations to gain a deeper insight into the effects of transfer learning on the training dynamics. Our analysis shows that although FCNs trained via transfer learning learn different representations than FCNs trained with random initialization, the variability among FCNs trained via transfer learning can be as high as that among FCNs trained with random initialization. Moreover, feature reuse is not restricted to the early encoder layers; rather, it can be more significant in deeper layers. These findings offer new insights and suggest alternative ways of training FCNs for medical image segmentation.
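One of the findings above lends itself to a quick prototype: training only the decoder of a segmentation FCN while the encoder stays frozen at its random initialization. The sketch below assumes a small PyTorch U-Net-style network; the module names, channel sizes, and toy tensors are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): freeze a randomly initialized encoder
# of a small U-Net-style FCN and train only the decoder.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Encoder (contracting path)
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        # Decoder (expanding path)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

model = TinyUNet()

# Freeze the encoder at its random initialization; only decoder weights are trained.
for name, p in model.named_parameters():
    if name.startswith(("enc1", "enc2")):
        p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for images/masks.
images = torch.randn(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64))
optimizer.zero_grad()
loss = criterion(model(images), masks)
loss.backward()
optimizer.step()
```

Only parameters with requires_grad=True are handed to the optimizer, so the encoder keeps its random weights throughout training. Note that BatchNorm running statistics in the frozen encoder still update in train mode; calling .eval() on those modules would freeze them completely.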



Updated: 2021-04-30