Multi-task pre-training of deep neural networks for digital pathology
IEEE Journal of Biomedical and Health Informatics (IF 7.7), Pub Date: 2021-02-01, DOI: 10.1109/jbhi.2020.2992878
Romain Mormont , Pierre Geurts , Raphael Maree

In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-sized datasets have been released by the community over the years, whereas there is no large-scale dataset similar to ImageNet in the domain. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model, together with a robust evaluation and selection protocol to assess our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and compensates for the lack of specificity of ImageNet features, as both pre-training sources then yield comparable performance.
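To illustrate the general idea of such multi-task pre-training, the sketch below shows a shared convolutional backbone with one classification head per task, trained by sampling batches from each task. This is a minimal illustration, not the authors' exact architecture or code: the backbone choice, the task names, class counts, and optimizer settings are placeholder assumptions.

```python
# Minimal multi-task pre-training sketch (PyTorch): shared backbone + per-task heads.
# Task names and class counts below are hypothetical stand-ins for the 22 pathology tasks.
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskNet(nn.Module):
    """Shared feature extractor plus one linear classification head per task."""

    def __init__(self, task_num_classes):
        super().__init__()
        backbone = models.resnet50(weights=None)  # could also start from ImageNet weights
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()               # keep only the pooled feature vector
        self.backbone = backbone
        self.heads = nn.ModuleDict(
            {task: nn.Linear(feat_dim, n) for task, n in task_num_classes.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))


# Hypothetical pool of tasks.
tasks = {"tissue_a": 4, "tissue_b": 9, "cells_c": 2}
model = MultiTaskNet(tasks)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One training step per sampled task: each batch comes from a single task,
# so only that task's head (plus the shared backbone) receives gradients.
for task, n_classes in tasks.items():
    images = torch.randn(8, 3, 224, 224)          # stand-in for a batch of pathology tiles
    labels = torch.randint(0, n_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images, task), labels)
    loss.backward()
    optimizer.step()

# After pre-training, model.backbone can be reused on a new target task,
# either frozen as a feature extractor or fine-tuned end to end.
```

The key design point is that all tasks share the same feature extractor, so the backbone learned this way can then be transferred to a new pathology task in the same two ways evaluated in the paper: as a frozen feature extractor or as the initialization for fine-tuning.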

Updated: 2021-02-01