Deep convolutional neural networks with transfer learning for automated brain image classification
Machine Vision and Applications ( IF 2.4 ) Pub Date : 2020-03-27 , DOI: 10.1007/s00138-020-01069-2
Taranjit Kaur , Tapan Kumar Gandhi

MR brain image classification has been an active research domain over the last decade. Many techniques have been devised for MR image classification, ranging from classical machine learning to deep learning methods such as convolutional neural networks (CNNs). Classical machine learning methods need handcrafted features to perform classification. CNNs, on the other hand, classify by extracting features directly from raw images through tuning the parameters of the convolutional and pooling layers. The features a CNN extracts depend strongly on the size of the training dataset; when the training dataset is small, a CNN trained from scratch tends to overfit after several epochs. Deep CNNs (DCNNs) with transfer learning have therefore emerged as an alternative. The prime objective of the present work is to explore the capability of different pre-trained DCNN models with transfer learning for pathological brain image classification. Eight pre-trained DCNNs, namely AlexNet, ResNet50, GoogLeNet, VGG-16, ResNet101, VGG-19, Inceptionv3, and InceptionResNetV2, were used in the present study. The last few layers of each model were replaced to accommodate the new image categories of our application. The models were extensively evaluated on data from the Harvard, clinical, and benchmark Figshare repositories. Each dataset was partitioned 60:40 into training and testing sets. Validation on the test sets reveals that the pre-trained AlexNet with transfer learning achieved the best performance in less time than the other models considered. The proposed method is more generic, as it needs no handcrafted features, and achieves accuracies of 100%, 94%, and 95.92% on the three datasets, respectively. Other performance measures used in the study include sensitivity, specificity, precision, false positive rate, error, F-score, Matthews correlation coefficient, and area under the curve. The results are compared with both traditional machine learning methods and existing CNN-based approaches.
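The abstract reports several measures derived from a binary confusion matrix. As a point of reference, the sketch below shows how these quantities are conventionally computed; the function name and the illustrative counts are ours for demonstration, not values from the paper.

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification measures from confusion-matrix counts
    (pathological taken as the positive class). Illustrative only."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)        # true positive rate (recall)
    specificity = tn / (tn + fp)        # true negative rate
    precision = tp / (tp + fp)
    fpr = fp / (fp + tn)                # false positive rate = 1 - specificity
    error = 1 - accuracy
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    # Matthews correlation coefficient: balanced even for skewed classes
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "accuracy": accuracy, "sensitivity": sensitivity,
        "specificity": specificity, "precision": precision,
        "fpr": fpr, "error": error, "f_score": f_score, "mcc": mcc,
    }

# Hypothetical counts for a 100-image test split
metrics = classification_metrics(tp=40, tn=45, fp=5, fn=10)
```

With these counts, accuracy is 0.85, sensitivity 0.80, and specificity 0.90; area under the curve, also reported in the paper, additionally requires continuous classifier scores rather than hard counts.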

Updated: 2020-03-27