Can a Machine Learn from Radiologists' Visual Search Behaviour and Their Interpretation of Mammograms - a Deep-Learning Study.
Journal of Digital Imaging (IF 2.9) Pub Date: 2019-10-01, DOI: 10.1007/s10278-018-00174-z
Suneeta Mall 1, Patrick C Brennan 1, Claudia Mello-Thoms 1, 2

Visual search behaviour and the interpretation of mammograms have been studied for errors in breast cancer detection. We aim to ascertain whether machine-learning models can learn about radiologists' attentional level and their interpretation of mammograms. We seek to determine whether these models are practical and feasible for use in training and teaching programmes. Eight radiologists of varying experience levels in reading mammograms reviewed 120 two-view digital mammography cases (59 cancers). Their search behaviour and decisions were captured using a head-mounted eye-tracking device and software that allowed them to record their decisions. This information from the radiologists was used to build an ensembled machine-learning model using a top-down hierarchical deep convolutional neural network. Separately, a model to determine the type of missed cancer (search, perception or decision-making) was also built. Variants of these models using different convolutional networks, with and without transfer learning, were also analysed and compared. Our ensembled deep-learning network architecture can be trained to learn about radiologists' attentional level and decisions. High accuracy (95%, p value ≅ 0 [better than a dumb/random model]) and high agreement between true and predicted values (kappa = 0.83) can be achieved in such modelling. Transfer learning techniques improve the performance of this model by < 10%. We also show that spatial convolutional neural networks are insufficient for determining the type of missed cancers. Ensembled hierarchical deep convolutional machine-learning models are plausible for modelling radiologists' attentional level and their interpretation of mammograms. However, deep convolutional networks fail to characterise the type of false-negative decisions.
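The abstract does not give implementation details, so the following is only a minimal sketch, in PyTorch, of the general kind of ensembled convolutional classifier described: several small spatial CNNs applied to image patches around recorded fixation points, with their class probabilities averaged into one decision. The class names, patch size, number of ensemble members, number of output classes and single-channel grayscale input are all illustrative assumptions, not the authors' architecture; the top-down hierarchical structure and the transfer-learning variants are not reproduced here.

import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    # One ensemble member: a small spatial CNN over an image patch extracted
    # around a recorded fixation point (patch extraction itself is omitted).
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 1, H, W) grayscale fixation patches
        return self.classifier(self.features(x).flatten(1))

class FixationEnsemble(nn.Module):
    # Ensemble by averaging the softmax outputs of several patch-level CNNs.
    def __init__(self, n_members=3, n_classes=4):
        super().__init__()
        self.members = nn.ModuleList([PatchCNN(n_classes) for _ in range(n_members)])

    def forward(self, x):
        probs = [m(x).softmax(dim=1) for m in self.members]
        return torch.stack(probs).mean(dim=0)  # averaged class probabilities

model = FixationEnsemble()
patches = torch.randn(8, 1, 64, 64)  # 8 hypothetical 64x64 fixation patches
print(model(patches).shape)          # torch.Size([8, 4])

Agreement between the radiologists' recorded decisions and such a model's predictions could then be scored with accuracy and Cohen's kappa (for example, sklearn.metrics.accuracy_score and sklearn.metrics.cohen_kappa_score), which is the kind of evaluation the reported 95% accuracy and kappa = 0.83 correspond to.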

Updated: 2019-11-01