Visual Explanation for Identification of the Brain Bases for Developmental Dyslexia on fMRI Data
Frontiers in Computational Neuroscience ( IF 2.1 ) Pub Date : 2021-06-08 , DOI: 10.3389/fncom.2021.594659
Laura Tomaz Da Silva 1 , Nathalia Bianchini Esper 2, 3 , Duncan D Ruiz 1 , Felipe Meneguzzi 1 , Augusto Buchweitz 3, 4
Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches that identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help us understand the differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous data sets of a few dozen participants. More recently, larger brain imaging data sets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Indeed, deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches allows for incremental improvements in the classification performance on larger functional brain imaging data sets, but still lacks diagnostic insight into the underlying brain mechanisms associated with the disorders; a related challenge is providing more clinically relevant explanations from the neural features that inform classification. Methods: We target this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using such techniques, we are able to provide meaningful images for expert-backed insight into the condition being classified. We address this challenge using a dataset that includes children diagnosed with developmental dyslexia and typically reading children.
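The abstract does not specify which two visualization techniques were applied to the convolutional layers; a widely used member of this family is class activation mapping (Grad-CAM), in which channel-wise gradient averages weight the feature maps of the last convolutional layer. The following is a minimal numpy sketch of that idea for a 3D CNN layer; the function name, array shapes, and toy inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

def grad_cam_3d(feature_maps, gradients):
    """Grad-CAM-style heatmap for one 3D conv layer (illustrative sketch).

    feature_maps: (K, D, H, W) activations of the chosen conv layer
    gradients:    (K, D, H, W) gradients of the class score w.r.t. those activations
    Returns a (D, H, W) saliency volume normalized to [0, 1].
    """
    # Channel weights alpha_k: global average of the gradients per channel
    weights = gradients.mean(axis=(1, 2, 3))           # shape (K,)
    # Weighted sum of feature maps over channels, then ReLU
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (D, H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize so the volume can be overlaid on an anatomical image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random activations and gradients
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((8, 4, 4, 4))
grads = rng.standard_normal((8, 4, 4, 4))
heatmap = grad_cam_3d(fmaps, grads)
print(heatmap.shape)  # (4, 4, 4)
```

In an fMRI classifier, such a volume would be resampled to the native image resolution and overlaid on the anatomical scan, which is what makes the learned features inspectable by domain experts.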
Results: Our results show accurate classification of developmental dyslexia (94.8%) from the brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group, and brain regions associated with strategic control and attention processes for the typical reader group). Conclusion: Our visual explanations of deep learning models turn the accurate yet opaque conclusions from the models into evidence for the condition being studied.

Updated: 2021-06-08