Interpreting Brain Biomarkers: Challenges and solutions in interpreting machine learning-based predictive neuroimaging
IEEE Signal Processing Magazine (IF 9.4) Pub Date: 2022-06-28, DOI: 10.1109/msp.2022.3155951
Rongtao Jiang 1 , Choong-Wan Woo 2, 3, 4 , Shile Qi 5 , Jing Wu 6 , Jing Sui 7
Predictive modeling of neuroimaging data (predictive neuroimaging) for evaluating individual differences in various behavioral phenotypes and clinical outcomes is of growing interest. However, the field is experiencing challenges regarding the interpretability of results. Approaches to defining the specific contribution of functional connections, regions, and networks in prediction models are urgently needed, potentially helping to explore underlying mechanisms. In this article, we systematically review methods and applications for interpreting brain signatures derived from predictive neuroimaging, based on a survey of 326 research articles. Strengths, limitations, and suitable conditions for major interpretation strategies are also deliberated. An in-depth discussion of common issues in the existing literature and corresponding recommendations to address these pitfalls are provided. We highly recommend exhaustive validation of the reliability and interpretability of biomarkers across multiple data sets and contexts, which could translate technical advances in neuroimaging into concrete improvements in precision medicine.

Updated: 2024-08-26