Integrating Multimodal and Longitudinal Neuroimaging Data with Multi-Source Network Representation Learning
Neuroinformatics (IF 3) Pub Date: 2021-05-12, DOI: 10.1007/s12021-021-09523-w
Wen Zhang, B. Blair Braden, Gustavo Miranda, Kai Shu, Suhang Wang, Huan Liu, Yalin Wang

Uncovering the complex network of the brain is of great interest to the field of neuroimaging. By mining rich neuroimaging datasets, scientists try to unveil the fundamental biological mechanisms of the human brain. However, the neuroimaging data collected for constructing brain networks are generally costly, and thus extracting useful information from a limited sample of brain networks is demanding. Currently, there are two common trends in neuroimaging data collection that could be exploited to gain more information: 1) multimodal data, and 2) longitudinal data. These two types of data have been shown to provide complementary information. Nonetheless, it is challenging to learn brain network representations that simultaneously capture network properties from multimodal and longitudinal datasets. Here we propose a general fusion framework for multi-source learning of brain networks: multimodal brain network fusion with longitudinal coupling (MMLC). Our framework considers three layers of information: cross-sectional similarity, multimodal coupling, and longitudinal consistency. Specifically, we jointly factorize the multimodal networks and construct a rotation-based constraint to couple network variance across time. We also adopt a consensus factorization as the group-consistent pattern. Using two publicly available brain imaging datasets, we demonstrate that MMLC may better predict psychometric scores than several state-of-the-art brain network representation learning algorithms. Additionally, the significant brain regions it identifies are consistent with previous literature. By integrating longitudinal and multimodal neuroimaging data, our approach may boost statistical power and shed new light on neuroimaging network biomarkers for future psychometric prediction research.
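To make the three layers of information concrete, the objective below is a minimal illustrative sketch of the kind of joint factorization the abstract describes; it is an assumption written for exposition, not the authors' exact formulation. Here A^{(m)}_{i,t} denotes the modality-m network of subject i at time t, U_{i,t} is a node-embedding factor shared across modalities, R_{i,t} is an orthogonal (rotation) matrix linking consecutive time points, U^{*} is a consensus factor standing in for the group-consistent pattern, and the weights α, β are placeholders; all of these symbols are introduced here for illustration only.

\[
\min_{\{U_{i,t}\},\,\{\Lambda^{(m)}_{i,t}\},\,\{R_{i,t}\},\,U^{*}}
  \sum_{i,t,m} \bigl\| A^{(m)}_{i,t} - U_{i,t}\,\Lambda^{(m)}_{i,t}\,U_{i,t}^{\top} \bigr\|_F^2
  + \alpha \sum_{i,t} \bigl\| U_{i,t+1} - U_{i,t}\,R_{i,t} \bigr\|_F^2
  + \beta \sum_{i,t} \bigl\| U_{i,t} - U^{*} \bigr\|_F^2,
\qquad \text{s.t. } R_{i,t}^{\top} R_{i,t} = I.
\]

In this reading, the first term realizes the multimodal coupling (all modalities of a subject at a given time point share one factor), the second term is a rotation-based longitudinal constraint on how factors evolve across time, and the third term ties each individual factor to a group consensus, in the spirit of the cross-sectional similarity layer.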



