Coupled support tensor machine classification for multimodal neuroimaging data
Statistical Analysis and Data Mining (IF 1.3). Pub Date: 2022-05-23. DOI: 10.1002/sam.11587
Peide Li 1, Seyyid Emre Sofuoglu 2, Selin Aviyente 2, Tapabrata Maiti 3
Multimodal data arise in applications where information about the same phenomenon is acquired from multiple sensors and across different imaging modalities. Learning from multimodal data is of great interest in machine learning and statistics research, as it offers the possibility of capturing complementary information across modalities. Multimodal modeling helps to explain the interdependence between heterogeneous data sources, discovers new insights that may not be available from a single modality, and improves decision-making. Recently, coupled matrix–tensor factorization has been introduced for multimodal data fusion to jointly estimate latent factors and identify complex interdependence among them. However, most of the prior work on coupled matrix–tensor factorization focuses on unsupervised learning, and there is little work on supervised learning using the jointly estimated latent factors. This paper considers the multimodal tensor data classification problem. A coupled support tensor machine (C-STM), built upon the latent factors jointly estimated by advanced coupled matrix–tensor factorization, is proposed. C-STM combines individual and shared latent factors with multiple kernels and estimates a maximal-margin classifier for coupled matrix–tensor data. The classification risk of C-STM is shown to converge to the optimal Bayes risk, making it a statistically consistent rule. C-STM is validated through simulation studies as well as a simultaneous analysis of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data. The empirical evidence shows that C-STM can utilize information from multiple sources and provide better classification performance than traditional single-mode classifiers.
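To make the fusion step concrete, the sketch below shows a generic coupled matrix–tensor factorization fitted by alternating least squares: a third-order tensor X (e.g., subjects × channels × time) and a matrix Y (subjects × features) share the subject-mode factor A, which is the kind of jointly estimated latent representation a C-STM-style classifier would consume. This is a minimal illustration under our own conventions, not the authors' algorithm; all function names and dimensions here are hypothetical.

```python
import numpy as np

def kr(U, V):
    """Column-wise Khatri-Rao product; row (i * V.shape[0] + j) equals U[i] * V[j]."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def cmtf_als(X, Y, rank, n_iter=200, seed=0):
    """Coupled matrix-tensor factorization by alternating least squares (illustrative).

    X: (I, J, K) tensor and Y: (I, P) matrix coupled along the first (subject) mode.
    Model: X[i, j, k] ~ sum_r A[i, r] B[j, r] C[k, r]   and   Y ~ A @ M.T
    Returns the factor matrices A (shared), B, C, M.
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    P = Y.shape[1]
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    M = rng.standard_normal((P, rank))
    X0 = X.reshape(I, -1)                        # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)     # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)     # mode-3 unfolding
    for _ in range(n_iter):
        # The shared factor A is updated against BOTH the tensor and the matrix:
        # minimize ||X0 - A Z^T||^2 + ||Y - A M^T||^2 with Z = kr(B, C).
        G = (B.T @ B) * (C.T @ C) + M.T @ M
        A = np.linalg.solve(G, (X0 @ kr(B, C) + Y @ M).T).T
        B = np.linalg.solve((A.T @ A) * (C.T @ C), (X1 @ kr(A, C)).T).T
        C = np.linalg.solve((A.T @ A) * (B.T @ B), (X2 @ kr(A, B)).T).T
        M = np.linalg.solve(A.T @ A, A.T @ Y).T
    return A, B, C, M
```

In a C-STM-like pipeline, the per-modality factors (B, C, M) and the shared factor A would then be fed into kernels and combined into a single multiple-kernel maximal-margin classifier; each alternating update above solves its least-squares subproblem exactly, so the coupled objective is non-increasing over iterations.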
