Adaptive partial graph learning and fusion for incomplete multi-view clustering
International Journal of Intelligent Systems (IF 5.0), Pub Date: 2021-09-14, DOI: 10.1002/int.22655
Xiao Zheng 1, Xinwang Liu 1, Jiajia Chen 2, En Zhu 1

Most existing multi-view clustering methods assume that the different feature views of the data are fully observed. However, in many practical applications only a portion of the data features can be obtained. The presence of incomplete feature views greatly degrades the performance of conventional multi-view clustering methods. Recently proposed incomplete multi-view clustering methods often focus on directly learning a common representation or a consensus affinity graph from the available feature views while ignoring the valuable information hidden in the missing views. In this study, we present a novel incomplete multi-view clustering method via adaptive partial graph learning and fusion (APGLF), which can capture both within-view and cross-view local data structure. Specifically, we use the available data of each view to learn a corresponding view-specific partial graph, in which the within-view local structure is well preserved. We then design a cross-view graph fusion term to learn a consensus complete graph across views, which takes advantage of the complementary information hidden in the view-specific partial graphs learned from the incomplete views. In addition, a rank constraint is imposed on the graph Laplacian matrix of the fused graph to better recover the optimal cluster structure of the original data. APGLF thus integrates within-view partial graph learning, cross-view partial graph fusion, and cluster structure recovery into a unified framework. Experiments on five incomplete multi-view data sets validate the efficacy of APGLF in comparison with eight state-of-the-art methods.
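As a rough illustration only (the abstract does not state the actual objective), methods of this kind in the partial-graph-learning literature are often formulated along the following lines; every symbol below, including the trade-off weights \alpha and \beta, is an assumption rather than the paper's own notation. Let \Omega_v index the samples observed in view v, S^{(v)} be the view-specific partial graph over those samples, A the consensus graph over all n samples, and L_A its Laplacian:

% Illustrative sketch of a rank-constrained partial-graph-fusion objective,
% NOT the exact APGLF formulation; alpha and beta are assumed trade-off weights.
\begin{aligned}
\min_{\{S^{(v)}\},\,A}\quad
  & \sum_{v=1}^{V} \Big( \sum_{i,j \in \Omega_v} \big\lVert x_i^{(v)} - x_j^{(v)} \big\rVert_2^{2}\, s_{ij}^{(v)}
      + \alpha \big\lVert S^{(v)} \big\rVert_F^{2} \Big)
    + \beta \sum_{v=1}^{V} \big\lVert A_{\Omega_v \Omega_v} - S^{(v)} \big\rVert_F^{2} \\
\text{s.t.}\quad
  & S^{(v)} \ge 0,\; S^{(v)}\mathbf{1} = \mathbf{1},\;
    A \ge 0,\; A\mathbf{1} = \mathbf{1},\;
    \operatorname{rank}(L_A) = n - c .
\end{aligned}

Under these assumptions, the first term learns each within-view partial graph so that nearby observed samples receive large weights (preserving local structure), the second term fuses the partial graphs into a complete consensus graph A through the submatrix A_{\Omega_v \Omega_v} that view v actually observes, and the rank constraint on L_A = D_A - (A + A^{\top})/2 forces A to contain exactly c connected components, so the cluster structure can be read off directly. In practice such a rank constraint is usually relaxed via Ky Fan's theorem into a term 2\lambda \operatorname{Tr}(F^{\top} L_A F) with F \in \mathbb{R}^{n \times c}, F^{\top} F = I, which equals twice the sum of the c smallest eigenvalues of L_A.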

Updated: 2021-11-23