Joint representation learning for multi-view subspace clustering
Expert Systems with Applications (IF 7.5). Pub Date: 2020-09-12. DOI: 10.1016/j.eswa.2020.113913
Guang-Yu Zhang, Yu-Ren Zhou, Chang-Dong Wang, Dong Huang, Xiao-Yu He

Multi-view subspace clustering has made remarkable achievements in the field of multi-view learning for high-dimensional data. However, many existing multi-view subspace clustering methods still suffer from two disadvantages. First, most of them recover the subspace structure from either the consistent or the view-specific perspective only. Second, they often fail to take advantage of the high-order information among different views. To alleviate these two issues, this paper proposes a novel multi-view subspace clustering method, which aims to learn the view-specific representation as well as the low-rank tensor representation in a unified framework. In particular, our method learns the view-specific representation from data samples by exploiting the local structure within each view. Meanwhile, we generate the low-rank tensor representation from the view-specific representations to capture the high-order correlation across multiple views. Based on this joint representation learning framework, the proposed method is able to explore the intra-view pairwise information and the inter-view complementary information, so that the underlying data structure can be revealed and the final clustering result can be obtained via subsequent spectral clustering. Furthermore, in the proposed Joint Representation Learning for Multi-view Subspace Clustering (JRL-MSC) method, a unified objective function is formulated, which can be efficiently optimized by the alternating direction method of multipliers. Experimental results on multiple real-world data sets demonstrate that our method outperforms state-of-the-art counterparts.
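The following is a minimal, illustrative sketch of the kind of pipeline the abstract describes (view-specific self-representation, a low-rank constraint across views, then spectral clustering). It is not the authors' JRL-MSC algorithm: the view-specific step here is a simple ridge-regularized self-representation, the low-rank tensor step is approximated by singular value thresholding on the stacked representations rather than a full tensor nuclear norm with ADMM, and all function names and parameters (lambda_reg, tau, n_clusters) are assumptions made for illustration only.

```python
# Illustrative multi-view subspace clustering sketch (not the published JRL-MSC method).
import numpy as np
from sklearn.cluster import SpectralClustering


def view_specific_representation(X, lambda_reg=0.1):
    """Self-representation Z for one view: min ||X - XZ||_F^2 + lambda_reg * ||Z||_F^2."""
    # X has shape (d, n): d features, n samples.
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lambda_reg * np.eye(n), G)  # closed-form ridge solution
    np.fill_diagonal(Z, 0.0)  # discourage trivial self-reconstruction
    return Z


def lowrank_tensor_shrinkage(Z_list, tau=1.0):
    """Crude proxy for a low-rank tensor constraint across views: singular value
    thresholding on the matrix whose rows are the flattened per-view representations."""
    n = Z_list[0].shape[0]
    T = np.stack([Z.reshape(-1) for Z in Z_list], axis=0)  # shape (V, n*n)
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    T_lr = (U * s_shrunk) @ Vt
    return [T_lr[v].reshape(n, n) for v in range(T_lr.shape[0])]


def multiview_subspace_clustering(views, n_clusters, lambda_reg=0.1, tau=1.0):
    """views: list of (d_v, n) arrays sharing the same n samples."""
    Z_list = [view_specific_representation(X, lambda_reg) for X in views]
    Z_list = lowrank_tensor_shrinkage(Z_list, tau)
    Z = sum(Z_list) / len(Z_list)
    W = 0.5 * (np.abs(Z) + np.abs(Z).T)  # symmetric non-negative affinity
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(W)
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy views of 60 samples drawn from 3 crude clusters.
    base = np.repeat(np.eye(3), 20, axis=0) + 0.1 * rng.standard_normal((60, 3))
    views = [(base @ rng.standard_normal((3, 10))).T for _ in range(2)]
    print(multiview_subspace_clustering(views, n_clusters=3))
```

In this sketch the affinity matrix built from the averaged representations plays the role of the learned joint representation; the published method instead optimizes the view-specific and tensor representations jointly in one objective solved by ADMM.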



Updated: 2020-10-04