Logarithmic Schatten-p Norm Minimization for Tensorial Multi-view Subspace Clustering
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8) Pub Date: 6-1-2022, DOI: 10.1109/tpami.2022.3179556
Jipeng Guo, Yanfeng Sun, Junbin Gao, Yongli Hu, Baocai Yin

The low-rank tensor can characterize the inner structure of, and explore the high-order correlations among, multi-view representations, and has therefore been widely used in multi-view clustering. Existing approaches adopt the tensor nuclear norm (TNN) as a convex approximation of the non-convex tensor rank function. However, TNN treats the different singular values equally and over-penalizes the main rank components, leading to sub-optimal tensor representations. In this paper, we devise a better surrogate of the tensor rank, namely the tensor logarithmic Schatten-p norm (TLS_pN), whose non-convex and non-linear penalty function fully accounts for the physical differences between singular values. Further, a tensor logarithmic Schatten-p norm minimization (TLS_pNM)-based multi-view subspace clustering (TLS_pNM-MSC) model is proposed. Specifically, the proposed TLS_pNM not only protects the larger singular values, which encode useful structural information, but also removes the smaller ones, which encode redundant information. Thus, the learned tensor representation with a compact low-rank structure explores the complementary information well and accurately characterizes the high-order correlations among the views. The alternating direction method of multipliers (ADMM) is used to solve the non-convex multi-block TLS_pNM-MSC model, in which the challenging TLS_pNM sub-problem is carefully handled. Importantly, convergence of the algorithm is established mathematically by showing that the sequence it generates is a Cauchy sequence and converges to a Karush-Kuhn-Tucker (KKT) point. Experimental results on nine benchmark databases reveal the superiority of the TLS_pNM-MSC model.
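
To illustrate the idea behind such a penalty, the following minimal Python sketch compares a logarithmic Schatten-p style surrogate with the nuclear norm on the singular values of a single matrix. The specific form log(1 + sigma^p / gamma), the parameters p and gamma, and the function names are assumptions drawn from common formulations in the literature, not the paper's exact tensor definition; it only shows why such a surrogate penalizes dominant singular values less heavily than the nuclear norm does.

# Minimal sketch (not the authors' code): a logarithmic Schatten-p style
# surrogate of matrix rank, assuming the common form sum_i log(1 + sigma_i^p / gamma).
import numpy as np

def log_schatten_p(X, p=0.5, gamma=1.0):
    """Illustrative logarithmic Schatten-p surrogate of rank(X)."""
    sigma = np.linalg.svd(X, compute_uv=False)   # singular values of X
    return np.sum(np.log(1.0 + sigma**p / gamma))

def nuclear_norm(X):
    """Convex surrogate (matrix analogue of TNN): sum of singular values."""
    return np.sum(np.linalg.svd(X, compute_uv=False))

# A generically rank-5 matrix: the log surrogate grows slowly for large
# singular values (protecting the main rank components), while the nuclear
# norm grows linearly and so over-penalizes them.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
print("nuclear norm:", nuclear_norm(X))
print("log Schatten-p (p=0.5):", log_schatten_p(X, p=0.5))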

Updated: 2024-08-26