Subspace Learning by $\ell^0$-Induced Sparsity
International Journal of Computer Vision (IF 11.6), Pub Date: 2018-07-17, DOI: 10.1007/s11263-018-1092-4
Yingzhen Yang , Jiashi Feng , Nebojsa Jojic , Jianchao Yang , Thomas S. Huang

Subspace clustering methods partition data that lie in or close to a union of subspaces in accordance with the subspace structure. Methods with a sparsity prior, such as sparse subspace clustering (SSC) (Elhamifar and Vidal, IEEE Trans Pattern Anal Mach Intell 35(11):2765–2781, 2013), where sparsity is induced by the $\ell^1$-norm, have proven effective for subspace clustering. Most such methods require certain assumptions on the subspaces, e.g. independence or disjointness. However, these assumptions are not guaranteed to hold in practice, and they limit the applicability of existing sparse subspace clustering methods. In this paper, we propose $\ell^0$-induced sparse subspace clustering ($\ell^0$-SSC). In contrast to the independence or disjointness assumptions required by most existing sparse subspace clustering methods, we prove that $\ell^0$-SSC guarantees the subspace-sparse representation, a key element in subspace clustering, for arbitrary distinct underlying subspaces almost surely, under only a mild i.i.d. assumption on the data generation. We also present a "no free lunch" theorem showing that, under our general assumptions, obtaining the subspace-sparse representation cannot be much cheaper computationally than solving the corresponding $\ell^0$ sparse representation problem of $\ell^0$-SSC. We develop a novel approximate algorithm, Approximate $\ell^0$-SSC (A$\ell^0$-SSC), which employs proximal gradient descent to obtain a sub-optimal solution to the optimization problem of $\ell^0$-SSC with a theoretical guarantee. The sub-optimal solution is used to build a sparse similarity matrix on which spectral clustering is performed to produce the final clustering results. Extensive experimental results on various data sets demonstrate the superiority of A$\ell^0$-SSC over competing clustering methods.
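The A$\ell^0$-SSC step described above — proximal gradient descent on an $\ell^0$-penalized self-representation objective, followed by building a symmetric similarity matrix — can be sketched as below. This is a minimal illustration, not the authors' implementation: the objective is assumed to be a least-squares fit term $\|x_i - Xc_i\|_2^2$ plus $\lambda\|c_i\|_0$, for which the proximal map is hard thresholding, and the function name, step size, and iteration count are illustrative choices.

```python
import numpy as np

def l0_prox_grad(X, lam=0.1, step=None, iters=200):
    """Sub-optimal l0-regularized self-representation via proximal
    gradient descent (hard thresholding); a sketch of the A-l0-SSC idea.
    X is a (d, n) data matrix whose columns are the data points."""
    d, n = X.shape
    if step is None:
        # 1 / Lipschitz constant of the gradient of ||X - XC||_F^2
        step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)
    C = np.zeros((n, n))
    # proximal map of step * lam * ||c||_0 zeroes entries below this level
    thresh = np.sqrt(2 * lam * step)
    for _ in range(iters):
        G = 2 * X.T @ (X @ C - X)       # gradient of the quadratic fit term
        C = C - step * G                # gradient step
        C[np.abs(C) <= thresh] = 0.0    # hard-thresholding proximal step
        np.fill_diagonal(C, 0.0)        # forbid trivial self-representation
    return C

# build the symmetric sparse similarity matrix used for spectral clustering
X = np.random.randn(5, 20)
C = l0_prox_grad(X)
W = np.abs(C) + np.abs(C).T
```

Spectral clustering is then run on `W` (e.g. with any off-the-shelf implementation that accepts a precomputed affinity matrix) to obtain the final partition.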
Furthermore, we extend $\ell^0$-SSC to semi-supervised learning by performing label propagation on the sparse similarity matrix learned by A$\ell^0$-SSC, and we demonstrate the effectiveness of the resulting semi-supervised learning method, termed $\ell^0$-sparse subspace label propagation ($\ell^0$-SSLP).
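The label-propagation step of $\ell^0$-SSLP can be sketched as follows. The abstract does not specify the propagation rule, so a standard symmetrically normalized iterative scheme (in the style of Zhou et al.) is assumed here purely for illustration; `alpha` and the seed encoding are hypothetical choices.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, iters=100):
    """Propagate labels over a (n, n) similarity matrix W.
    Y is (n, k): one-hot rows for labeled points, zero rows for
    unlabeled ones. Returns the predicted class index per point."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0                     # guard against isolated points
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # D^{-1/2} W D^{-1/2}
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y  # spread labels, anchor to seeds
    return F.argmax(axis=1)

# tiny example: two clusters on a block-diagonal similarity matrix,
# one labeled seed per cluster
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = W[2, 3] = W[3, 2] = 1.0
Y = np.zeros((4, 2)); Y[0, 0] = 1.0; Y[2, 1] = 1.0
labels = label_propagation(W, Y)  # → [0, 0, 1, 1]
```

In $\ell^0$-SSLP the matrix `W` would be the sparse similarity matrix produced by A$\ell^0$-SSC rather than a hand-built one.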

Updated: 2018-07-17