Constrained Clustering With Dissimilarity Propagation-Guided Graph-Laplacian PCA
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2) Pub Date: 2020-08-27, DOI: 10.1109/tnnls.2020.3016397
Yuheng Jia , Junhui Hou , Sam Kwong

In this article, we propose a novel model for constrained clustering, namely, dissimilarity propagation-guided graph-Laplacian principal component analysis (DP-GLPCA). By fully utilizing a limited amount of weakly supervised information in the form of pairwise constraints, the proposed DP-GLPCA captures both the local and global structures of the input samples and exploits these characteristics for excellent clustering. More specifically, we first formulate a convex semisupervised low-dimensional embedding model by incorporating a new dissimilarity regularizer into GLPCA (i.e., an unsupervised dimensionality reduction model), in which both the similarity and dissimilarity between low-dimensional representations are enforced with the constraints to improve their discriminability. An efficient iterative algorithm based on the inexact augmented Lagrange multiplier is designed to solve it with guaranteed global convergence. Furthermore, we innovatively propose to propagate the cannot-link constraints (i.e., dissimilarity) to make the dissimilarity regularizer more informative. The resulting DP model is solved iteratively, and we also prove that it converges to a Karush–Kuhn–Tucker point. Extensive experimental results over nine commonly used benchmark data sets show that the proposed DP-GLPCA produces much higher clustering accuracy than state-of-the-art constrained clustering methods. Besides, the effectiveness and advantage of the proposed DP model are verified experimentally. To the best of our knowledge, this is the first work to investigate DP, in contrast to existing pairwise constraint propagation, which propagates similarity. The code is publicly available at https://github.com/jyh-learning/DP-GLPCA .
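The core idea, a graph-Laplacian PCA embedding augmented with a penalty built from cannot-link constraints, can be illustrated with a minimal sketch. This is an assumed simplification, not the authors' exact formulation: `glpca_with_dissimilarity`, the closed-form eigendecomposition solver, and the simple cannot-link indicator matrix are all hypothetical choices for illustration; the paper instead uses an inexact-ALM solver and a propagated (refined) dissimilarity matrix.

```python
import numpy as np

def glpca_with_dissimilarity(X, W, cannot_links, alpha=1.0, beta=1.0, k=2):
    """Sketch of a GLPCA-style embedding with a dissimilarity penalty.

    X: (d, n) data matrix (columns are samples), assumed centered.
    W: (n, n) symmetric affinity matrix encoding local structure.
    cannot_links: list of (i, j) pairs whose embeddings should be dissimilar.
    Returns Q: (n, k) low-dimensional embedding (one row per sample).
    """
    n = X.shape[1]
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian of the affinity graph
    C = np.zeros((n, n))                    # cannot-link indicator (hypothetical form)
    for i, j in cannot_links:
        C[i, j] = C[j, i] = 1.0
    # GLPCA-style objective: min tr(Q^T (-X^T X + alpha * L) Q), s.t. Q^T Q = I.
    # The extra +beta*C term penalizes inner products q_i^T q_j on cannot-link
    # pairs, pushing those embeddings apart (a stand-in for the paper's
    # propagated dissimilarity regularizer).
    G = -X.T @ X + alpha * L + beta * C
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, :k]                   # k eigenvectors of smallest eigenvalues
```

The orthonormality constraint is handled for free here because the eigenvectors of a symmetric matrix are orthonormal; the resulting rows of `Q` can then be fed to k-means for the final clustering step.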

Updated: 2020-08-27