Simultaneous Global and Local Graph Structure Preserving for Multiple Kernel Clustering.
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2), Pub Date: 2020-05-14, DOI: 10.1109/tnnls.2020.2991366
Zhenwen Ren, Quansen Sun

Multiple kernel learning (MKL) is generally recognized to perform better than single kernel learning (SKL) in handling nonlinear clustering problems, largely because MKL avoids the need to select and tune a single predefined kernel. By integrating the self-expression learning framework, graph-based MKL subspace clustering has recently attracted considerable attention. However, previous MKL methods largely ignore the graph structure of the data in kernel space, which is a key concept in constructing an affinity graph for spectral clustering. To address this problem, a novel MKL method, structure-preserving multiple kernel clustering (SPMKC), is proposed in this article. Specifically, SPMKC proposes a new kernel affine weight strategy to learn an optimal consensus kernel from a predefined kernel pool, automatically assigning a suitable weight to each base kernel. Furthermore, SPMKC introduces a kernel group self-expressiveness term and a kernel adaptive local structure learning term to preserve the global and local structure of the input data in kernel space, respectively, rather than in the original space. In addition, an efficient algorithm is proposed to solve the resulting unified objective function; it iteratively updates the consensus kernel and the affinity graph so that the two collaboratively promote each other toward the optimum. Experiments on both image and text clustering demonstrate that SPMKC outperforms state-of-the-art MKL clustering methods in terms of both clustering performance and computational cost.
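To make the overall pipeline concrete, below is a minimal sketch of a multiple-kernel self-expressive clustering loop of the kind the abstract describes: a consensus kernel is formed as a weighted combination of base kernels from a predefined pool, an affinity graph is learned via kernel self-expression, and the two are updated alternately. This is not the authors' SPMKC objective; it substitutes a simple ridge-regularized self-expression term and an inverse-reconstruction-error reweighting rule for SPMKC's kernel affine weight strategy and adaptive local structure term, and all function names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel


def multiple_kernel_self_expressive_clustering(X, n_clusters, lam=1.0, n_iter=10):
    """Hedged sketch: alternate between (a) solving a ridge-regularized
    kernel self-expression problem K ~= K C under a fixed consensus
    kernel, and (b) reweighting base kernels by reconstruction quality.
    Illustrative only; not the SPMKC update rules from the paper."""
    # Predefined kernel pool (base kernels), as in the paper's setup.
    kernels = [
        rbf_kernel(X, gamma=0.5),
        rbf_kernel(X, gamma=2.0),
        polynomial_kernel(X, degree=2),
        linear_kernel(X),
    ]
    m, n = len(kernels), X.shape[0]
    w = np.full(m, 1.0 / m)  # kernel weights, start uniform

    for _ in range(n_iter):
        # (a) Consensus kernel = weighted sum of base kernels.
        K = sum(wi * Ki for wi, Ki in zip(w, kernels))
        # Solve min_C ||K - K C||_F^2 + lam ||C||_F^2 in closed form:
        # C = (K^T K + lam I)^{-1} K^T K.
        C = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ K)
        # (b) Base kernels that C reconstructs well get larger weight.
        errs = np.array([np.linalg.norm(Ki - Ki @ C) for Ki in kernels])
        w = 1.0 / (errs + 1e-12)
        w /= w.sum()

    # Symmetrize into an affinity graph, then spectral clustering.
    A = 0.5 * (np.abs(C) + np.abs(C.T))
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed"
    ).fit_predict(A)
    return labels, w
```

The alternating structure mirrors the abstract's claim that the consensus kernel and the affinity graph are updated iteratively so that each promotes the other: a better consensus kernel yields a cleaner self-expression coefficient matrix, and the resulting reconstruction errors in turn refine the kernel weights.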

Updated: 2020-05-14