Differentiable Bi-Sparse Multi-View Co-Clustering
IEEE Transactions on Signal Processing (IF 4.6). Pub Date: 2021-08-06. DOI: 10.1109/tsp.2021.3101979
Shide Du , Zhanghui Liu , Zhaoliang Chen , Wenyuan Yang , Shiping Wang

Deep multi-view clustering utilizes neural networks to extract the latent complementarity and consistency information among multi-view features, yielding a consistent representation that improves clustering performance. Although a multitude of deep multi-view clustering approaches have been proposed, most lack theoretical interpretability even as they deliver strong performance. In this paper, we propose an effective differentiable network with alternating iterative optimization for multi-view co-clustering, termed differentiable bi-sparse multi-view co-clustering (DBMC), along with an extension named elevated DBMC (EDBMC). The proposed methods are transformed into equivalent deep networks based on the constructed objective loss functions, so they combine the strong interpretability of classical machine learning methods with the superior performance of deep networks. Moreover, DBMC and EDBMC learn a joint and consistent collaborative representation from multi-source features and guarantee sparsity in both the multi-view feature space and the single-view sample space. Meanwhile, they can be converted into deep differentiable network frameworks with block-wise iterative training. Correspondingly, we design two three-step iterative differentiable networks to solve the resulting optimization problems with theoretically guaranteed convergence. Extensive experiments on six multi-view benchmark datasets demonstrate that the proposed frameworks outperform other state-of-the-art multi-view clustering methods.
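The core idea described above, unrolling an alternating sparse optimization into a trainable network with block-wise iterative steps, can be illustrated with a short PyTorch sketch. This is not the authors' DBMC/EDBMC implementation: the module name `UnrolledCoClusterNet`, the helper `soft_threshold`, the three-step block structure, and all hyperparameters below are illustrative assumptions showing, in general terms, how a "project views, fuse, sparsify" iteration becomes a differentiable network.

```python
# Minimal sketch of deep unrolling for multi-view representation learning.
# Assumed names and structure; not the paper's actual architecture.
import torch
import torch.nn as nn


def soft_threshold(x, lam):
    """Proximal operator of the l1 norm; a standard way to impose sparsity."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)


class UnrolledCoClusterNet(nn.Module):
    """Unrolls a fixed number of alternating update blocks into a network.

    Each block performs three differentiable steps:
      1) project every view into a shared latent space,
      2) fuse the view projections with the current joint code,
      3) sparsify the joint code with a learned soft-threshold.
    """

    def __init__(self, view_dims, latent_dim, n_blocks=3):
        super().__init__()
        self.n_blocks = n_blocks
        # One linear projection per view, shared across unrolled blocks.
        self.view_proj = nn.ModuleList(
            [nn.Linear(d, latent_dim) for d in view_dims]
        )
        self.fuse = nn.Linear(latent_dim, latent_dim)
        # One learnable sparsity threshold per unrolled block.
        self.thresholds = nn.Parameter(0.1 * torch.ones(n_blocks))

    def forward(self, views):
        # views: list of tensors, each of shape (n_samples, view_dims[v])
        z = 0.0
        for k in range(self.n_blocks):
            # Step 1: project each view into the shared latent space.
            projected = [proj(x) for proj, x in zip(self.view_proj, views)]
            # Step 2: fuse the views with the current joint representation.
            z = self.fuse(torch.stack(projected).mean(dim=0) + z)
            # Step 3: impose sparsity on the joint representation.
            z = soft_threshold(z, self.thresholds[k])
        return z


# Toy usage: two views of 100 samples with 20- and 30-dimensional features.
views = [torch.randn(100, 20), torch.randn(100, 30)]
net = UnrolledCoClusterNet(view_dims=[20, 30], latent_dim=16)
joint_repr = net(views)  # shape (100, 16), sparsified joint representation
```

In such unrolled schemes the learned per-block thresholds play the role of the sparsity parameters a classical solver would hand-tune, which is what gives the resulting network its interpretability while the weights are still trained end to end.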

Updated: 2021-08-06