Transfer-Learning-Based Gaussian Mixture Model for Distributed Clustering.
IEEE Transactions on Cybernetics (IF 11.8), Pub Date: 2023-10-17, DOI: 10.1109/tcyb.2022.3177242
Rongrong Wang, Shiyuan Han, Jin Zhou, Yuehui Chen, Lin Wang, Tao Du, Ke Ji, Ya-ou Zhao, Kun Zhang

Distributed clustering based on the Gaussian mixture model (GMM) has exhibited excellent clustering capabilities in peer-to-peer (P2P) networks. However, existing distributed GMM clustering algorithms require many iterations and considerable communication overhead to reach consensus. In addition, the lack of a closed-form update for the GMM parameters leads to imprecise clustering accuracy. To address these issues, a general transfer distributed GMM clustering framework is developed by utilizing the transfer learning technique to improve clustering performance and accelerate clustering convergence. In this framework, each node is treated as both a source domain and a target domain, so the nodes can learn from one another to complete the clustering task in distributed P2P networks. Based on this framework, a transfer distributed expectation-maximization algorithm with a fixed learning rate is first presented for data clustering. An improved version is then designed to obtain stable clustering accuracy, in which an adaptive transfer learning strategy adjusts the learning rate automatically instead of using a fixed value. To demonstrate the extensibility of the proposed framework, a representative GMM clustering method, the entropy-type classification maximum-likelihood algorithm, is further extended to its transfer distributed counterpart. Experimental results verify the effectiveness of the presented algorithms in comparison with existing GMM clustering approaches.
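As a rough illustration of the idea described above (the abstract does not give the paper's exact update rules), the sketch below runs one conventional EM step on a node's local data and then blends the node's (target-domain) parameters with those received from a neighbor (source-domain) using a learning rate eta. The blending rule, the symbol eta, and all function names here are assumptions made for illustration only, not the authors' algorithm; an adaptive variant could, for instance, reduce eta whenever the blended parameters lower the local log-likelihood.

# Illustrative sketch only: the blending step, eta, and helper names are
# assumptions, not the published transfer distributed EM algorithm.
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs):
    """One standard EM iteration for a GMM on one node's local data X."""
    n, d = X.shape
    k = len(weights)
    # E-step: responsibility of each component for each local sample.
    resp = np.zeros((n, k))
    for j in range(k):
        resp[:, j] = weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture parameters from the responsibilities.
    nk = resp.sum(axis=0)
    weights = nk / n
    means = (resp.T @ X) / nk[:, None]
    covs = np.array([
        ((resp[:, j, None] * (X - means[j])).T @ (X - means[j])) / nk[j]
        + 1e-6 * np.eye(d)  # small ridge to keep covariances well conditioned
        for j in range(k)
    ])
    return weights, means, covs

def transfer_blend(local, neighbor, eta):
    """Blend local parameters with a neighbor's parameters.

    eta in [0, 1] plays the role of the learning rate: fixed in the basic
    variant, adapted (e.g., from the local log-likelihood change) in the
    improved one. Convex combination keeps weights normalized and covariances
    positive semidefinite.
    """
    w_l, m_l, c_l = local
    w_n, m_n, c_n = neighbor
    w = (1 - eta) * w_l + eta * w_n
    return w / w.sum(), (1 - eta) * m_l + eta * m_n, (1 - eta) * c_l + eta * c_n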

Updated: 2022-06-10