Multiview learning with variational mixtures of Gaussian processes
Knowledge-Based Systems (IF 7.2), Pub Date: 2020-05-05, DOI: 10.1016/j.knosys.2020.105990
Shiliang Sun, Jiachun Wang

Gaussian processes (GPs) are powerful Bayesian nonparametric tools widely used in probabilistic modeling, and mixtures of GPs (MGPs) were later introduced to make data modeling more flexible. However, MGPs are not directly applicable to multiview learning. To improve the modeling ability of MGPs, in this paper we propose a new multiview learning framework for MGPs and instantiate it for classification. We make the divergence between views as small as possible while ensuring that the posterior probability of each view is as large as possible. Specifically, we regularize the posterior distribution of the latent variables so that the posterior distributions of the latent functions are consistent across views. Since the model cannot be solved analytically, we also present variational inference and optimization algorithms for the classification model. Experimental results on multiple real-world datasets show that the proposed method outperforms the original MGP model and several state-of-the-art multiview learning methods, which indicates the effectiveness of the proposed multiview learning framework for MGPs.
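The cross-view consistency idea in the abstract can be illustrated with a minimal sketch. The Python code below is not the authors' algorithm: it uses exact GP regression posteriors as stand-ins for the per-view variational posteriors over the latent functions (the paper works with variational mixtures of GPs and a classification likelihood), and it computes a symmetric KL penalty measuring how much two views' posteriors disagree; in the proposed framework a term of this kind would be weighted into the variational objective. The names rbf_kernel, gp_posterior and view_consistency are illustrative, not from the paper.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, noise=0.1, lengthscale=1.0):
    # Gaussian posterior over the latent function values at the inputs X;
    # used here only as a stand-in for one view's variational posterior q_v(f).
    K = rbf_kernel(X, X, lengthscale=lengthscale)
    Ky_inv = np.linalg.inv(K + noise * np.eye(len(X)))
    mean = K @ Ky_inv @ y
    cov = K - K @ Ky_inv @ K + 1e-6 * np.eye(len(X))  # jitter for stability
    return mean, cov

def gaussian_kl(m0, S0, m1, S1):
    # KL( N(m0, S0) || N(m1, S1) ) between multivariate Gaussians.
    n = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - n + logdet1 - logdet0)

def view_consistency(posteriors):
    # Symmetric KL summed over all pairs of views: small values mean the
    # views agree about the latent function, which is what the multiview
    # consistency regularizer encourages.
    total = 0.0
    for i in range(len(posteriors)):
        for j in range(i + 1, len(posteriors)):
            mi, Si = posteriors[i]
            mj, Sj = posteriors[j]
            total += gaussian_kl(mi, Si, mj, Sj) + gaussian_kl(mj, Sj, mi, Si)
    return total

# Toy two-view data set sharing one target signal (regression for simplicity,
# whereas the paper instantiates the framework for classification).
rng = np.random.default_rng(0)
X_view1 = np.linspace(0.0, 3.0, 30)[:, None]
X_view2 = X_view1 + 0.05 * rng.standard_normal(X_view1.shape)
y = np.sin(X_view1).ravel()

q1 = gp_posterior(X_view1, y)
q2 = gp_posterior(X_view2, y)
print(f"cross-view consistency penalty: {view_consistency([q1, q2]):.3f}")
# In the proposed framework a penalty of this kind is traded off against each
# view's data fit inside the variational lower bound during optimization.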


