Joint consensus and diversity for multi-view semi-supervised classification
Machine Learning (IF 7.5), Pub Date: 2019-10-07, DOI: 10.1007/s10994-019-05844-9
Wenzhang Zhuge, Chenping Hou, Shaoliang Peng, Dongyun Yi

As data can be acquired in an ever-increasing number of ways, multi-view data is becoming more and more available. Considering the high cost of labeling data in many machine learning applications, we focus on the multi-view semi-supervised classification problem. To address this problem, we propose a method called joint consensus and diversity for multi-view semi-supervised classification, which simultaneously learns a common label matrix for all training samples and a set of view-specific classifiers. A novel classification loss, the probabilistic square hinge loss, is proposed; it avoids the incorrect-penalization problem and weights the contribution of each training sample according to its uncertainty. The power mean is introduced to combine the losses of the different views; it contains the auto-weighted strategy as a special case and distinguishes the importance of the various views. To solve the resulting non-convex minimization problem, we prove that its solution can be obtained from another problem with introduced variables, and we develop an efficient optimization algorithm with proven convergence. Extensive experimental results on nine datasets demonstrate the effectiveness of the proposed algorithm.
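To make the two ingredients named above concrete, the following is a minimal sketch, not the authors' implementation: it assumes the probabilistic square hinge loss weights the standard square hinge term by each sample's soft label probability (so uncertain samples contribute less), and it combines per-view losses with a power mean whose exponent p controls how strongly the hardest view dominates. All names, shapes, and the exact weighting are illustrative assumptions; the precise formulation is in the paper.

    import numpy as np

    def prob_square_hinge(scores, label_probs):
        # Assumed form of the probabilistic square hinge loss:
        # the square hinge max(0, 1 - score)^2 per class, weighted by
        # the soft label probabilities from the common label matrix.
        # scores:      (n, c) outputs of one view-specific classifier
        # label_probs: (n, c) rows of the common (probabilistic) label matrix
        margins = np.maximum(0.0, 1.0 - scores)
        return float(np.sum(label_probs * margins ** 2))

    def power_mean(view_losses, p):
        # Power mean M_p = (mean(l_v^p))^(1/p) over per-view losses
        # (p != 0). Varying p re-weights the views; per the abstract,
        # the auto-weighted strategy is recovered as a special case.
        v = np.asarray(view_losses, dtype=float)
        return float(np.mean(v ** p) ** (1.0 / p))

    # Example: three views with unequal losses; a larger p pushes the
    # joint objective toward the hardest (largest-loss) view.
    losses = [0.8, 1.2, 2.0]
    print(power_mean(losses, p=0.5))   # stays closer to the smaller losses
    print(power_mean(losses, p=2.0))   # dominated by the largest loss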
