R-CTSVM+: Robust capped L1-norm twin support vector machine with privileged information
Information Sciences (IF 8.1), Pub Date: 2021-06-07, DOI: 10.1016/j.ins.2021.06.003
Yanmeng Li , Huaijiang Sun , Wenzhu Yan , Qiongjie Cui

Learning using privileged information (LUPI) is a paradigm that provides a more informative training strategy for prediction tasks. SVM-based methods, including SVM+ and TSVM+, have achieved considerable success in LUPI on clean data. However, these methods typically suffer from noise and outliers in the data, which leads to large fluctuations in performance. To address this problem, we propose a novel Robust Capped L1-norm Twin Support Vector Machine with Privileged Information (R-CTSVM+). The proposed pair of regularization functions (up-bound and down-bound) increases the learning model's tolerance to disturbance: the up-bound function maximizes the lower bound of the perturbation of both the main features and the privileged features, while the down-bound function minimizes the corresponding upper bound. Moreover, since the widely employed squared L2-norm distance typically exaggerates the impact of outliers, we adopt the capped L1-norm regularized distance to further guarantee the robustness of the model. We show that the resulting optimization problem is theoretically convergent and can be solved with an effective alternating optimization procedure. Experimental results on benchmark datasets indicate that the proposed robust model achieves superior performance when the data contain substantial noise and outliers.
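To illustrate the robustness argument, the following is a minimal sketch (not the paper's formulation) of how a capped L1-norm distance bounds the contribution of outliers, in contrast to the squared L2-norm distance; the function name and the cap parameter eps are assumptions chosen for illustration.

import numpy as np

def capped_l1_distance(residuals, eps=1.0):
    # Capped L1-norm distance: each term contributes at most eps,
    # so a single large (outlier) residual cannot dominate the sum.
    return np.minimum(np.abs(residuals), eps)

# Residuals with one outlier in the last position.
r = np.array([0.2, -0.5, 0.1, 8.0])
print("squared L2 terms:", r ** 2)                          # outlier term is 64.0 and dominates
print("capped L1 terms :", capped_l1_distance(r, eps=1.0))  # outlier term is capped at 1.0

Because every term is bounded by eps, an outlier can shift the objective by at most a fixed amount, which is the intuition behind replacing the squared L2-norm distance with the capped L1-norm regularized distance.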




Updated: 2021-06-18