Deep support vector neural networks
Integrated Computer-Aided Engineering (IF 5.8), Pub Date: 2020-07-03, DOI: 10.3233/ica-200635
David Díaz-Vico, Jesús Prada, Adil Omari, José Dorronsoro

Kernel-based Support Vector Machines (SVMs), one of the most popular machine learning models, usually achieve top performance in two-class classification and regression problems. However, their training cost is at least quadratic in sample size, which makes them unsuitable for large-sample problems. Deep Neural Networks (DNNs), in contrast, have a training cost linear in sample size and can solve big data problems relatively easily. In this work we propose to combine the advanced representations that DNNs achieve in their last hidden layers with the hinge and ϵ-insensitive losses used in two-class SVM classification and regression. We can thus obtain much better scalability while achieving performance comparable to that of SVMs. Moreover, we also show that the resulting Deep SVM models are competitive with standard DNNs in two-class classification problems but have an edge in regression ones.
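The construction described in the abstract amounts to keeping a standard DNN as a feature extractor and swapping the usual output loss for the SVM ones. Below is a minimal, hypothetical PyTorch sketch of that idea; the paper does not prescribe a framework, and the layer sizes, optimizer, ϵ value and synthetic data here are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

class DeepSVM(nn.Module):
    """MLP whose last hidden layer feeds a linear 'SVM' output unit."""
    def __init__(self, in_dim, hidden=(128, 64)):
        super().__init__()
        layers, d = [], in_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        self.features = nn.Sequential(*layers)  # learned representation (last hidden layer)
        self.out = nn.Linear(d, 1)               # linear layer trained with an SVM loss

    def forward(self, x):
        return self.out(self.features(x)).squeeze(-1)

def hinge_loss(scores, y):
    # two-class SVM hinge loss; labels y must be in {-1, +1}
    return torch.clamp(1.0 - y * scores, min=0.0).mean()

def eps_insensitive_loss(pred, y, eps=0.1):
    # SVM regression loss: errors smaller than eps are not penalised
    return torch.clamp((pred - y).abs() - eps, min=0.0).mean()

# toy training loop on synthetic data (classification case)
model = DeepSVM(in_dim=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.randn(256, 20)
y = (X[:, 0] > 0).float() * 2 - 1  # labels in {-1, +1}
for _ in range(100):
    opt.zero_grad()
    loss = hinge_loss(model(X), y)
    loss.backward()
    opt.step()
```

Because training is ordinary mini-batch gradient descent on the network weights, the cost per epoch stays linear in the number of samples, which is the scalability advantage the abstract contrasts with the at-least-quadratic cost of kernel SVM training; for the regression variant one would simply optimise the ϵ-insensitive loss instead of the hinge loss.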
