Accelerating Sequential Minimal Optimization via Stochastic Subgradient Descent
IEEE Transactions on Cybernetics (IF 11.8), Pub Date: 2019-02-05, DOI: 10.1109/tcyb.2019.2893289
Bin Gu, Yingying Shan, Xin Quan, Guansheng Zheng

Sequential minimal optimization (SMO) is one of the most popular methods for solving a variety of support vector machines (SVMs). Shrinking and caching techniques are commonly used to accelerate SMO. An interesting phenomenon of SMO is that most of its computational time is consumed by the first half of the iterations, which merely build a good solution close to the optimum. In contrast, it is well known that the stochastic subgradient descent (SSGD) method is extremely fast at building such a good solution. In this paper, we propose a generalized framework for accelerating SMO through SSGD for a variety of SVMs, including binary classification, regression, and ordinal regression. We also provide insight into why SSGD can accelerate SMO. Experimental results on a variety of datasets and learning applications confirm that our method can effectively speed up SMO.
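As a rough illustration of the idea in the abstract, the sketch below runs a Pegasos-style SSGD pass over the primal hinge-loss objective to obtain a cheap approximate solution; in a warm-start scheme of the kind described above, such a solution would then seed the SMO solver so that its slow early iterations can be skipped. The function names, step-size schedule, warm-start heuristic, and toy data are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal Pegasos-style SSGD sketch for the linear binary SVM primal
# objective  min_w (lambda/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>).
# Illustrative only: the warm-start step into SMO is a hypothetical sketch,
# not the paper's exact framework.
import numpy as np

def ssgd_svm(X, y, lam=1e-3, epochs=5, seed=0):
    """Return an approximate primal solution w after a few SSGD epochs."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # standard Pegasos step size
            margin = y[i] * (X[i] @ w)
            if margin < 1.0:
                # subgradient of hinge loss plus the L2 regularizer
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1.0 - eta * lam) * w
    return w

if __name__ == "__main__":
    # Toy usage: the cheap SSGD solution could then seed an SMO solver
    # (e.g., by suggesting an initial working set), so SMO avoids spending
    # its first half of iterations just getting close to the optimum.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
    w0 = ssgd_svm(X, y)
    acc = np.mean(np.sign(X @ w0) == y)
    print(f"accuracy of the SSGD warm-start solution: {acc:.2f}")
```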

Updated: 2019-02-05