Online support vector quantile regression for the dynamic time series with heavy-tailed noise
Applied Soft Computing (IF 7.2), Pub Date: 2021-06-07, DOI: 10.1016/j.asoc.2021.107560
Yafen Ye, Yuanhai Shao, Chunna Li, Xiangyu Hua, Yanru Guo

In this paper, we propose an online support vector quantile regression approach with an ε-insensitive pinball loss function, called Online-SVQR, for dynamic time series with heavy-tailed noise. Online-SVQR is robust to heavy-tailed noise because its quantile parameter controls the negative influence of such noise. By updating the model with new samples through an incremental learning algorithm, the coefficients of Online-SVQR reflect the dynamic information in the examined time series. During each incremental training step, nonsupport vectors are discarded while support vectors continue training with the newly arrived samples; Online-SVQR thereby selects useful training samples and discards irrelevant ones, which accelerates training. Experimental results on one artificial dataset and three real-world datasets indicate that Online-SVQR outperforms ε-support vector quantile regression in terms of both sample selection ability and training speed.
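To make the role of the quantile parameter concrete, the sketch below implements one common form of the ε-insensitive pinball loss: residuals inside the ε-tube incur no loss, while positive and negative residuals outside the tube are weighted asymmetrically by τ and 1 − τ. This is a minimal illustration only; the paper's exact formulation of the loss may differ, and the function name and default parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def pinball_eps_loss(residual, tau=0.5, eps=0.1):
    """One common form of the epsilon-insensitive pinball loss.

    residual : y - f(x)
    tau      : quantile level in (0, 1)
    eps      : half-width of the insensitive zone around zero
    """
    r = np.asarray(residual, dtype=float)
    return np.where(
        r > eps, tau * (r - eps),                          # residuals above the tube
        np.where(r < -eps, (1.0 - tau) * (-r - eps), 0.0)  # residuals below the tube
    )

# With tau = 0.1, large positive residuals (e.g. heavy right-tail outliers)
# are weighted far less than with the median setting tau = 0.5.
residuals = np.array([-2.0, -0.05, 0.0, 0.05, 2.0])
print(pinball_eps_loss(residuals, tau=0.5, eps=0.1))  # symmetric weighting
print(pinball_eps_loss(residuals, tau=0.1, eps=0.1))  # right tail downweighted
```

Shifting τ toward one tail is what lets the quantile parameter damp the contribution of heavy-tailed noise on that side of the ε-tube.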

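The incremental step described in the abstract can also be sketched in simplified form: when a new sample arrives, only the retained support vectors (samples on or outside the ε-tube) are kept for retraining, and samples that fall strictly inside the tube are pruned. The helpers below (`fit_svqr`, `support_vector_mask`, `update_online`) are hypothetical names for illustration; the actual Online-SVQR uses a dedicated incremental learning algorithm rather than the naive refit shown here, and the linear model plus derivative-free solver are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_svqr(X, y, tau=0.5, eps=0.1, C=1.0):
    """Fit a linear quantile model (w, b) by minimizing the regularized
    epsilon-insensitive pinball loss with a derivative-free solver."""
    n, d = X.shape

    def objective(params):
        w, b = params[:d], params[d]
        r = y - (X @ w + b)
        loss = np.where(r > eps, tau * (r - eps),
                        np.where(r < -eps, (1.0 - tau) * (-r - eps), 0.0))
        return 0.5 * np.dot(w, w) + C * loss.sum()

    res = minimize(objective, np.zeros(d + 1), method="Powell")
    return res.x[:d], res.x[d]

def support_vector_mask(X, y, w, b, eps=0.1):
    """Samples with residuals on or outside the eps-tube act as support
    vectors; samples strictly inside the tube do not affect the fit."""
    r = y - (X @ w + b)
    return np.abs(r) >= eps

def update_online(X_sv, y_sv, x_new, y_new, tau=0.5, eps=0.1, C=1.0):
    """One simplified incremental step: append the new sample, refit on the
    retained support vectors, then prune samples that became non-support."""
    X_aug = np.vstack([X_sv, x_new[None, :]])
    y_aug = np.append(y_sv, y_new)
    w, b = fit_svqr(X_aug, y_aug, tau, eps, C)
    keep = support_vector_mask(X_aug, y_aug, w, b, eps)
    return X_aug[keep], y_aug[keep], w, b
```

Because the working set shrinks to the support vectors after every update, each retraining pass touches fewer samples, which is the mechanism the abstract credits for the faster training speed.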
Updated: 2021-06-17