Efficacy of Regularized Multitask Learning Based on SVM Models.
IEEE Transactions on Cybernetics (IF 11.8), Pub Date: 2022-08-22, DOI: 10.1109/tcyb.2022.3196308
Shaohan Chen, Zhou Fang, Sijie Lu, Chuanhou Gao

This article investigates the efficacy of a regularized multitask learning (MTL) framework based on SVM (M-SVM) to answer whether MTL always provides reliable results and how MTL outperforms independent learning. We first find that the M-SVM is Bayes risk consistent in the limit of a large sample size. This implies that, despite dissimilarities between tasks, the M-SVM always produces a reliable decision rule for each task in terms of the misclassification error when the data size is large enough. Furthermore, we find that the task interaction vanishes as the data size goes to infinity, and that the convergence rates of the M-SVM and its single-task counterpart have the same upper bound. The former suggests that the M-SVM cannot improve the limit classifier's performance; based on the latter, we conjecture that the optimal convergence rate is not improved when the task number is fixed. As a novel insight into MTL, our theoretical and experimental results are in excellent agreement that the benefit of MTL methods lies in the improvement of the preconvergence-rate (PCR) factor (defined in Section III) rather than the convergence rate. Moreover, this improvement in the PCR factor is more significant when the data size is small. In addition, our experimental results on five other MTL methods demonstrate the generality of this new insight.
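The abstract does not spell out the M-SVM objective (that is deferred to Section III of the paper), but a rough sketch of the kind of regularized MTL-SVM formulation at stake may help. The code below implements one common variant: each task's weight vector is a shared component w0 plus a task-specific offset v_t, with hinge loss and ridge penalties on both parts, trained by subgradient descent. The function name, hyperparameters (lam_shared, lam_task), and toy data are illustrative assumptions, not the paper's notation or experiments.

```python
# A minimal sketch of one common regularized MTL-SVM formulation
# (shared weight vector w0 plus task-specific offsets v_t, hinge loss),
# trained by subgradient descent with NumPy. This is an assumed stand-in
# for the M-SVM of the paper, not its exact definition.
#
# Objective:
#   sum_t (1/n_t) sum_i max(0, 1 - y_{t,i} (w0 + v_t)^T x_{t,i})
#     + lam_task * sum_t ||v_t||^2 + lam_shared * ||w0||^2
import numpy as np

def mtl_svm_subgradient(Xs, ys, lam_shared=0.1, lam_task=1.0,
                        lr=0.01, epochs=200):
    """Xs, ys: per-task lists of (n_t, d) feature arrays and labels in {-1, +1}."""
    d = Xs[0].shape[1]
    T = len(Xs)
    w0 = np.zeros(d)        # shared component: the channel for task interaction
    V = np.zeros((T, d))    # task-specific offsets; task t uses w_t = w0 + v_t
    for _ in range(epochs):
        g_w0 = 2.0 * lam_shared * w0     # gradient of the shared regularizer
        g_V = 2.0 * lam_task * V         # gradient of the task regularizers
        for t in range(T):
            margins = ys[t] * (Xs[t] @ (w0 + V[t]))
            active = margins < 1.0       # points violating the margin
            if np.any(active):
                # Subgradient of the averaged hinge loss for task t.
                g = -(ys[t][active][:, None] * Xs[t][active]).sum(axis=0) / len(ys[t])
                g_w0 += g
                g_V[t] += g
        w0 -= lr * g_w0
        V -= lr * g_V
    return w0, V

# Toy usage: two related binary tasks in 2-D sharing the same true direction.
rng = np.random.default_rng(1)
Xs = [rng.normal(size=(50, 2)) for _ in range(2)]
ys = [np.where(X @ np.array([1.0, 0.5]) + 0.2 * rng.normal(size=50) >= 0, 1.0, -1.0)
      for X in Xs]
w0, V = mtl_svm_subgradient(Xs, ys)
print("shared w0:", w0, "task deviation norms:", np.linalg.norm(V, axis=1))
```

In formulations of this kind, the shared component w0 is what couples the tasks; it is this coupling that, according to the abstract, vanishes in its effect as the per-task data size goes to infinity, leaving the benefit of MTL in the small-sample (preconvergence) regime.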

Updated: 2022-08-22