Kernel collaborative online algorithms for multi-task learning
Annals of Mathematics and Artificial Intelligence (IF 1.2) Pub Date: 2019-08-08, DOI: 10.1007/s10472-019-09650-w
A. Aravindh, S. S. Shiju, S. Sumitra

In many real-time applications, we must deal with classification, regression, or clustering problems that involve multiple tasks. Conventional machine learning approaches solve these tasks independently, ignoring task relatedness. In multi-task learning (MTL), related tasks are learned simultaneously by extracting and utilizing information shared across tasks. Learning related tasks together effectively increases the sample size for each task and improves generalization performance, so MTL is especially beneficial when the training set for each task is small. This paper describes multi-task learning using a kernel online learning approach. Since many real-world applications are online in nature, efficient online learning techniques are much needed; because online learning processes one data point at a time, such techniques can be applied effectively to large data sets. The MTL model we develop involves a global function and a task-specific function for each task. The cost function used to find each task-specific function makes use of the global model to incorporate the necessary information from the other tasks, a modeling strategy that improves the generalization capacity of the model. Finding the global and task-specific functions is formulated as two separate problems: at each arrival of new data, the global vector is solved first, and its information is then used to update the task-specific vector. The update rule for the task-specific function approximates the global components using task-specific components by means of projection. We applied the developed framework to real-world problems, and the results were found to be promising.
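The two-step update described above (solve the global function first, then use it to update the task-specific function) can be sketched as a kernel online learner. This is a minimal illustration under stated assumptions, not the authors' exact algorithm: it assumes an RBF kernel and squared loss, uses NORMA-style kernelized gradient steps, and replaces the paper's projection-based approximation of the global components with the simpler assumption that the task-specific residual is computed against the freshly updated global prediction. The class and function names (`OnlineKernelMTL`, `rbf`) are hypothetical.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel between two vectors
    return np.exp(-gamma * np.sum((x - z) ** 2))

class OnlineKernelMTL:
    """Sketch of an online kernel multi-task learner with a shared
    global function f and a task-specific function g_t per task.
    The prediction for task t is f(x) + g_t(x)."""

    def __init__(self, n_tasks, eta=0.1, lam=0.01, gamma=1.0):
        self.eta, self.lam, self.gamma = eta, lam, gamma
        self.global_sv, self.global_alpha = [], []            # support vectors / weights of f
        self.task_sv = [[] for _ in range(n_tasks)]           # support vectors of each g_t
        self.task_alpha = [[] for _ in range(n_tasks)]

    def _eval(self, sv, alpha, x):
        # Kernel expansion: sum_i alpha_i * k(x_i, x)
        return sum(a * rbf(s, x, self.gamma) for s, a in zip(sv, alpha))

    def predict(self, t, x):
        # Combined prediction: shared global part plus task-specific part
        return (self._eval(self.global_sv, self.global_alpha, x)
                + self._eval(self.task_sv[t], self.task_alpha[t], x))

    def update(self, t, x, y):
        # Step 1: update the global function first (squared-loss gradient
        # step with weight decay from the regularizer).
        f_x = self._eval(self.global_sv, self.global_alpha, x)
        self.global_alpha = [(1 - self.eta * self.lam) * a for a in self.global_alpha]
        self.global_sv.append(x)
        self.global_alpha.append(-self.eta * (f_x - y))
        # Step 2: update the task-specific function, using the freshly
        # updated global prediction in its residual.
        f_x = self._eval(self.global_sv, self.global_alpha, x)
        g_x = self._eval(self.task_sv[t], self.task_alpha[t], x)
        self.task_alpha[t] = [(1 - self.eta * self.lam) * a for a in self.task_alpha[t]]
        self.task_sv[t].append(x)
        self.task_alpha[t].append(-self.eta * (f_x + g_x - y))
```

In this sketch the support set grows with every observation; a practical online implementation would bound it (e.g. by truncating the smallest weights), which the NORMA-style decay factor `(1 - eta * lam)` already encourages.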

Updated: 2019-08-08