The benefits of target relations: A comparison of multitask extensions and classifier chains
Pattern Recognition (IF 8) Pub Date: 2020-11-01, DOI: 10.1016/j.patcog.2020.107507
Esra Adıyeke , Mustafa Gökçe Baydoğan

Abstract: Multitask (multi-target or multi-output) learning (MTL) deals with the simultaneous prediction of several outputs. MTL approaches rely on the optimization of a joint score function over the targets. However, defining a joint score in global models is problematic when the targets are on different scales. To address such problems, single-target (i.e., local) learning strategies are commonly employed. Here we propose alternative tree-based learning strategies to handle the target-scaling issue in global models and to identify the learning order for chaining operations in local models. In the first proposal, the target-scaling problem is resolved with alternative splitting strategies that treat the learning tasks in a multi-objective optimization framework. The second proposal deals with the problem of ordering in chaining strategies. We introduce an alternative estimation strategy, the minimum error chain policy, which gradually expands the input space with estimations that approximate the true characteristics of the outputs, namely out-of-bag estimations in a tree-based ensemble framework. Our experiments on benchmark datasets illustrate the success of the proposed multitask extension of trees compared to decision trees with the de facto design, especially for datasets with a large number of targets. In line with that, the minimum error chain policy improves on the performance of state-of-the-art chaining policies.
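The first proposal concerns split criteria that treat the targets as a multi-objective problem so that differences in target scale do not dominate the choice of split. The Python sketch below is only a rough illustration of that idea, not the authors' exact criterion: it scores a candidate split by each target's variance reduction normalized by that target's variance at the parent node, and the equal-weight averaging across targets and the function name are assumptions made for illustration.

```python
import numpy as np

def normalized_split_gain(y, left_mask):
    """Score a candidate split of a multi-output regression node.

    y         : (n_samples, n_targets) target matrix at the node
    left_mask : boolean array selecting samples sent to the left child

    Each target's variance reduction is divided by its variance at the
    parent node, so targets measured on large scales cannot dominate the
    aggregate score (a simple surrogate for the multi-objective treatment
    of target scaling described in the abstract).
    """
    y = np.asarray(y, dtype=float)
    n = y.shape[0]
    left, right = y[left_mask], y[~left_mask]
    if len(left) == 0 or len(right) == 0:
        return -np.inf  # degenerate split, never preferred

    parent_var = y.var(axis=0)
    parent_var = np.where(parent_var > 0, parent_var, 1.0)  # guard against zero variance

    weighted_child_var = (len(left) * left.var(axis=0) +
                          len(right) * right.var(axis=0)) / n
    gain_per_target = (parent_var - weighted_child_var) / parent_var
    return gain_per_target.mean()  # equal weight for every target
```

Scanning the candidate thresholds of each feature with such a score would favour splits that improve all targets on a comparable, unit-free scale.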
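The second proposal, the minimum error chain policy, orders the chain by how reliably each target can already be predicted and feeds out-of-bag (OOB) estimates, rather than in-sample predictions, into the expanded input space. The sketch below is one plausible reading of that idea using scikit-learn random forests; the greedy ordering rule (always append the target with the lowest current OOB error) and the helper name are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def minimum_error_chain(X, Y, n_estimators=200, random_state=0):
    """Greedy chaining over targets guided by out-of-bag error.

    X : (n_samples, n_features) inputs
    Y : (n_samples, n_targets)  outputs

    At each step, fit one forest per remaining target on the inputs
    augmented with the OOB predictions of the targets chained so far,
    pick the target with the smallest OOB mean squared error, and append
    its OOB predictions to the input space for the next step.
    Returns the chosen target order and the fitted per-target models.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    remaining = list(range(Y.shape[1]))
    order, models = [], {}
    X_aug = X.copy()

    while remaining:
        best = None
        for t in remaining:
            rf = RandomForestRegressor(n_estimators=n_estimators,
                                       oob_score=True,
                                       bootstrap=True,
                                       random_state=random_state)
            rf.fit(X_aug, Y[:, t])
            # OOB predictions approximate out-of-sample behaviour; with
            # enough trees every sample receives an OOB estimate.
            oob_pred = rf.oob_prediction_
            err = np.mean((oob_pred - Y[:, t]) ** 2)
            if best is None or err < best[0]:
                best = (err, t, rf, oob_pred)
        err, t, rf, oob_pred = best
        order.append(t)
        models[t] = rf
        remaining.remove(t)
        # Expand the input space with the OOB estimate of the chosen target.
        X_aug = np.column_stack([X_aug, oob_pred])

    return order, models
```

At prediction time one would apply the stored models in the recorded order, appending each model's prediction to the features before calling the next model in the chain.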

Updated: 2020-11-01