Optimizing Evaluation Metrics for Multitask Learning via the Alternating Direction Method of Multipliers
IEEE Transactions on Cybernetics (IF 9.4) Pub Date: 2017-03-01, DOI: 10.1109/tcyb.2017.2670608
Ge-Yang Ke , Yan Pan , Jian Yin , Chang-Qin Huang

Multitask learning (MTL) aims to improve the generalization performance of multiple tasks by exploiting the shared factors among them. Various metrics (e.g., F-score, area under the ROC curve) are used to evaluate the performance of MTL methods. However, most existing MTL methods minimize either the misclassification error for classification or the mean squared error for regression. In this paper, we propose a method that directly optimizes the evaluation metrics for a large family of MTL problems. The formulation combines two parts: 1) a regularizer defined on the weight matrix over all tasks, which captures the relatedness of these tasks, and 2) a sum of structured hinge losses, each a surrogate for some evaluation metric on one task. This formulation is challenging to optimize because both of its parts are nonsmooth. To tackle this issue, we propose a novel optimization procedure based on the alternating direction method of multipliers, which decomposes the whole optimization problem into one subproblem corresponding to the regularizer and another subproblem corresponding to the structured hinge losses. For a large family of MTL problems, the first subproblem has a closed-form solution. To solve the second subproblem, we propose an efficient primal-dual algorithm based on coordinate ascent. Extensive evaluation results demonstrate that, on a large family of MTL problems, the proposed method of directly optimizing evaluation metrics achieves superior performance over the corresponding baseline methods.
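The two-block splitting described above can be sketched generically: rewrite min_W Ω(W) + Σ_t loss_t(W) as min Ω(W) + L(Z) subject to W = Z, then alternate proximal steps on the two blocks with a scaled dual update. The sketch below is a minimal illustration under stated assumptions, not the paper's algorithm: it uses a nuclear-norm regularizer, whose proximal step (singular value thresholding) is closed-form, mirroring the closed-form regularizer subproblem mentioned in the abstract, and it accepts an arbitrary proximal operator for the loss block, whereas the paper solves its structured-hinge subproblem with a primal-dual coordinate-ascent routine that is not reproduced here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def admm(prox_reg, prox_loss, shape, rho=1.0, iters=300):
    """Scaled-form ADMM for min_W  reg(W) + loss(Z)  s.t.  W = Z.

    prox_reg(V, t) and prox_loss(V, t) are the proximal operators of
    the two blocks with step size t; rho is the penalty parameter.
    """
    W = np.zeros(shape)
    Z = np.zeros(shape)
    U = np.zeros(shape)  # scaled dual variable (Lagrange multipliers / rho)
    for _ in range(iters):
        W = prox_reg(Z - U, 1.0 / rho)   # regularizer subproblem (closed form for SVT)
        Z = prox_loss(W + U, 1.0 / rho)  # loss subproblem
        U = U + W - Z                    # dual ascent on the consensus constraint
    return W
```

As a quick check, taking loss(Z) = 0.5 * ||Z - A||_F^2 (whose prox is the affine map (V + t*A)/(1 + t)) makes the iteration solve min ||W||_* + 0.5 * ||W - A||_F^2, whose known closed-form solution is svt(A, 1).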
