MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-01-19, DOI: arxiv-2001.06902
Simon Vandenhende, Stamatios Georgoulis and Luc Van Gool

In this paper, we argue for the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup. In contrast to common belief, we show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales, and vice versa. We propose a novel architecture, namely MTI-Net, that builds upon this finding in three ways. First, it explicitly models task interactions at every scale via a multi-scale multi-modal distillation unit. Second, it propagates distilled task information from lower to higher scales via a feature propagation module. Third, it aggregates the refined task features from all scales via a feature aggregation unit to produce the final per-task predictions. Extensive experiments on two multi-task dense labeling datasets show that, unlike prior work, our multi-task model delivers on the full potential of multi-task learning: a smaller memory footprint, fewer calculations, and better performance w.r.t. single-task learning. The code is made publicly available: https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.
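The three components described above (per-scale multi-modal distillation, low-to-high feature propagation, and cross-scale aggregation) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (see their repository for that); the class names `MultiModalDistillation` and `TinyMTINet`, the two-task/two-scale setup, and the simple residual 1x1-conv distillation are simplifying assumptions made here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalDistillation(nn.Module):
    """At one scale: refine each task's features using the other tasks' features.
    (Simplified stand-in for the paper's multi-modal distillation unit.)"""

    def __init__(self, channels, num_tasks):
        super().__init__()
        # One 1x1 conv per task, distilling the concatenated cross-task features.
        self.distill = nn.ModuleList(
            nn.Conv2d(channels * (num_tasks - 1), channels, kernel_size=1)
            for _ in range(num_tasks)
        )

    def forward(self, feats):  # feats: list of [B, C, H, W] tensors, one per task
        refined = []
        for t, f in enumerate(feats):
            others = torch.cat([feats[o] for o in range(len(feats)) if o != t], dim=1)
            refined.append(f + self.distill[t](others))  # residual refinement
        return refined


class TinyMTINet(nn.Module):
    """Two-scale, two-task sketch: distillation at each scale, propagation of
    distilled low-scale features upward, and cross-scale aggregation into
    per-task predictions."""

    def __init__(self, in_ch=3, ch=16, num_tasks=2, out_ch=(1, 1)):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)          # full scale
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)   # 1/2 scale
        # Task-specific heads at each scale.
        self.heads_hi = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1) for _ in range(num_tasks))
        self.heads_lo = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1) for _ in range(num_tasks))
        self.distill_hi = MultiModalDistillation(ch, num_tasks)
        self.distill_lo = MultiModalDistillation(ch, num_tasks)
        # Feature propagation: fuse upsampled low-scale task features into the high scale.
        self.propagate = nn.ModuleList(nn.Conv2d(2 * ch, ch, 1) for _ in range(num_tasks))
        # Feature aggregation: combine refined features from both scales per task.
        self.aggregate = nn.ModuleList(nn.Conv2d(2 * ch, oc, 1) for oc in out_ch)

    def forward(self, x):
        hi = self.stem(x)
        lo = self.down(hi)
        # 1) Distill task interactions at the lower scale.
        refined_lo = self.distill_lo([h(lo) for h in self.heads_lo])
        # 2) Propagate distilled low-scale information up before high-scale distillation.
        feats_hi = []
        for t, head in enumerate(self.heads_hi):
            up = F.interpolate(refined_lo[t], size=hi.shape[-2:],
                               mode='bilinear', align_corners=False)
            feats_hi.append(self.propagate[t](torch.cat([head(hi), up], dim=1)))
        refined_hi = self.distill_hi(feats_hi)
        # 3) Aggregate refined features from all scales into per-task predictions.
        preds = []
        for t, agg in enumerate(self.aggregate):
            up = F.interpolate(refined_lo[t], size=hi.shape[-2:],
                               mode='bilinear', align_corners=False)
            preds.append(agg(torch.cat([refined_hi[t], up], dim=1)))
        return preds
```

A forward pass on a `[2, 3, 32, 32]` batch returns one full-resolution prediction map per task; in the actual paper the backbone, distillation units, and task heads are considerably richer, but the data flow follows this same three-step pattern.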

Updated: 2020-07-10