Pattern Recognition Letters (IF 3.9). Pub Date: 2021-05-05. DOI: 10.1016/j.patrec.2021.04.021. Franco Manessi, Alessandro Rozza.
Self-supervised learning is currently gaining significant attention, as it allows neural networks to learn robust representations from large quantities of unlabeled data. In addition, multi-task learning can further improve representation learning by training a network on several related tasks simultaneously, leading to substantial performance gains. In this paper, we propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion. Since Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points, we use them as a building block to achieve competitive results on standard semi-supervised graph classification tasks.
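The multi-task setup described above can be sketched in a few lines: a graph-convolution layer produces node embeddings that feed both a supervised node-classification head and a self-supervised auxiliary head, whose losses are combined with a weighting factor. This is an illustrative NumPy sketch only; the paper proposes three specific auxiliary tasks, whereas the generic feature-reconstruction task, the `alpha` weight, and all function names below are assumptions made here for illustration.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_layer(a_norm, x, w):
    """One graph-convolution layer: ReLU(A_norm X W)."""
    return np.maximum(a_norm @ x @ w, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def multitask_loss(h, x, labels, mask, w_cls, w_rec, alpha=0.5):
    """Supervised cross-entropy on the labeled nodes plus a generic
    self-supervised auxiliary loss (here: reconstructing the input
    features from the embeddings), combined with weight alpha."""
    probs = softmax(h @ w_cls)
    ce = -np.mean(np.log(probs[mask, labels[mask]] + 1e-12))
    rec = np.mean((h @ w_rec - x) ** 2)  # auxiliary task (a stand-in, not the paper's)
    return ce + alpha * rec

rng = np.random.default_rng(0)
n_nodes, n_feats, n_hidden, n_classes = 6, 4, 8, 3

# Toy ring graph with random node features.
adj = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:
    adj[i, j] = adj[j, i] = 1.0
x = rng.normal(size=(n_nodes, n_feats))

# Shared GCN trunk; two heads branch off the same embeddings h.
w1 = 0.1 * rng.normal(size=(n_feats, n_hidden))
w_cls = 0.1 * rng.normal(size=(n_hidden, n_classes))
w_rec = 0.1 * rng.normal(size=(n_hidden, n_feats))

h = gcn_layer(normalize_adjacency(adj), x, w1)

# Semi-supervised setting: only some nodes carry labels.
labels = np.array([0, 1, 2, 0, 1, 2])
mask = np.array([True, True, False, False, False, True])

loss = multitask_loss(h, x, labels, mask, w_cls, w_rec)
print(round(float(loss), 4))
```

In an actual training loop, the gradient of this combined loss would update the shared trunk `w1` from both tasks at once, which is the mechanism by which the auxiliary task regularizes the learned representation.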
Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks