Negative sampling strategies for contrastive self-supervised learning of graph representations
Signal Processing (IF 3.4) Pub Date: 2021-09-01, DOI: 10.1016/j.sigpro.2021.108310
Hakim Hafidi 1, 2 , Mounir Ghogho 1 , Philippe Ciblat 2 , Ananthram Swami 3

Contrastive learning has become a successful approach for learning powerful text and image representations in a self-supervised manner. Contrastive frameworks learn to distinguish between representations coming from augmentations of the same data point (positive pairs) and those of other (negative) examples. Recent studies aim at extending contrastive learning methods to graph data. In this work, we propose a general framework for learning node representations in a self-supervised manner, called Graph Contrastive Learning (GraphCL). It learns node embeddings by maximizing the similarity between the node representations of two randomly perturbed versions of the same graph. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. We investigate several standard and new negative sampling strategies, and also compare against an approach that uses no negative sampling. We demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks, in both transductive and inductive learning setups.
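As a concrete illustration of the kind of objective described above, the sketch below shows a minimal NT-Xent-style contrastive loss over node embeddings from two perturbed views of a graph. This is not the authors' implementation; the names `nt_xent_loss`, `z1`, `z2`, and `tau` are illustrative assumptions. Node i in one view is pulled toward node i in the other view (the positive pair), while all other nodes in that view serve as negatives, which is one simple negative sampling choice among those the paper compares.

```python
# Minimal sketch (assumed names, not the authors' code) of a GraphCL-style
# objective: node embeddings from two randomly perturbed graph views are
# aligned with an NT-Xent contrastive loss.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: (num_nodes, dim) embeddings of the same nodes from two views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Pairwise cosine similarities between the two views, scaled by temperature.
    sim = z1 @ z2.t() / tau
    # Node i in view 1 should be most similar to node i in view 2;
    # every other node in view 2 acts as a negative example.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage, assuming `encoder` is any GNN and `perturb` applies a
# random graph augmentation (e.g., edge dropping):
#   z1 = encoder(perturb(graph))
#   z2 = encoder(perturb(graph))
#   loss = nt_xent_loss(z1, z2)
```

Treating all other nodes in the opposite view as negatives is only one option; the negative sampling strategies investigated in the paper vary exactly this choice.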




Updated: 2021-09-07