GCN for HIN via Implicit Utilization of Attention and Meta-paths
arXiv - CS - Social and Information Networks. Pub Date: 2020-07-06, DOI: arXiv:2007.02643
Di Jin, Zhizhi Yu, Dongxiao He, Carl Yang, Philip S. Yu and Jiawei Han

Heterogeneous information network (HIN) embedding, which aims to map the structural and semantic information in a HIN to distributed representations, has drawn considerable research attention. Graph neural networks for HIN embedding typically adopt hierarchical attention (including node-level and meta-path-level attention) to capture the information from meta-path-based neighbors. However, this complicated attention structure often fails to actually select meta-paths because of severe overfitting. Moreover, when propagating information, these methods do not distinguish direct (one-hop) meta-paths from indirect (multi-hop) ones. From the perspective of network science, however, direct relationships are often considered more essential, and they should only be used to model direct information propagation. To address these limitations, we propose a novel neural network method that implicitly utilizes attention and meta-paths, which relieves the severe overfitting brought by the current over-parameterized attention mechanisms on HINs. We first use a multi-layer graph convolutional network (GCN) framework that performs a discriminative aggregation at each layer and stacks the information propagation of directly linked meta-paths layer by layer, thereby realizing the meta-path-selecting function of attention in an indirect way. We then provide an effective relaxation and improvement by introducing a new propagation operation that can be separated from aggregation. That is, we first model the whole propagation process with well-defined probabilistic diffusion dynamics, and then introduce a random-graph-based constraint that reduces noise as the number of layers increases. Extensive experiments demonstrate the superiority of the new approach over state-of-the-art methods.
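The layer-wise scheme described above can be sketched roughly as follows. This is a minimal numpy illustration, not the authors' implementation: each layer propagates over every one-hop relation separately and mixes the per-relation messages with softmax-normalized scores, so stacking layers composes one-hop relations into multi-hop meta-paths implicitly. All function names, shapes, and the scoring scheme are assumptions for illustration.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the standard GCN normalization."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt).T * d_inv_sqrt

def relu(x):
    return np.maximum(x, 0.0)

def hetero_gcn_forward(adjs, X, weights, rel_scores):
    """One forward pass over a HIN with several one-hop relations.

    adjs       : list of normalized (N, N) adjacency matrices, one per relation
    X          : (N, F) node features
    weights    : per-layer transformation matrices
    rel_scores : per-layer score vectors over relations (learned in practice)

    Each layer mixes per-relation messages with softmax weights; stacking
    layers chains one-hop relations into multi-hop meta-paths, so meta-path
    selection is implicit rather than done by explicit meta-path attention.
    """
    H = X
    for W, s in zip(weights, rel_scores):
        alpha = np.exp(s) / np.exp(s).sum()  # relation mixing weights
        H = relu(sum(a * (A @ H) for a, A in zip(alpha, adjs)) @ W)
    return H
```

In a trained model the relation scores and weight matrices would be learned parameters; here they are fixed inputs so the forward pass can be inspected in isolation.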
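The abstract's exact diffusion operator is not given here; as a rough analogy only, decoupling propagation from aggregation resembles the well-known APPNP-style personalized-PageRank diffusion, sketched below in numpy. The teleport iteration and its parameters are a hypothetical stand-in, not the paper's propagation operation.

```python
import numpy as np

def row_normalize(A):
    """Row-stochastic transition matrix D^{-1} (A + I), with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    return A_hat / A_hat.sum(axis=1, keepdims=True)

def diffuse(P, H, K=10, alpha=0.1):
    """Parameter-free propagation applied to already-transformed features H:
    iterate Z <- (1 - alpha) * P @ Z + alpha * H, a personalized-PageRank
    diffusion. Because it has no weights of its own, propagation depth K can
    grow without adding parameters, separating propagation from aggregation."""
    Z = H.copy()
    for _ in range(K):
        Z = (1 - alpha) * (P @ Z) + alpha * H
    return Z
```

With alpha = 1 the teleport term dominates and the features are returned unchanged; smaller alpha smooths features over progressively larger neighborhoods as K increases.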

Updated: 2020-07-07