Multiple Kernel Representation Learning on Networks
arXiv - CS - Social and Information Networks | Pub Date: 2021-06-09 | DOI: arxiv-2106.05057
Abdulkadir Celikkanat, Yanning Shen, Fragkiskos D. Malliaros

Learning representations of nodes in a low-dimensional space is a crucial task with numerous interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches for this problem are matrix factorization and random walk-based models. In this paper, we aim to bring together the best of both worlds for learning node representations. In particular, we propose a weighted matrix factorization model that encodes random walk-based information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without realizing the exact proximity matrix, thereby enhancing the expressiveness of existing matrix decomposition methods with kernels and alleviating their computational complexity. We extend the approach with a multiple kernel learning formulation that provides the flexibility to learn the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.
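The abstract sketches the core idea: factorize a random-walk-based proximity signal under observation weights, while replacing the plain inner product of node and context embeddings with a learned convex combination of kernels. The snippet below is a minimal, illustrative sketch of that idea in NumPy and is not the authors' implementation; the co-occurrence proxy random_walk_cooccurrence, the Gaussian kernel bandwidths, the observation weights, and the plain gradient updates are all assumptions made for the example.

```python
import numpy as np

def random_walk_cooccurrence(adj, walk_len=5):
    # Hypothetical proxy for random-walk co-occurrence statistics:
    # the average of the first `walk_len` powers of the row-normalized adjacency.
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    P = adj / deg
    M = np.zeros_like(P)
    Pk = np.eye(P.shape[0])
    for _ in range(walk_len):
        Pk = Pk @ P
        M += Pk
    return M / walk_len

def gaussian_kernel(X, Y, sigma):
    # Pairwise Gaussian kernel between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_multi_kernel_mf(adj, dim=8, sigmas=(0.5, 1.0, 2.0),
                        lr=1e-2, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    M = random_walk_cooccurrence(adj)            # random-walk proximity signal
    W = (M > 0).astype(float) + 0.1              # simple observation weights (assumed)
    A = rng.normal(scale=0.1, size=(n, dim))     # node embeddings
    B = rng.normal(scale=0.1, size=(n, dim))     # context embeddings
    c = np.ones(len(sigmas)) / len(sigmas)       # kernel mixture coefficients

    for _ in range(epochs):
        Ks = [gaussian_kernel(A, B, s) for s in sigmas]
        Khat = sum(ci * Ki for ci, Ki in zip(c, Ks))
        R = W * (Khat - M)                       # weighted residual of the factorization

        # Update the mixture coefficients, then renormalize them to the simplex.
        grad_c = np.array([(R * Ki).sum() for Ki in Ks])
        c = np.clip(c - lr * grad_c, 1e-6, None)
        c /= c.sum()

        # Analytic gradients of the Gaussian kernels w.r.t. the embeddings.
        grad_A = np.zeros_like(A)
        grad_B = np.zeros_like(B)
        for ci, Ki, s in zip(c, Ks, sigmas):
            G = R * ci * Ki / (s ** 2)
            grad_A += G @ B - G.sum(axis=1, keepdims=True) * A
            grad_B += G.T @ A - G.sum(axis=0)[:, None] * B
        A -= lr * grad_A
        B -= lr * grad_B
    return A, c

if __name__ == "__main__":
    # Toy graph: two triangles joined by a single edge.
    adj = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]:
        adj[i, j] = adj[j, i] = 1.0
    emb, coeffs = fit_multi_kernel_mf(adj, dim=4)
    print("embedding shape:", emb.shape, "kernel weights:", coeffs)
```

The kernel matrices here are formed from the current embeddings, so the proximity matrix implied by the random walks never has to be materialized exactly; the mixture coefficients are learned jointly with the embeddings, mirroring the multiple kernel learning formulation described above in spirit only.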

Updated: 2021-06-10