Query-Specific Deep Embedding of Content-Rich Network.
Computational Intelligence and Neuroscience Pub Date: 2020-08-25, DOI: 10.1155/2020/5943798
Yue Li, Hongqi Wang, Liqun Yu, Sarah Yvonne Cooper, Jing-Yan Wang

In this paper, we propose to embed a content-rich network for the purpose of similarity search with respect to a query node. In such a network, besides the information of the nodes and edges, we also have the content of each node. We use a convolutional neural network (CNN) to represent the content of each node and then use a graph convolutional network (GCN) to further represent each node by merging the representations of its neighboring nodes. The GCN output is fed to a deep encoder-decoder model that converts each node to a Gaussian distribution and then decodes the distribution back to the node's identity. The dissimilarity between two nodes is measured by the Wasserstein distance between their Gaussian distributions. We define the nodes of the network to be positives if they are relevant to the query node and negatives if they are irrelevant. The labeling of positives/negatives is based on an upper bound and a lower bound of the Wasserstein distances between the candidate nodes and the query node. We learn the parameters of the CNN, the GCN, the encoder-decoder model, the Gaussian distributions, and the upper and lower bounds jointly. The learning problem is modeled as a minimization problem over the losses of node identification, network structure preservation, positive/negative query-specific relevance-guided distance, and model complexity, and an iterative algorithm is developed to solve it. We conducted experiments over benchmark networks, especially innovation networks, to verify the effectiveness of the proposed method and to show its advantage over state-of-the-art methods.
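As an illustration, the following is a minimal PyTorch-style sketch of the pipeline described above, not the authors' implementation. The class names (ContentCNN, GCNLayer, GaussianEncoder, wasserstein2_diag) and all layer sizes are hypothetical; the decoder that recovers node identities and the joint loss with the learned upper/lower relevance bounds are omitted. It assumes diagonal Gaussian embeddings, for which the squared 2-Wasserstein distance has the closed form ||μ1 − μ2||² + ||σ1 − σ2||².

```python
# Minimal sketch of the described pipeline; names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentCNN(nn.Module):
    """1-D CNN over a node's content (e.g., a word-embedding sequence)."""
    def __init__(self, emb_dim=128, out_dim=64):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, out_dim, kernel_size=3, padding=1)

    def forward(self, x):                    # x: (num_nodes, seq_len, emb_dim)
        h = self.conv(x.transpose(1, 2))     # (num_nodes, out_dim, seq_len)
        return h.max(dim=2).values           # max-pool over the sequence


class GCNLayer(nn.Module):
    """One graph convolution: aggregate neighbors, then a linear map."""
    def __init__(self, in_dim=64, out_dim=64):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj_norm):          # adj_norm: normalized adjacency (N, N)
        return F.relu(self.lin(adj_norm @ h))


class GaussianEncoder(nn.Module):
    """Encoder half of the encoder-decoder: node feature -> (mean, std)."""
    def __init__(self, in_dim=64, z_dim=32):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.log_sigma = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        return self.mu(h), torch.exp(self.log_sigma(h))


def wasserstein2_diag(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
    ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2 (standard-deviation vectors)."""
    return ((mu1 - mu2) ** 2).sum(-1) + ((sigma1 - sigma2) ** 2).sum(-1)
```

In a query-specific training loop, the distance from each candidate node's Gaussian to the query node's Gaussian would be compared against the learned lower and upper bounds, pushing positives below the lower bound and negatives above the upper bound, alongside the node-identification and structure-preservation losses.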

Updated: 2020-08-26