Hyperlink regression via Bregman divergence.
Neural Networks (IF 6.0), Pub Date: 2020-04-02, DOI: 10.1016/j.neunet.2020.03.026
Akifumi Okuno, Hidetoshi Shimodaira

A collection of U (∈ N) data vectors is called a U-tuple, and the association strength among the vectors of a tuple is termed the hyperlink weight, which is assumed to be symmetric with respect to permutation of the entries in the tuple. We propose Bregman hyperlink regression (BHLR), which learns a user-specified symmetric similarity function that predicts a tuple's hyperlink weight from the data vectors stored in the U-tuple. BHLR is a simple and general framework for hyper-relational learning that minimizes the Bregman divergence (BD) between the hyperlink weights and the estimated similarities defined for the corresponding tuples. BHLR encompasses various existing methods, such as logistic regression (U=1), Poisson regression (U=1), and link prediction (U=2), as well as methods for representation learning, such as graph embedding (U=2), matrix factorization (U=2), tensor factorization (U≥2), and their variants equipped with an arbitrary BD. Nonlinear functions (e.g., neural networks) can be employed as the similarity functions. However, theoretical challenges arise because different tuples in BHLR may share data vectors, unlike the i.i.d. setting of classical regression. We address these issues and prove that BHLR equipped with an arbitrary BD and any U ∈ N is (P-1) statistically consistent, i.e., it asymptotically recovers the true underlying conditional expectation of the hyperlink weights given the data vectors, and (P-2) computationally tractable, i.e., it can be computed efficiently by stochastic optimization algorithms using a novel generalized minibatch sampling procedure for hyper-relational data. Consequently, theoretical guarantees are provided in a unified manner for BHLR, including several existing methods that had previously been examined only experimentally.
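To make the setup concrete, the following is a minimal NumPy sketch of the U=2 (link-prediction) instance of the framework described above, under illustrative assumptions not taken from the paper: the Bregman divergence is the one generated by F(x) = x²/2 (i.e., squared error), the symmetric similarity is a sigmoid of an inner product of linearly embedded vectors, and training uses plain minibatch SGD over randomly sampled pairs. All variable names (`X`, `w`, `A`, `mu`, batch size, learning rate) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyper-relational data for U = 2 (link prediction):
# n data vectors of dimension d, and a symmetric hyperlink weight w[i, j]
# for every pair (i, j).
n, d, k = 30, 5, 3
X = rng.normal(size=(n, d))
W_true = 0.3 * rng.normal(size=(d, k))
S_true = (X @ W_true) @ (X @ W_true).T          # symmetric ground-truth scores
w = 1.0 / (1.0 + np.exp(-S_true))               # hyperlink weights in (0, 1)

# Model: mu_ij = sigmoid(<A x_i, A x_j>) -- symmetric in (i, j) by construction.
A = 0.1 * rng.normal(size=(d, k))

def mu(A, i, j):
    """Estimated similarity for the pair (i, j)."""
    zi, zj = X[i] @ A, X[j] @ A
    return 1.0 / (1.0 + np.exp(-(zi * zj).sum(axis=-1)))

def minibatch_loss_grad(A, idx_i, idx_j):
    """Squared-error Bregman loss and its gradient on a minibatch of pairs.

    With F(x) = x^2 / 2, the Bregman divergence BD_F(w, mu) reduces to
    (w - mu)^2 / 2, so BHLR minimizes mean squared error over sampled tuples.
    """
    zi, zj = X[idx_i] @ A, X[idx_j] @ A
    s = (zi * zj).sum(axis=1)
    m = 1.0 / (1.0 + np.exp(-s))
    r = m - w[idx_i, idx_j]                     # residual per sampled pair
    g = (r * m * (1.0 - m))[:, None]            # chain rule through the sigmoid
    grad = X[idx_i].T @ (g * zj) + X[idx_j].T @ (g * zi)
    return 0.5 * (r ** 2).mean(), grad / len(idx_i)

# Stochastic optimization over minibatches of randomly sampled 2-tuples.
lr, B = 0.5, 64
loss0 = None
for t in range(2000):
    idx_i = rng.integers(0, n, size=B)
    idx_j = rng.integers(0, n, size=B)
    loss, grad = minibatch_loss_grad(A, idx_i, idx_j)
    if loss0 is None:
        loss0 = loss
    A -= lr * grad

print(f"initial minibatch loss {loss0:.4f}, final minibatch loss {loss:.4f}")
```

The symmetry of the similarity function here comes for free from the inner product, mirroring the permutation symmetry of the hyperlink weights; swapping in another Bregman generator F (e.g., the one yielding KL divergence) changes only the loss and its gradient, not the sampling scheme.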

Updated: 2020-04-03