Leveraging maximum entropy and correlation on latent factors for learning representations.
Neural Networks (IF 7.8), Pub Date: 2020-08-05, DOI: 10.1016/j.neunet.2020.07.027
Zhicheng He, Jie Liu, Kai Dang, Fuzhen Zhuang, Yalou Huang

Many tasks involve learning representations from matrices, and Non-negative Matrix Factorization (NMF) has been widely used due to its excellent interpretability. Through factorization, sample vectors are reconstructed as additive combinations of latent factors, which are represented as non-negative distributions over the raw input features. NMF models are significantly affected by the distribution characteristics of the latent factors and by the correlations among them, and they face the challenge of learning robust latent factors. To this end, we propose to learn representations with an awareness of semantic quality, evaluated at both the intra-factor and inter-factor levels. On the one hand, a Maximum Entropy-based function is devised to measure intra-factor semantic quality. On the other hand, semantic uniqueness is evaluated via inter-factor correlation, which reinforces semantic compactness. Moreover, we present a novel non-linear NMF framework. The learning algorithm is given, and its convergence is theoretically analyzed and proved. Extensive experimental results on multiple datasets demonstrate that our method can be successfully applied to representative NMF models and boosts performance over state-of-the-art models.
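To make the described objective concrete, here is a minimal NumPy sketch of the idea, not the authors' actual algorithm (the paper's non-linear framework, update rules, and convergence proof are not reproduced here): squared-error NMF plus (i) an entropy term on each latent factor's normalized distribution over features and (ii) a pairwise correlation penalty between factors. The solver (projected gradient descent), all coefficients, and the sign convention on the entropy term are illustrative assumptions.

```python
import numpy as np

def nmf_entropy_corr(X, k, alpha=0.1, beta=0.1, lr=1e-3, iters=3000, seed=0):
    """Sketch: minimize 0.5*||X - W@H||_F^2
                        - alpha * sum_i entropy(p_i)        # intra-factor term
                        + beta  * sum_{i!=j} (h_i . h_j)^2  # inter-factor term
    where p_i is row i of H normalized into a distribution over features.
    All hyperparameters are illustrative guesses, not the paper's values.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        R = W @ H - X                        # reconstruction residual
        gW = R @ H.T                         # gradient of 0.5*||R||^2 w.r.t. W
        gH = W.T @ R                         # gradient of 0.5*||R||^2 w.r.t. H

        # Entropy of each factor: S_i = -sum_j p_ij*log(p_ij), p_i = h_i/sum(h_i).
        # Then d(-S_i)/dH = (log P + S)/s, so maximizing entropy adds this term.
        s = H.sum(axis=1, keepdims=True)
        P = H / s
        S = -(P * np.log(P + eps)).sum(axis=1, keepdims=True)
        gH += alpha * (np.log(P + eps) + S) / s

        # Correlation penalty C = sum_{i!=j} (h_i . h_j)^2; with G = H@H.T and
        # M = G minus its diagonal, dC/dH = 4*M@H.
        G = H @ H.T
        M = G - np.diag(np.diag(G))
        gH += beta * 4.0 * M @ H

        W = np.maximum(W - lr * gW, 0.0)     # project back onto W >= 0
        H = np.maximum(H - lr * gH, eps)     # keep H strictly positive for the log
    return W, H
```

Read off the rows of H as non-negative feature distributions: the entropy term controls how each factor spreads its mass over features, while the correlation penalty pushes different factors toward distinct feature subsets.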




Updated: 2020-08-05