Variational approximation error in non-negative matrix factorization.
Neural Networks (IF 7.8) Pub Date: 2020-03-13, DOI: 10.1016/j.neunet.2020.03.009
Naoki Hayashi

Non-negative matrix factorization (NMF) is a knowledge discovery method used in many fields, and variational inference and Gibbs sampling methods for it are well established. However, the variational approximation error has not yet been clarified, because NMF is not statistically regular and the prior distribution used in variational Bayesian NMF (VBNMF) has points where it vanishes or diverges. In this paper, using algebraic-geometric methods, we theoretically analyze the difference in negative log evidence (a.k.a. free energy) between VBNMF and Bayesian NMF, i.e., the Kullback-Leibler divergence between the variational posterior and the true posterior. We derive an upper bound for the learning coefficient (a.k.a. the real log canonical threshold) of Bayesian NMF. Using this upper bound, we obtain an asymptotic lower bound for the approximation error. The result quantitatively shows how well the VBNMF algorithm can approximate Bayesian NMF; the lower bound depends on the hyperparameters and the true non-negative rank. A numerical experiment demonstrates the theoretical result.
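The quantity analyzed can be written down explicitly. The following is a hedged sketch in notation chosen for this note (A and B for the non-negative factor matrices, X^n for n observations, \varphi for the prior; none of these symbols are fixed by the abstract itself). For any variational distribution q, the variational free energy satisfies

    F_q(X^n) - F(X^n) = KL( q(A, B) || p(A, B | X^n) ) >= 0,

where F(X^n) = -\log \int p(X^n | A, B) \varphi(A, B) \, dA \, dB is the Bayesian free energy (negative log evidence). VBNMF minimizes F_q over a factorized family, so the minimized gap is exactly the approximation error studied here. Singular learning theory gives the asymptotic expansion F(X^n) = n S_n + \lambda \log n + o_p(\log n), with S_n the empirical entropy and \lambda the learning coefficient (real log canonical threshold); an upper bound on \lambda, combined with the asymptotics of the variational free energy, therefore yields an asymptotic lower bound on the approximation error, which is the route the paper takes.

As a toy numerical illustration of the object being bounded, the minimal Python sketch below computes KL(q || p) when p is a correlated two-dimensional Gaussian "posterior" and q is the optimal factorized (mean-field) Gaussian, whose component precisions equal the diagonal entries of the precision matrix of p (a standard result, e.g. Bishop, PRML, Sec. 10.1). This is deliberately not the paper's NMF model; it only shows, end to end, how restricting q to a factorized family forces a strictly positive approximation error that grows with the posterior correlation.

import numpy as np

def kl_gauss(mu_q, Sigma_q, mu_p, Sigma_p):
    # KL( N(mu_q, Sigma_q) || N(mu_p, Sigma_p) ), closed form.
    k = len(mu_q)
    Sp_inv = np.linalg.inv(Sigma_p)
    diff = mu_p - mu_q
    return 0.5 * (np.trace(Sp_inv @ Sigma_q)
                  + diff @ Sp_inv @ diff
                  - k
                  + np.log(np.linalg.det(Sigma_p) / np.linalg.det(Sigma_q)))

mu = np.zeros(2)
for rho in [0.0, 0.5, 0.9, 0.99]:
    Sigma_p = np.array([[1.0, rho], [rho, 1.0]])   # true "posterior" covariance
    Lambda = np.linalg.inv(Sigma_p)                # its precision matrix
    Sigma_q = np.diag(1.0 / np.diag(Lambda))       # optimal mean-field variances
    print(f"rho = {rho:4.2f}   KL(q || p) = {kl_gauss(mu, Sigma_q, mu, Sigma_p):.4f}")

In this toy case the printed KL equals -0.5 * log(1 - rho^2): zero for an uncorrelated posterior and unbounded as the correlation approaches 1.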

Updated: 2020-03-16