The Bayesian Method of Tensor Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2021-01-01 , DOI: arxiv-2101.00245 Erdong Guo, David Draper
Bayesian learning is a powerful framework that combines external information about the data (background information) with internal information (the training data) in a logically consistent way for inference and prediction. By Bayes' rule, the external information (the prior distribution) and the internal information (the training-data likelihood) are combined coherently, and the resulting posterior distribution and posterior predictive (marginal) distribution summarize the total information needed for inference and prediction, respectively. In this paper, we study the Bayesian framework of the Tensor Network from two perspectives. First, we introduce a prior distribution on the weights of the Tensor Network and predict the labels of new observations via the posterior predictive (marginal) distribution. Because the parameter integral in the normalization constant is intractable, we approximate the posterior predictive distribution by the Laplace approximation and obtain an outer-product approximation of the Hessian matrix of the posterior distribution of the Tensor Network model. Second, to estimate the parameters of the stationary mode, we propose a stable initialization trick that accelerates inference, allowing the Tensor Network to converge to the stationary path more efficiently and stably under gradient descent. We verify our work on the MNIST, Phishing Website, and Breast Cancer data sets, and we study the Bayesian properties of the Bayesian Tensor Network by visualizing the model parameters and decision boundaries on a two-dimensional synthetic data set. For application purposes, our work reduces overfitting and improves the performance of the standard Tensor Network model.
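The abstract's core recipe, Laplace approximation of the posterior predictive with an outer-product (Gauss-Newton-style) approximation of the Hessian, can be sketched in a few lines. The sketch below is illustrative only: it uses a plain logistic model as a stand-in for the Tensor Network (whose contraction structure the abstract does not specify), a Gaussian prior with precision `alpha`, and MacKay's probit-style correction for the predictive integral; all function names and parameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def outer_product_hessian(X, w, alpha):
    """Outer-product approximation of the Hessian of the negative log
    posterior at the MAP weights w (illustrative stand-in model).

    Exact second derivatives are replaced by the sum of outer products
    of per-example gradient factors, sum_n p_n (1 - p_n) x_n x_n^T,
    plus the Gaussian-prior precision alpha on the diagonal.
    """
    p = sigmoid(X @ w)                       # predicted probabilities
    v = p * (1.0 - p)                        # Bernoulli variance terms
    H = (X * v[:, None]).T @ X               # sum of weighted outer products
    return H + alpha * np.eye(X.shape[1])

def laplace_predictive(x_new, w_map, H):
    """Approximate posterior predictive p(y=1 | x_new) under the Laplace
    (Gaussian) approximation to the posterior, with covariance H^{-1}."""
    mu = x_new @ w_map                       # predictive mean of the logit
    var = x_new @ np.linalg.solve(H, x_new)  # predictive variance of the logit
    kappa = 1.0 / np.sqrt(1.0 + np.pi * var / 8.0)  # MacKay's correction
    return sigmoid(kappa * mu)
```

Because `kappa < 1` whenever the predictive variance is positive, the marginalized prediction is pulled toward 0.5 relative to the plug-in MAP prediction, which is the mechanism by which the Bayesian treatment tempers overconfidence and overfitting.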
Updated: 2021-01-05