Posterior concentration and fast convergence rates for generalized Bayesian learning
Information Sciences, Pub Date: 2020-06-12, DOI: 10.1016/j.ins.2020.05.138
Lam Si Tung Ho, Binh T. Nguyen, Vu Dinh, Duy Nguyen

In this paper, we study the learning rate of generalized Bayes estimators in a general setting where the hypothesis class can be uncountable and have an irregular shape, the loss function can have heavy tails, and the optimal hypothesis may not be unique. We prove that, under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses and the generalized Bayes estimator achieves a fast learning rate. We apply our results to show that standard Bayesian linear regression is robust to heavy-tailed distributions.
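
The central object of the abstract is the generalized (Gibbs) posterior, which, as commonly defined in the generalized-Bayes literature, reweights a prior π by an exponentiated empirical risk, π_n(h) ∝ exp(−λ n R_n(h)) π(h), rather than by a likelihood. The sketch below, in Python with NumPy, illustrates this construction on a one-parameter linear regression with heavy-tailed Student-t noise. It is a minimal illustration, not the paper's method: the hypothesis grid, the learning rate lam, and the squared loss are assumptions made here for concreteness.

import numpy as np

rng = np.random.default_rng(0)

# Simulate linear regression y = theta * x + noise, with heavy-tailed
# Student-t noise (2 degrees of freedom, so the noise variance is infinite).
n = 200
x = rng.uniform(-2.0, 2.0, size=n)
theta_true = 1.5
y = theta_true * x + rng.standard_t(df=2, size=n)

# Hypothesis class: a grid of candidate slopes (illustrative choice).
thetas = np.linspace(0.0, 3.0, 301)

# Empirical risk R_n(theta) under squared loss, one value per hypothesis.
residuals = y[None, :] - thetas[:, None] * x[None, :]
risk = (residuals ** 2).mean(axis=1)

# Generalized (Gibbs) posterior: flat prior on the grid, reweighted by
# exp(-lam * n * R_n(theta)), where lam is the learning-rate parameter.
lam = 0.05
log_w = -lam * n * risk
log_w -= log_w.max()          # subtract the max for numerical stability
posterior = np.exp(log_w)
posterior /= posterior.sum()

# Generalized Bayes estimator: the posterior mean over the grid.
theta_hat = float((thetas * posterior).sum())
print(f"generalized Bayes slope estimate: {theta_hat:.3f} (true: {theta_true})")

Because the exponential weights depend only on the empirical risk rather than a Gaussian likelihood, the same construction remains well defined when the loss has heavy tails, which is the setting in which the paper establishes posterior concentration and fast rates.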


