Distributionally Robust Bayesian Quadrature Optimization
arXiv - CS - Machine Learning Pub Date : 2020-01-19 , DOI: arxiv-2001.06814
Thanh Tang Nguyen, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh

Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Though the Monte Carlo estimate is unbiased, it has high variance when the sample set is small and can thus yield a spurious objective function. We adopt a distributionally robust optimization perspective on this problem, maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior-sampling-based algorithm, distributionally robust BQO (DRBQO), for this purpose. We demonstrate the empirical effectiveness of our proposed framework on synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret.
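To make the contrast concrete, the following is a minimal sketch (not the paper's algorithm, which uses a χ²-divergence confidence set and posterior sampling) of the two objectives the abstract compares: the plain Monte Carlo estimate of the expected objective over the fixed sample set, and a worst-case reweighted estimate over distributions whose likelihood ratio against the empirical distribution is bounded, which reduces to averaging the lowest α-fraction of the sampled values. The function names and the `alpha` parameter are illustrative choices, not from the paper.

```python
import numpy as np

def mc_estimate(f, x, samples):
    """Plain Monte Carlo estimate of E_w[f(x, w)] over the fixed sample set.
    Unbiased, but high-variance when `samples` is small."""
    return float(np.mean([f(x, w) for w in samples]))

def robust_estimate(f, x, samples, alpha=0.5):
    """Worst-case expectation over reweightings of the empirical distribution
    with likelihood ratio at most 1/alpha; equals the average of the lowest
    alpha-fraction of sampled values (a CVaR-style lower bound)."""
    vals = np.sort([f(x, w) for w in samples])
    k = max(1, int(np.ceil(alpha * len(vals))))
    return float(np.mean(vals[:k]))
```

The robust estimate never exceeds the plain Monte Carlo estimate, so maximizing it guards against a small sample set making a design point look spuriously good.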

Last updated: 2020-01-22