Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support
arXiv - CS - Programming Languages. Pub Date: 2019-10-29, DOI: arxiv-1910.13324
Yuan Zhou, Hongseok Yang, Yee Whye Teh and Tom Rainforth

Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models. They further attempt to automate the process of drawing inferences from these models, but doing this successfully is severely hampered by the wide range of non-standard models they can express. As a result, although one can specify complex models in a universal PPS, the provided inference engines often fall far short of what is required. In particular, we show that they produce surprisingly unsatisfactory performance for models where the support varies between executions, often doing no better than importance sampling from the prior. To address this, we introduce a new inference framework: Divide, Conquer, and Combine, which remains efficient for such models, and show how it can be implemented as an automated and generic PPS inference engine. We empirically demonstrate substantial performance improvements over existing approaches on three examples.
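As a rough intuition only, the Python sketch below illustrates what "stochastic support" means and the divide/conquer/combine idea on a toy scale. The names model_trace and dcc_sketch are hypothetical, and the per-path inference step uses plain importance sampling from the prior purely as a placeholder for a real local inference algorithm; this is not the authors' implementation, only a minimal sketch of the general strategy of splitting a program by execution path (where each path has fixed support), estimating local evidence per path, and recombining.

import math
import random

# Toy model with stochastic support (hypothetical example, not from the paper):
# the discrete choice `z` selects the program path, and each path instantiates
# a different set of latent variables, so the support changes between runs.
def model_trace():
    z = random.random() < 0.5
    if z:
        x = random.gauss(0.0, 1.0)                 # path A: one latent
        log_like = -0.5 * (x - 2.0) ** 2           # pseudo-observation at 2.0
        return "A", log_like
    x1 = random.gauss(0.0, 1.0)                    # path B: two latents
    x2 = random.gauss(0.0, 1.0)
    log_like = -0.5 * ((x1 + x2) - 2.0) ** 2
    return "B", log_like

# Divide: group prior samples by the path they took (each path has fixed support).
# Conquer: estimate each path's local evidence; importance sampling from the
#          prior is used here only as a stand-in for a proper per-path
#          inference algorithm.
# Combine: normalise the local evidence estimates into path probabilities.
def dcc_sketch(n_samples=100_000):
    weights_by_path = {}
    for _ in range(n_samples):
        path, log_like = model_trace()
        weights_by_path.setdefault(path, []).append(math.exp(log_like))
    local_evidence = {p: sum(w) / n_samples for p, w in weights_by_path.items()}
    total = sum(local_evidence.values())
    return {p: z_p / total for p, z_p in local_evidence.items()}

if __name__ == "__main__":
    print(dcc_sketch())   # approximate posterior probability of each path

The point of dividing by path is that each subproblem then has a fixed set of latent variables, so standard inference machinery can be applied within it; the difficulty the paper addresses is doing this division, the per-path inference, and the recombination automatically and efficiently inside a generic PPS engine.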

Last updated: 2020-07-17