A Comparative Tutorial of Bayesian Sequential Design and Reinforcement Learning
The American Statistician ( IF 1.8 ) Pub Date : 2022-10-31 , DOI: 10.1080/00031305.2022.2129787
Mauricio Tec, Yunshan Duan, Peter Müller

Abstract

Reinforcement learning (RL) is a computational approach to reward-driven learning in sequential decision problems. It discovers optimal actions by having an agent learn from interaction with an environment rather than from supervised data. We compare and contrast RL with traditional sequential design, focusing on simulation-based Bayesian sequential design (BSD). Recently, there has been increasing interest in RL techniques for healthcare applications. We introduce two related applications as motivating examples. In both applications, the sequential nature of the decisions is restricted to sequential stopping. Rather than a comprehensive survey, the discussion focuses on solutions using standard tools for these two relatively simple sequential stopping problems. Both problems are inspired by adaptive clinical trial design. We use examples to explain the terminology and mathematical background that underlie each framework and map one to the other. The implementations and results illustrate the many similarities between RL and BSD, and they motivate a discussion of the potential strengths and limitations of each approach.
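To make the notion of sequential stopping concrete, the following toy sketch (not the authors' implementation) illustrates a simulation-based Bayesian stopping rule for a Beta-Bernoulli model, loosely in the spirit of an adaptive clinical trial: after each observed outcome, the trial stops early if the posterior probability that the success rate exceeds 0.5 passes an assumed efficacy threshold of 0.95.

```python
import random

def bayesian_sequential_stopping(data, prior=(1.0, 1.0), threshold=0.95):
    """Sequential stopping for a Beta-Bernoulli model.

    After each binary outcome in `data`, update the Beta posterior and
    stop as soon as the (Monte Carlo) posterior probability that the
    success rate exceeds 0.5 reaches `threshold`.
    Returns (number of observations used, final posterior probability).
    """
    a, b = prior
    post_prob = 0.0
    for t, x in enumerate(data, start=1):
        a, b = a + x, b + (1 - x)  # conjugate Beta update
        # Monte Carlo estimate of P(p > 0.5 | data) under Beta(a, b)
        draws = [random.betavariate(a, b) for _ in range(5000)]
        post_prob = sum(d > 0.5 for d in draws) / len(draws)
        if post_prob >= threshold:
            return t, post_prob  # stop early for efficacy
    return len(data), post_prob  # reached the horizon without stopping

random.seed(0)
# a stream of mostly-successful outcomes should trigger early stopping
n_used, p_final = bayesian_sequential_stopping([1, 1, 1, 0, 1, 1, 1, 1, 1, 1])
```

In the paper's framing, the same stop/continue decision can equivalently be cast as an RL problem, with the posterior (or a summary of it) as the state and stopping as one of two available actions; the sketch above shows only the BSD-style simulation view.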




Updated: 2022-10-31