Adaptive Treatment Assignment in Experiments for Policy Choice
Econometrica (IF 6.6), Pub Date: 2021-01-15, DOI: 10.3982/ecta17527
Maximilian Kasy, Anja Sautmann

Standard experimental designs are geared toward point estimation and hypothesis testing, while bandit algorithms are geared toward in‐sample outcomes. Here, we instead consider treatment assignment in an experiment with several waves for choosing the best among a set of possible policies (treatments) at the end of the experiment. We propose a computationally tractable assignment algorithm that we call “exploration sampling,” where assignment probabilities in each wave are an increasing concave function of the posterior probabilities that each treatment is optimal. We prove an asymptotic optimality result for this algorithm and demonstrate improvements in welfare in calibrated simulations over both non‐adaptive designs and bandit algorithms. An application to selecting between six different recruitment strategies for an agricultural extension service in India demonstrates practical feasibility.
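To make the "exploration sampling" idea concrete, below is a minimal Python sketch of a between-wave assignment rule of the kind the abstract describes. It assumes binary outcomes with independent Beta(1, 1) priors, estimates the posterior probability that each treatment is optimal by Monte Carlo, and sets next-wave assignment shares proportional to p*(1 - p). That specific functional form is an illustrative concave transformation of the optimality probabilities, not a formula quoted from the paper, and all variable names and the example data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimality_probabilities(successes, trials, n_draws=10_000):
    """Posterior probability that each treatment has the highest mean outcome,
    assuming binary outcomes with independent Beta(1, 1) priors
    (a simplifying assumption for this sketch)."""
    successes = np.asarray(successes, dtype=float)
    trials = np.asarray(trials, dtype=float)
    # Draw from each arm's Beta posterior and count how often it is the best.
    draws = rng.beta(1 + successes, 1 + trials - successes,
                     size=(n_draws, len(successes)))
    best = np.argmax(draws, axis=1)
    return np.bincount(best, minlength=len(successes)) / n_draws

def exploration_sampling_shares(p):
    """Map posterior optimality probabilities p to next-wave assignment shares.
    Shares proportional to p * (1 - p) are one concave transformation consistent
    with the abstract's description; the exact form is an illustrative choice."""
    q = p * (1 - p)
    if q.sum() == 0:  # one treatment already has posterior probability 1
        return p
    return q / q.sum()

# Hypothetical example: cumulative successes/trials for six treatments after one wave.
successes = [12, 18, 9, 15, 20, 7]
trials = [40, 40, 40, 40, 40, 40]

p = optimality_probabilities(successes, trials)
q = exploration_sampling_shares(p)
print("posterior optimality probabilities:", np.round(p, 3))
print("next-wave assignment shares:       ", np.round(q, 3))
```

Relative to Thompson sampling, which would assign with probability p itself, a concave reshaping of this kind spreads observations more evenly across treatments that still have a non-negligible chance of being optimal, which is the exploration-for-policy-choice motivation stated in the abstract.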

Updated: 2021-01-16