Personalized Policy Learning using Longitudinal Mobile Health Data
Journal of the American Statistical Association (IF 3.7) | Pub Date: 2020-08-11 | DOI: 10.1080/01621459.2020.1785476
Xinyu Hu, Min Qian, Bin Cheng, Ying Kuen Cheung

We address the personalized policy learning problem using longitudinal mobile health application usage data. Personalized policy represents a paradigm shift from developing a single policy that may prescribe personalized decisions by tailoring. Specifically, we aim to develop the best policy, one per user, based on estimating random effects under a generalized linear mixed model. With many random effects, we consider a new estimation method and a penalized objective to circumvent the high-dimensional integrals required for marginal likelihood approximation. We establish the consistency and optimality of our method under endogenous app usage. We apply our method to develop personalized push ("prompt") schedules for 294 app users, with the goal of maximizing the prompt response rate given past app usage and other contextual factors. We found that the best push schedule, given the same covariates, varied among users, thus calling for personalized policies. Using the estimated personalized policies would have achieved a mean prompt response rate of 23% in these users at 16 weeks or later: a marked improvement over the observed rate (11%), while the literature suggests 3%-15% user engagement at 3 months after download. In a simulation study, the proposed method compares favorably to existing estimation methods, including the R function "glmer".
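To make the setup concrete, the sketch below shows a baseline of the kind the abstract compares against: a logistic generalized linear mixed model fit with lme4::glmer, with user-specific (fixed plus predicted random) coefficients read off to choose a per-user prompt time. This is only an illustrative sketch, not the authors' proposed estimator or penalized objective; the simulated data and variable names (user, response, prompt_hour, past_usage) are hypothetical.

```r
## Minimal illustrative sketch (assumed setup, not the paper's method):
## a glmer baseline and a per-user "best" prompt hour derived from it.
library(lme4)

## Simulate hypothetical longitudinal app-usage data
set.seed(1)
n_users <- 50; n_obs <- 40
dat <- data.frame(
  user        = factor(rep(seq_len(n_users), each = n_obs)),
  prompt_hour = sample(c(9, 13, 18, 21), n_users * n_obs, replace = TRUE),
  past_usage  = rnorm(n_users * n_obs)
)
## User-specific effect of prompt_hour on the log-odds of responding
b_user <- rnorm(n_users, sd = 0.05)
eta <- -2 + 0.3 * dat$past_usage + (0.02 + b_user[dat$user]) * dat$prompt_hour
dat$response <- rbinom(nrow(dat), 1, plogis(eta))

## Baseline GLMM: random intercept and random prompt_hour slope per user
fit <- glmer(response ~ past_usage + prompt_hour + (1 + prompt_hour | user),
             data = dat, family = binomial,
             control = glmerControl(optimizer = "bobyqa"))

## Per-user coefficients: fixed effects plus predicted random effects
cf <- coef(fit)$user

## For each user, pick the candidate prompt hour with the highest
## predicted response probability (holding past_usage at 0)
candidates <- c(9, 13, 18, 21)
best_hour <- apply(cf, 1, function(b) {
  p <- plogis(b["(Intercept)"] + b["prompt_hour"] * candidates)
  candidates[which.max(p)]
})
table(best_hour)  # the chosen hour can differ across users, i.e. a personalized policy
```

The last step mirrors the paper's motivation: even with identical covariates, users with different estimated random effects can be assigned different prompt schedules.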

Updated: 2020-08-11