Model-based recursive partitioning algorithm to penalized non-crossing multiple quantile regression for the right-censored data
Communications in Statistics - Simulation and Computation ( IF 0.8 ) Pub Date : 2021-07-08 , DOI: 10.1080/03610918.2021.1944643
Jaeoh Kim, Byoungwook Jang, Sungwan Bang

Abstract

Quantile functions of the response variable provide a tool for practitioners to analyze both the central tendency and the statistical dispersion of data. As a counterpart to regression tree models, quantile regression tree (QRT) methods have gained interest for constructing tree models of quantile functions. Previous QRT methods, however, estimate a different tree model for each quantile level because they fit the QRT models separately. To address this, the unified non-crossing multiple quantile regression tree (UNQRT) model was proposed to construct a common tree structure by aggregating information across all quantile levels. UNQRT, however, does not benefit from the automatic variable selection techniques developed in the regression literature. We propose a penalized UNQRT (P-UNQRT) method that incorporates an adaptive sup-norm penalty into the original UNQRT model to perform variable selection. Additionally, we extend P-UNQRT to cope with right-censored data, which often arise in healthcare applications. In our proposed model, the Kaplan-Meier estimator supplies the weight for each observation of the censored data. We demonstrate the benefits of our algorithms through empirical studies and analyze military training data from the Korea Combat Training Center to study the major factors that contribute to successfully completing military operations.
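The Kaplan-Meier weighting idea the abstract describes can be sketched as follows: censored observations receive zero weight, events receive the jump size of the Kaplan-Meier estimator at their event time, and these weights then enter the pinball (check) loss that quantile-regression methods minimize. This is a minimal illustration under our own assumptions (no ties, an intercept-only quantile fit), not the authors' P-UNQRT implementation; the function names are ours.

```python
import numpy as np

def kaplan_meier_weights(times, events):
    """Kaplan-Meier (Stute-type) weights for right-censored data:
    each event gets the jump size of the KM estimator at its event
    time; censored observations get weight zero. Assumes no ties."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=float)  # 1 = event, 0 = censored
    n = len(times)
    order = np.argsort(times, kind="stable")
    w = np.zeros(n)
    surv = 1.0  # KM survival just before the current time
    for rank, idx in enumerate(order):
        at_risk = n - rank
        jump = surv * events[idx] / at_risk  # S(t-) - S(t)
        w[idx] = jump
        surv -= jump
    return w

def weighted_quantile_fit(y, w, tau):
    """Intercept-only weighted tau-quantile: the minimizer of the
    weighted pinball loss over the observed values, i.e. the loss
    a quantile tree would optimize within a single node."""
    y = np.asarray(y, dtype=float)
    def loss(q):
        u = y - q
        return np.sum(w * np.maximum(tau * u, (tau - 1) * u))
    losses = np.array([loss(q) for q in y])
    return y[np.argmin(losses)]
```

Without censoring the weights reduce to the uniform 1/n, so the weighted fit coincides with the ordinary empirical quantile; with censoring, mass from censored points is redistributed to later event times.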




Updated: 2021-07-08