PUlasso: High-dimensional variable selection with presence-only data
Journal of the American Statistical Association (IF 3.0), Pub Date: 2019-04-11, DOI: 10.1080/01621459.2018.1546587
Hyebin Song, Garvesh Raskutti

Abstract: In various real-world problems, we are presented with classification problems with positive and unlabeled data, referred to as presence-only responses. In this article we study variable selection in the context of presence-only responses where the number of features or covariates p is large. The combination of presence-only responses and high dimensionality presents both statistical and computational challenges. In this article, we develop the PUlasso algorithm for variable selection and classification with positive and unlabeled responses. Our algorithm uses the majorization-minimization framework, which is a generalization of the well-known expectation-maximization (EM) algorithm. In particular, to make our algorithm scalable, we provide two computational speed-ups to the standard EM algorithm. We provide a theoretical guarantee: we first show that our algorithm converges to a stationary point, and then prove that any stationary point within a local neighborhood of the true parameter achieves the minimax optimal mean-squared error under both strict sparsity and group sparsity assumptions. We also demonstrate through simulations that our algorithm outperforms state-of-the-art algorithms in moderate-p settings in terms of classification performance. Finally, we demonstrate that our PUlasso algorithm performs well on a biochemistry example. Supplementary materials for this article are available online.
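The abstract describes an EM-style (majorization-minimization) scheme that alternates between imputing the unobserved labels of the unlabeled examples and solving a sparsity-penalized fit. The sketch below is only an illustrative, generic EM loop for L1-penalized PU logistic regression, not the authors' PUlasso implementation: it assumes the selected-completely-at-random labeling model with a known labeling frequency c, and it uses scikit-learn's lasso-penalized logistic solver for the M-step in place of the paper's case-control formulation and quadratic-majorization speed-ups.

```python
# Illustrative EM sketch for L1-penalized PU (positive-unlabeled) logistic
# regression. NOT the authors' PUlasso algorithm; assumes a known labeling
# frequency c = P(z = 1 | y = 1) under selected-completely-at-random labeling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pu_lasso_em(X, z, c, C=1.0, n_iter=20):
    """X: (n, p) feature matrix; z: 1 = labeled positive, 0 = unlabeled;
    c: assumed labeling frequency; C: inverse L1 penalty strength."""
    X = np.asarray(X)
    z = np.asarray(z)
    n = X.shape[0]
    clf = LogisticRegression(penalty="l1", C=C, solver="liblinear")
    # Initialize by treating all unlabeled examples as negatives.
    clf.fit(X, z)
    for _ in range(n_iter):
        # E-step: posterior P(y = 1 | z = 0, x) for unlabeled points;
        # labeled points are known positives and keep weight 1.
        p = clf.predict_proba(X)[:, 1]
        denom = np.clip(1.0 - c * p, 1e-12, None)  # guard against division by 0
        w_pos = np.where(z == 1, 1.0, (1.0 - c) * p / denom)
        # M-step: weighted lasso-penalized logistic fit. Each unlabeled point
        # enters once as a positive (weight w_pos) and once as a negative
        # (weight 1 - w_pos); labeled points enter only as positives.
        X_aug = np.vstack([X, X[z == 0]])
        y_aug = np.concatenate([np.ones(n), np.zeros(int((z == 0).sum()))])
        w_aug = np.concatenate([w_pos, 1.0 - w_pos[z == 0]])
        clf.fit(X_aug, y_aug, sample_weight=w_aug)
    return clf
```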
