Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation
arXiv - CS - Data Structures and Algorithms. Pub Date: 2021-06-16, DOI: arxiv-2106.08537
Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian

We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset. Specifically, we are given a set $T$ of $n$ points in $\mathbb{R}^d$ and a parameter $0< \alpha <\frac 1 2$ such that an $\alpha$-fraction of the points in $T$ are i.i.d. samples from a well-behaved distribution $\mathcal{D}$ and the remaining $(1-\alpha)$-fraction of the points are arbitrary. The goal is to output a small list of vectors at least one of which is close to the mean of $\mathcal{D}$. As our main contribution, we develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees, with running time $n^{1 + o(1)} d$. All prior algorithms for this problem had additional polynomial factors in $\frac 1 \alpha$. As a corollary, we obtain the first almost-linear time algorithms for clustering mixtures of $k$ separated well-behaved distributions, nearly-matching the statistical guarantees of spectral methods. Prior clustering algorithms inherently relied on an application of $k$-PCA, thereby incurring runtimes of $\Omega(n d k)$. This marks the first runtime improvement for this basic statistical problem in nearly two decades. The starting point of our approach is a novel and simpler near-linear time robust mean estimation algorithm in the $\alpha \to 1$ regime, based on a one-shot matrix multiplicative weights-inspired potential decrease. We crucially leverage this new algorithmic framework in the context of the iterative multi-filtering technique of Diakonikolas et al. '18, '20, providing a method to simultaneously cluster and downsample points using one-dimensional projections --- thus, bypassing the $k$-PCA subroutines required by prior algorithms.
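To make the problem setup concrete, the following Python sketch (an illustration only, not the paper's algorithm) builds a toy instance in which an $\alpha$-fraction of the points are i.i.d. samples from a well-behaved distribution and the remaining $(1-\alpha)$-fraction are arbitrary, and contrasts the plain sample mean with the best entry of a small candidate list. The helper names make_list_decodable_instance and list_error are hypothetical, introduced only for this example.

import numpy as np

def make_list_decodable_instance(n, d, alpha, rng=None):
    # Toy instance of the setting above: an alpha-fraction of the n points
    # in R^d are i.i.d. N(mu, I_d) samples (a "well-behaved" D), and the
    # remaining (1 - alpha)-fraction are arbitrary; here the adversary
    # simply places them in a far-away cluster.
    rng = np.random.default_rng(rng)
    mu = rng.normal(size=d)                            # unknown true mean of D
    n_in = int(alpha * n)
    inliers = mu + rng.normal(size=(n_in, d))          # i.i.d. samples from D
    outliers = 20.0 + rng.normal(size=(n - n_in, d))   # arbitrary corruptions
    return np.vstack([inliers, outliers]), mu, n_in

def list_error(candidates, mu):
    # A list-decodable estimator succeeds if at least one candidate is close
    # to mu, so the relevant error is the distance of the best list entry.
    return min(np.linalg.norm(c - mu) for c in candidates)

if __name__ == "__main__":
    T, mu, n_in = make_list_decodable_instance(n=10_000, d=50, alpha=0.1, rng=0)
    # With a (1 - alpha) majority of corruptions, the plain sample mean fails badly.
    print("sample-mean error:", np.linalg.norm(T.mean(axis=0) - mu))
    # A short list that happens to contain the inliers' empirical mean (known
    # here only because this is a simulation) succeeds.
    print("best-of-list error:", list_error([T.mean(axis=0), T[:n_in].mean(axis=0)], mu))

The sketch only illustrates why a list of candidates is the right output format when a majority of the data is corrupted; the paper's contribution is producing such a list in $n^{1+o(1)} d$ time without the $\operatorname{poly}(1/\alpha)$ overhead of prior work.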

Updated: 2021-06-17