Algorithmic Stability for Adaptive Data Analysis
SIAM Journal on Computing (IF 1.2) Pub Date: 2021-04-20, DOI: 10.1137/16m1103646
Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, Jonathan Ullman

SIAM Journal on Computing, Ahead of Print.
Adaptivity is an important feature of data analysis---the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. [Proceedings of STOC, ACM, 2015, pp. 117--126] and Hardt and Ullman [Proceedings of FOCS, IEEE, 2014, pp. 454--463] initiated the formal study of this problem and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution ${P}$ and a set of $n$ independent samples ${x}$ is drawn from ${P}$. We seek an algorithm that, given ${x}$ as input, accurately answers a sequence of adaptively chosen “queries” about the unknown distribution ${P}$. How many samples $n$ must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions toward resolving this question:

1. We give upper bounds on the number of samples $n$ that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. and have been applied in subsequent work by those authors [Science, 349 (2015), pp. 636--638; Proceedings of NIPS, 2015, pp. 2350--2358].

2. We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries).

As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that stable algorithms of the kind guaranteed by differential privacy imply low generalization error. We also show that weaker stability guarantees such as bounded Kullback--Leibler divergence and total variation distance lead to correspondingly weaker generalization guarantees.
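To make the stability connection concrete, the following is a minimal Python sketch, not taken from the paper, of the standard noise-addition approach this line of work builds on: each adaptively chosen statistical query $q$ (a function mapping a data point to $[0,1]$) is answered with its empirical mean on the sample plus Laplace noise. Since the empirical mean changes by at most $1/n$ when a single sample changes, each answer is $\varepsilon$-differentially private, and the paper's transfer theorem shows, roughly, that $(\varepsilon,\delta)$-differentially private answers deviate from the population values $\mathbb{E}_{z \sim P}[q(z)]$ by more than $O(\varepsilon)$ only with probability $O(\delta/\varepsilon)$, even when queries are chosen adaptively. The function name and parameters below are illustrative, not from the paper.

import numpy as np

def answer_statistical_queries(x, queries, epsilon, rng=None):
    # Hypothetical sketch: answer each [0,1]-valued statistical query with
    # its empirical mean on the sample x, perturbed by Laplace noise.
    # The empirical mean over n points has sensitivity 1/n, so noise of
    # scale 1/(epsilon*n) makes each answer epsilon-differentially private.
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    answers = []
    for q in queries:
        empirical = sum(q(xi) for xi in x) / n
        answers.append(empirical + rng.laplace(scale=1.0 / (epsilon * n)))
    return answers

# Example: estimate the mean of Bernoulli(0.3) data from n = 1000 samples.
x = np.random.default_rng(0).binomial(1, 0.3, size=1000)
print(answer_statistical_queries(x, [lambda z: float(z)], epsilon=0.1))

In a genuinely adaptive session the analyst would choose each next query only after seeing the previous noisy answers; the point of the stability argument is that differential privacy of every answer keeps the empirical values close to the population values even under that feedback loop.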


Updated: 2021-06-01