Prune Responsibly
arXiv - CS - Computers and Society. Pub Date: 2020-09-10, arXiv:2009.09936
Michela Paganini

Irrespective of the specific definition of fairness in a machine learning application, pruning the underlying model affects it. We investigate and document the emergence and exacerbation of undesirable per-class performance imbalances, across tasks and architectures, for almost one million categories considered across over 100K image classification models that undergo a pruning process. We demonstrate the need for transparent reporting, inclusive of bias, fairness, and inclusion metrics, in real-life engineering decision-making around neural network pruning. In response to calls for quantitative evaluation of AI models to be population-aware, we present neural network pruning as a tangible application domain in which the ways that accuracy-efficiency trade-offs disproportionately affect underrepresented or outlier groups have historically been overlooked. We provide a simple, Pareto-based framework for inserting fairness considerations into value-based operating-point selection processes and for re-evaluating pruning technique choices.
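The Pareto-based selection idea can be sketched as follows: score each candidate pruned model on both overall accuracy and a fairness-sensitive axis (here, worst-class accuracy), and keep only the non-dominated operating points. This is a minimal illustrative sketch, not the paper's implementation; the candidate tuples and the choice of worst-class accuracy as the fairness metric are assumptions for the example.

```python
# Hypothetical sketch of fairness-aware operating-point selection for
# pruned models. Each candidate is (sparsity, overall_acc, worst_class_acc);
# the numbers below are illustrative, not results from the paper.

def pareto_front(candidates):
    """Return the candidates not dominated on the two accuracy axes.

    A point dominates another if it is at least as good on both
    overall and worst-class accuracy, and strictly better on one.
    """
    front = []
    for i, (sparsity, acc, worst) in enumerate(candidates):
        dominated = any(
            a2 >= acc and w2 >= worst and (a2 > acc or w2 > worst)
            for j, (_, a2, w2) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((sparsity, acc, worst))
    return front

candidates = [
    (0.0, 0.760, 0.610),  # dense baseline
    (0.5, 0.765, 0.550),  # overall accuracy intact, worst class degraded
    (0.9, 0.700, 0.400),  # aggressive pruning: dominated on both axes
]
front = pareto_front(candidates)
```

Ranking on overall accuracy alone would pick the 50%-sparse model; exposing the worst-class axis shows that choice trades away per-class fairness, which is the kind of consideration the framework surfaces.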

Updated: 2020-09-22