The fairness-accuracy Pareto front
Statistical Analysis and Data Mining (IF 2.1) Pub Date: 2021-10-27, DOI: 10.1002/sam.11560
Susan Wei, Marc Niethammer

Algorithmic fairness seeks to identify and correct sources of bias in machine learning algorithms. Confoundingly, ensuring fairness often comes at the cost of accuracy. In this work we provide formal tools for reconciling this fundamental tension in algorithmic fairness. Specifically, we draw on the concept of Pareto optimality from multiobjective optimization and seek the fairness-accuracy Pareto front of a neural network classifier. We demonstrate that many existing algorithmic fairness methods amount to the so-called linear scalarization scheme, which has severe limitations in recovering Pareto optimal solutions. We instead apply the Chebyshev scalarization scheme, which is provably better at recovering Pareto optimal solutions and no more computationally burdensome than the linear scheme.
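To make the distinction concrete, the sketch below (not the authors' code) contrasts linear scalarization, a weighted sum of objectives, with Chebyshev scalarization, a weighted max of deviations from an ideal point, for a classifier trained against an accuracy loss and a fairness loss. The fairness surrogate, the weights, and the ideal point `z_star` are illustrative assumptions.

```python
# Minimal sketch of linear vs. Chebyshev scalarization for a
# fairness-accuracy trade-off. All names and values are illustrative.
import torch
import torch.nn.functional as F

def fairness_gap(logits, groups):
    """Absolute gap in mean positive-class score between two groups
    (a smooth surrogate for the demographic parity gap)."""
    probs = torch.sigmoid(logits).squeeze(-1)
    return (probs[groups == 0].mean() - probs[groups == 1].mean()).abs()

def linear_scalarization(losses, weights):
    """Weighted sum of objectives: only reaches points on the convex
    hull of the Pareto front."""
    return sum(w * l for w, l in zip(weights, losses))

def chebyshev_scalarization(losses, weights, z_star):
    """Weighted max of deviations from an ideal point z_star: can also
    reach Pareto optimal points in non-convex regions of the front."""
    return torch.stack([w * (l - z) for w, l, z in zip(weights, losses, z_star)]).max()

# Toy training step for a linear classifier on random data.
torch.manual_seed(0)
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,)).float()
g = torch.randint(0, 2, (256,))          # protected-group labels
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

logits = model(X)
acc_loss = F.binary_cross_entropy_with_logits(logits.squeeze(-1), y)
fair_loss = fairness_gap(logits, g)

# Chebyshev scalarization with equal weights and ideal point (0, 0);
# swap in linear_scalarization to trace only the convex part of the front.
loss = chebyshev_scalarization([acc_loss, fair_loss],
                               weights=[0.5, 0.5], z_star=[0.0, 0.0])
opt.zero_grad()
loss.backward()
opt.step()
```

Sweeping the weight vector and retraining would trace out candidate points on the fairness-accuracy front; under Chebyshev scalarization this sweep can, in principle, recover Pareto optimal points that a weighted sum cannot.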
