Non-empirical problems in fair machine learning
Ethics and Information Technology (IF 3.4). Pub Date: 2021-08-05. DOI: 10.1007/s10676-021-09608-9
Teresa Scantamburlo

The problem of fair machine learning has drawn much attention over the last few years, and the bulk of the solutions offered are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would go unaddressed if one relied entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has made important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent discrimination, additional assessment criteria, and analyses of the interaction between fairness and predictive accuracy. However, the same framework has also raised higher-order issues regarding the translation of fairness into metrics and quantifiable trade-offs. Although the (empirical) tools developed so far are essential to address discrimination encoded in data and algorithms, their integration into society elicits key (conceptual) questions, such as: What kinds of assumptions and decisions underlie the empirical framework? How do the results of the empirical approach penetrate public debate? What kind of reflection and deliberation should stakeholders have over the available fairness metrics? I will outline the empirical approach to fair machine learning, i.e. how the problem is framed and addressed, and suggest that there are important non-empirical issues that should be tackled. While this work focuses on the problem of algorithmic fairness, the lesson extends to other conceptual problems in the analysis of algorithmic decision-making, such as privacy and explainability.
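As an illustration of what "translating fairness into metrics" looks like in practice, the following minimal Python sketch computes two group-fairness criteria commonly discussed in this literature: demographic parity difference and equal opportunity difference. The function names and toy arrays are hypothetical and are not drawn from the paper; the sketch only shows the kind of quantitative tool the empirical framework produces.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups (0 and 1)."""
    tpr = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tpr.append(y_pred[positives].mean())
    return abs(tpr[0] - tpr[1])

# Illustrative data: true labels, a classifier's predictions, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```

The paper's point is precisely that such metrics, however easy to compute, embed assumptions (which groups to compare, which error rates matter) that call for conceptual reflection and stakeholder deliberation rather than purely empirical tuning.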




Updated: 2021-08-09