A Normative approach to Attest Digital Discrimination
arXiv - CS - Artificial Intelligence Pub Date : 2020-07-14 , DOI: arxiv-2007.07092 Natalia Criado, Xavier Ferrer, Jose M. Such
Digital discrimination is a form of discrimination whereby users are
automatically treated unfairly, unethically or just differently based on their
personal data by a machine learning (ML) system. Examples of digital
discrimination include low-income neighbourhoods being targeted with
high-interest loans or low credit scores, and women being undervalued by 21% in online
marketing. Recently, different techniques and tools have been proposed to
detect biases that may lead to digital discrimination. These tools often
require technical expertise to be executed and for their results to be
interpreted. To allow non-technical users to benefit from ML, simpler notions
and concepts to represent and reason about digital discrimination are needed.
In this paper, we use norms as an abstraction to represent different situations
that may lead to digital discrimination. In particular, we formalise
non-discrimination norms in the context of ML systems and propose an algorithm
to check whether ML systems violate these norms.
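The abstract's core idea, representing non-discrimination constraints as explicit norms and algorithmically checking an ML system's outputs against them, can be illustrated with a small sketch. The names (`NonDiscriminationNorm`, `violates`) and the demographic-parity-style check used here are illustrative assumptions, not the paper's actual formalism:

```python
from dataclasses import dataclass

@dataclass
class NonDiscriminationNorm:
    """A prohibition norm: the favourable-outcome rate must not differ
    across groups of a protected attribute by more than epsilon.
    (Hypothetical representation; the paper's formalisation may differ.)"""
    protected_attribute: str   # e.g. "gender"
    favourable_outcome: int    # label considered favourable, e.g. 1
    epsilon: float             # tolerated rate gap between groups


def violates(norm: NonDiscriminationNorm, records, outcomes) -> bool:
    """Check ML outputs against the norm: compute the favourable-outcome
    rate per protected group and flag a violation if the gap between the
    best- and worst-treated groups exceeds epsilon."""
    favourable = {}
    counts = {}
    for rec, out in zip(records, outcomes):
        group = rec[norm.protected_attribute]
        counts[group] = counts.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + (out == norm.favourable_outcome)
    rates = [favourable[g] / counts[g] for g in counts]
    return max(rates) - min(rates) > norm.epsilon
```

For example, if a model grants loans to all male applicants but only half of female applicants, a norm with a 0.2 tolerance is violated; identical rates across groups satisfy it.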
Updated: 2020-08-13