Bias and Discrimination in AI: A Cross-Disciplinary Perspective
IEEE Technology and Society Magazine (IF 2.1) Pub Date: 2021-06-03, DOI: 10.1109/mts.2021.3056293
Xavier Ferrer, Tom van Nuenen, Jose M. Such, Mark Cote, Natalia Criado

Operating at a large scale and impacting large groups of people, automated systems can make consequential and sometimes contestable decisions. Automated decisions can impact a range of phenomena, from credit scores to insurance payouts to health evaluations. These forms of automation can become problematic when they place certain groups or people at a systematic disadvantage. These are cases of discrimination, which is legally defined as the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity. When the unfair treatment is caused by automated decisions, usually taken by intelligent agents or other AI-based systems, the topic of digital discrimination arises. Digital discrimination is prevalent in a diverse range of fields, such as in risk assessment systems for policing and credit scores [1], [2].
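The notion of systematic disadvantage described above can be made concrete. One common operationalization from the fairness literature (not specific to this article) is the disparate impact ratio: the rate of favorable outcomes for a protected group divided by the rate for a reference group, with values below 0.8 (the "four-fifths rule") often flagged as potentially discriminatory. The sketch below illustrates the computation; the loan-approval data, group labels, and threshold are illustrative assumptions, not material from the paper.

# A minimal sketch (not the authors' method) of quantifying systematic
# disadvantage via the disparate impact ratio. The decisions, group labels,
# and the 0.8 threshold (the "four-fifths rule") are hypothetical.

def favorable_rate(decisions, groups, group):
    """Fraction of individuals in `group` receiving a favorable decision (1)."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    ref_rate = favorable_rate(decisions, groups, reference)
    if ref_rate == 0.0:
        return float("inf")
    return favorable_rate(decisions, groups, protected) / ref_rate

# Hypothetical loan-approval decisions (1 = approved) by group membership.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50; below 0.8 suggests potential discrimination

Here group A is approved at a rate of 0.8 and group B at 0.4, giving a ratio of 0.5, well under the four-fifths threshold. This is only one of many fairness measures; the paper's cross-disciplinary discussion is broader than any single metric.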
