The Diversity Crisis of Software Engineering for Artificial Intelligence
IEEE Software (IF 3.3). Pub Date: 2020-09-01. DOI: 10.1109/ms.2020.2975075
Bram Adams, Foutse Khomh

Artificial Intelligence (AI) is experiencing a "diversity crisis."1 Several reports1-3 have shown that the breakthrough of modern AI has not yet improved on existing diversity challenges regarding gender, race, geography, and other factors, neither for the end users of AI products nor for the companies and organizations building them. Plenty of examples have surfaced in which biased data engineering practices or existing data sets led to incorrect, painful, or sometimes even harmful consequences for unsuspecting end users.4 The problem is that ruling out such biases is not straightforward, due to the sheer number of different bias types.5 To have a chance of eliminating as many biases as possible, most experts agree that the teams and organizations building AI products should be made more diverse.1-3 This harkens back to Linus' law6 for open source development ("given enough eyeballs, all bugs are shallow"), but applied to the development process of AI products.

Last updated: 2020-09-01