Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation
MIS Quarterly (IF 7.3), Pub Date: 2021-09-01, DOI: 10.25300/misq/2021/16535
Mike Teodorescu, Lily Morse, Yazeed Awwad, Gerald Kane

Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human–ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of and research into human–ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. For each quadrant, we identify significant intersections with previous IS research and distinct managerial approaches to fairness. Several potential research questions emerge from fundamental differences between ML tools trained on data and traditional IS built with code. IS researchers may discover that the differences of ML tools undermine some of the fundamental assumptions upon which classic IS theories and concepts rest. ML may require massive rethinking of significant portions of the corpus of IS research in light of these differences, representing an exciting frontier for research into human–ML augmentation in the years ahead that IS researchers should embrace.

Updated: 2021-09-01