Managing the tension between opposing effects of explainability of artificial intelligence: a contingency theory perspective
Internet Research (IF 5.9), Pub Date: 2021-07-05, DOI: 10.1108/intr-05-2020-0300
Babak Abedin

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space, aiming to identify the opposing effects of AI explainability and the tensions between them, and to propose how these tensions can be managed to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability; instead, the co-existence of enabling and constraining effects must be managed.




Updated: 2021-07-05