Bridging the Gap Between AI and Explainability in the GDPR: Towards Trustworthiness-by-Design in Automated Decision-Making
IEEE Computational Intelligence Magazine (IF 9) | Pub Date: 2022-01-12 | DOI: 10.1109/mci.2021.3129960
Ronan Hamon, Henrik Junklewitz, Ignacio Sanchez, Gianclaudio Malgieri, Paul De Hert

Can satisfactory explanations for complex machine learning models be achieved in high-risk automated decision-making? How can such explanations be integrated into a data protection framework that safeguards a right to explanation? This article explores, from an interdisciplinary point of view, the connection between the existing legal requirements for the explainability of AI systems set out in the General Data Protection Regulation (GDPR) and the current state of the art in the field of explainable AI. It studies the practical challenges of providing human-legible explanations for current and future AI-based decision-making systems, based on two scenarios of automated decision-making: credit risk scoring and the medical diagnosis of COVID-19. These scenarios exemplify the trend towards increasingly complex machine learning algorithms in automated decision-making, both in terms of data and of models. Current machine learning techniques, in particular those based on deep learning, are unable to establish clear causal links between input data and final decisions. This limits the provision of exact, human-legible reasons behind specific decisions, and presents a serious challenge to the provision of satisfactory, fair and transparent explanations. The conclusion is therefore that the quality of explanations may not be considered an adequate safeguard for automated decision-making processes under the GDPR. Accordingly, additional tools should be considered to complement explanations. These could include algorithmic impact assessments, other forms of algorithmic justification based on broader AI principles, and new technical developments in trustworthy AI. This suggests that, ultimately, all of these approaches would need to be considered as a whole.
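To make the notion of "explanation" discussed above concrete, the following minimal sketch (not taken from the article) shows the kind of post-hoc, feature-attribution explanation that current explainable-AI tooling typically produces for a single automated decision. The credit-scoring setting, feature names, synthetic data, and the occlusion-style probe are illustrative assumptions; the sketch also illustrates the article's caveat that such attributions are correlational importance scores, not causal accounts of the decision.

```python
# Illustrative sketch only: a black-box model scores hypothetical credit
# applications, and an occlusion-style probe estimates how much each input
# feature contributed to one specific decision. Feature names, data, and the
# model are assumptions for illustration, not the authors' setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "debt_ratio", "age", "late_payments", "loan_amount"]

# Synthetic stand-in for a credit-scoring dataset.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# One applicant whose automated decision we want to explain.
applicant = X[0:1]
base_prob = model.predict_proba(applicant)[0, 1]

# Occlusion-style local attribution: replace each feature with its dataset
# mean and record how the predicted approval probability shifts. This yields
# a correlational importance ranking, not a causal link between a feature
# and the final decision.
attributions = {}
for i, name in enumerate(feature_names):
    perturbed = applicant.copy()
    perturbed[0, i] = X[:, i].mean()
    attributions[name] = base_prob - model.predict_proba(perturbed)[0, 1]

print(f"Predicted approval probability: {base_prob:.2f}")
for name, delta in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {delta:+.3f}")
```

Such rankings can support transparency obligations, but, as the article argues, they fall short of the exact, human-legible reasons behind a specific decision that a robust right to explanation would require.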

Updated: 2022-01-14