Modelling GDPR-Compliant Explanations for Trustworthy AI
arXiv - CS - Human-Computer Interaction Pub Date : 2021-09-09 , DOI: arxiv-2109.04165
Francesco Sovrano, Fabio Vitali, Monica Palmirani

Through the General Data Protection Regulation (GDPR), the European Union has set out its vision for Automated Decision-Making (ADM) and AI, which must be reliable and human-centred. In particular, we are interested in the Right to Explanation, which requires industry to produce explanations of ADM. The High-Level Expert Group on Artificial Intelligence (AI-HLEG), set up to support the implementation of this vision, has produced guidelines discussing the types of explanations that are appropriate for user-centred (interactive) Explanatory Tools. In this paper we propose our version of Explanatory Narratives (EN), based on user-centred concepts drawn from ISO 9241, as a model for user-centred explanations aligned with the GDPR and the AI-HLEG guidelines. Through the use of ENs we convert the problem of generating explanations for ADM into the identification of an appropriate path over an Explanatory Space, allowing explainees to explore it interactively and produce the explanation best suited to their needs. To this end, we list suitable exploration heuristics, study the properties and structure of explanations, and discuss the proposed model, identifying its strengths and weaknesses.
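The core idea of recasting explanation generation as path identification over an Explanatory Space can be pictured as path search over a graph of explanatory content. The sketch below is purely illustrative and is not the authors' implementation: the node names, the graph shape, and the choice of breadth-first search as the "exploration heuristic" are all assumptions made for the example.

```python
from collections import deque

# Hypothetical Explanatory Space: nodes are pieces of explanatory content
# about an automated decision, directed edges are admissible narrative steps.
# All names here are illustrative assumptions, not from the paper.
explanatory_space = {
    "decision": ["input-features", "legal-basis"],
    "input-features": ["feature-importance"],
    "legal-basis": ["gdpr-article-22"],
    "feature-importance": [],
    "gdpr-article-22": [],
}

def explanation_path(space, start, goal):
    """Breadth-first search for a shortest narrative path from the initial
    explanandum to the content the explainee asked about; returns None if
    the goal is unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in space.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

In this toy model, an explainee asking "what is the legal basis?" would be served the path `decision → legal-basis → gdpr-article-22`; richer heuristics (relevance, the explainee's prior knowledge) would replace plain BFS in any realistic tool.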

Updated: 2021-09-10