The dual function of explanations: Why it is useful to compute explanations
Computer Law & Security Review (IF 2.707), Pub Date: 2021-03-18, DOI: 10.1016/j.clsr.2020.105527
Niko Tsakalakis, Sophie Stalla-Bourdillon, Laura Carmichael, Trung Dong Huynh, Luc Moreau, Ayah Helal

Whilst the legal debate concerning automated decision-making has focused mainly on whether a ‘right to explanation’ exists in the GDPR, the emergence of ‘explainable Artificial Intelligence’ (XAI) has produced taxonomies for explaining Artificial Intelligence (AI) systems. However, various researchers have warned that transparency of the algorithmic processes is not, in itself, enough: better and easier tools are needed for assessing and reviewing the socio-technical systems that incorporate automated decision-making. The PLEAD project suggests that, beyond fulfilling the obligations set out in Article 22 of the GDPR, explanations can also support a holistic compliance strategy when used as detective controls. PLEAD aims to show that computable explanations can facilitate monitoring and auditing, and make compliance more systematic. Automated, computable explanations can serve as key controls for fulfilling accountability and data-protection-by-design obligations, empowering both controllers and data subjects. This opinion piece presents the work undertaken by the PLEAD project towards facilitating the generation of computable explanations. PLEAD leverages provenance-based technology to compute explanations both as external detective controls for the benefit of data subjects and as internal detective controls for the benefit of the data controller.
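The abstract does not describe PLEAD's implementation, but the core idea of a provenance-based computable explanation can be illustrated. The following is a minimal, hypothetical sketch only: it mimics the entity/activity/agent vocabulary of the W3C PROV model with plain-Python structures, and all identifiers (the decision, activity, and agent names) are invented for illustration, not taken from PLEAD.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical mock-up of a W3C PROV-style provenance record
# for one automated decision. All identifiers are invented for illustration.

@dataclass
class ProvRecord:
    used: list = field(default_factory=list)        # (activity, input entity)
    generated: list = field(default_factory=list)   # (output entity, activity)
    associated: list = field(default_factory=list)  # (activity, responsible agent)

def explain(rec: ProvRecord, decision_id: str) -> str:
    """Derive a simple textual explanation by walking the record backwards
    from a decision entity to the activity that produced it, the inputs that
    activity used, and the agent responsible for it. Because the record
    describes processing that has already happened, querying it acts as a
    detective (after-the-fact) control."""
    activity = next(a for e, a in rec.generated if e == decision_id)
    inputs = [e for a, e in rec.used if a == activity]
    agents = [g for a, g in rec.associated if a == activity]
    return (f"Decision '{decision_id}' was produced by '{activity}', "
            f"which used {inputs} and was carried out under the "
            f"responsibility of {agents}.")

# Example: a loan decision derived from a credit score and an application form.
rec = ProvRecord()
rec.generated.append(("decision/loan-123", "activity/scoring-run-7"))
rec.used += [("activity/scoring-run-7", "data/credit-score"),
             ("activity/scoring-run-7", "data/application-form")]
rec.associated.append(("activity/scoring-run-7", "agent/lender-ltd"))

print(explain(rec, "decision/loan-123"))
```

The same record could serve both audiences the abstract distinguishes: rendered for a data subject, it is an external detective control; queried in bulk by the controller's compliance team, it is an internal one.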




Updated: 2021-03-18