Governing machine-learning models: challenging the personal data presumption
International Data Privacy Law (IF 2.500) | Pub Date: 2020-08-03 | DOI: 10.1093/idpl/ipaa009
M R Leiser, Francien Dechesne
Key Points
  • This article confronts assertions made by Dr Michael Veale, Dr Reuben Binns, and Professor Lilian Edwards in ‘Algorithms that remember: Model Inversion Attacks and Data Protection Law’, as well as the general trend by the courts to broaden the definition of ‘personal data’ under Article 4(1) GDPR to include ‘everything data-related’.
  • Veale and others use examples from computer science to suggest that some models, when subjected to certain attacks, reveal personal data. Accordingly, Veale and others argue that data subject rights could be exercised against the model itself.
  • A computer science perspective, as well as case law from the Court of Justice of the European Union, is used to argue that effective machine-learning model governance can be achieved without widening the scope of personal data and that the governance of machine-learning models is better achieved through already existing provisions of data protection and other areas of law.
  • Extending the scope of personal data to machine-learning models would render the protections granted to intelligent endeavours within the black box ineffectual.

