They who must not be identified—distinguishing personal from non-personal data under the GDPR
International Data Privacy Law (IF 2.500), Pub Date: 2020-02-01, DOI: 10.1093/idpl/ipz026
Michèle Finck, Frank Pallas

In this article, we examine the concept of non-personal data from a law and computer science perspective. The delineation between personal data and non-personal data is of paramount importance for determining the GDPR’s scope of application. This exercise is, however, fraught with difficulty, not least when it comes to de-personalised data, that is, data that was once personal data but has been manipulated with the goal of turning it into anonymous data. This article shows that the legal definition of anonymous data is subject to uncertainty. Indeed, the definitions adopted in the GDPR, by the Article 29 Working Party, and by national supervisory authorities diverge significantly. Whereas the GDPR accepts that a residual risk of identification may remain even for anonymous data, others have insisted that no such risk is acceptable. Based on a review of the technical underpinnings of anonymisation, which is subsequently applied to two concrete case studies involving personal data used on blockchains, we conclude that a residual risk of identification always remains when anonymisation is used. The concluding section links this conclusion more generally to the notion of risk in the GDPR.
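To make the residual-risk point concrete, the following is a minimal sketch of a classic linkage attack (not taken from the article; all data, names, and field names are hypothetical): even after direct identifiers are stripped, a unique combination of quasi-identifiers such as postcode, birth year, and sex can be matched against publicly available auxiliary data to re-identify individuals.

# Hypothetical illustration of a linkage attack on "anonymised" data.
# "Anonymised" records: direct identifiers removed, quasi-identifiers kept.
anonymised = [
    {"zip": "10115", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "10115", "birth_year": 1991, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "10117", "birth_year": 1984, "sex": "F", "diagnosis": "flu"},
]

# Publicly available auxiliary data (e.g. a register) sharing the same quasi-identifiers.
register = [
    {"name": "Alice Example", "zip": "10115", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Example",   "zip": "10115", "birth_year": 1991, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(records, aux):
    # Re-identify any record whose quasi-identifier combination matches
    # exactly one entry in the auxiliary data.
    matches = []
    for r in records:
        key = tuple(r[q] for q in QUASI_IDENTIFIERS)
        candidates = [a for a in aux
                      if tuple(a[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # unique match: successful re-identification
            matches.append((candidates[0]["name"], r["diagnosis"]))
    return matches

print(link(anonymised, register))
# [('Alice Example', 'asthma'), ('Bob Example', 'diabetes')]

The sketch simply shows that removing names does not, by itself, eliminate identifiability whenever the remaining attributes single an individual out against some external dataset, which is the sense in which a residual risk persists.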

Updated: 2020-02-01