SensitiveNets: Learning Agnostic Representations with Application to Face Images
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8). Pub Date: 2020-08-10. DOI: 10.1109/tpami.2020.3015420
Aythami Morales, Julian Fierrez, Ruben Vera-Rodriguez, Ruben Tolosana

This work proposes a novel privacy-preserving neural network feature representation that suppresses sensitive information in a learned space while maintaining the utility of the data. The new international regulations for personal data protection require data controllers to guarantee privacy and avoid discriminatory harm when managing sensitive user data. In our approach, privacy and discrimination are related to each other. Unlike existing approaches that aim directly at improving fairness, the proposed feature representation enforces the privacy of selected attributes. Fairness is thus not the objective, but the result of a privacy-preserving learning method. This approach guarantees that sensitive information cannot be exploited by any agent who processes the output of the model, ensuring both privacy and equality of opportunity. Our method is based on an adversarial regularizer that introduces a sensitive information removal function into the learning objective. The method is evaluated on three different primary tasks (identity, attractiveness, and smiling) and three publicly available benchmarks. In addition, we present a new face annotation dataset with a balanced distribution across genders and ethnic origins. The experiments demonstrate that it is possible to improve privacy and equality of opportunity while retaining competitive performance, independently of the task.
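The core idea described above, combining a primary-task loss with an adversarial regularizer that penalizes recoverability of a sensitive attribute, can be sketched in a toy form. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the uniform-output penalty, and the weight `lam` are assumptions introduced here for illustration.

```python
import math

def task_loss(p_task, y):
    # Cross-entropy for the primary task (e.g. identity verification):
    # negative log-probability assigned to the true label y.
    return -math.log(p_task[y])

def sensitive_removal_penalty(p_sensitive):
    # Illustrative adversarial regularizer (an assumption, not the
    # paper's exact function): penalize deviation of the sensitive-
    # attribute classifier's softmax output from uniform, rewarding
    # representations from which the attribute cannot be inferred.
    k = len(p_sensitive)
    uniform = 1.0 / k
    return sum((p - uniform) ** 2 for p in p_sensitive)

def sensitivenets_objective(p_task, y, p_sensitive, lam=1.0):
    # Combined objective: task utility plus privacy regularizer,
    # weighted by a hypothetical trade-off parameter lam.
    return task_loss(p_task, y) + lam * sensitive_removal_penalty(p_sensitive)
```

When the sensitive-attribute classifier is maximally confused (uniform output), the penalty vanishes and only the task loss remains; confident sensitive-attribute predictions are penalized in proportion to `lam`. In the paper itself this trade-off is learned adversarially over the representation, rather than computed on fixed probabilities as in this sketch.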

Updated: 2024-08-22