Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-03-04, DOI: arxiv-2103.03078
William Paul, Yinzhi Cao, Miaomiao Zhang, Phil Burlina

Machine learning (ML) models used in medical imaging diagnostics can be vulnerable to a variety of privacy attacks, including membership inference attacks, which lead to violations of regulations governing the use of medical data and threaten to compromise their effective deployment in the clinic. In contrast to most recent work in privacy-aware ML, which has focused on model alteration and post-processing steps, we propose here a novel and complementary scheme that enhances the security of medical data by controlling the data sharing process. We develop and evaluate a privacy defense protocol based on a generative adversarial network (GAN) that allows a medical data sourcer (e.g. a hospital) to provide an external agent (a modeler) with a proxy dataset synthesized from the original images, so that the resulting diagnostic systems made available to model consumers are rendered resilient to privacy attackers. We validate the proposed method on a retinal diagnostics AI system for diabetic retinopathy, a setting that carries the risk of leaking private information. To incorporate the concerns of both privacy advocates and modelers, we introduce a metric that evaluates privacy and utility performance in combination, and demonstrate, using this novel metric together with classical ones, that our approach, by itself or in conjunction with other defenses, provides state-of-the-art (SOTA) performance for defending against privacy attacks.
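The abstract describes the protocol only at a high level: the data sourcer fits a generative model on the private images, and only synthetic proxy data leaves the institution for model training. The sketch below is a minimal, hypothetical illustration of that flow, not the authors' implementation; the conditional-GAN architecture, the 64x64 single-channel image size, the two diagnostic classes, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the proxy-data-sharing protocol (not the paper's code).
# Assumptions: 64x64 grayscale images, 2 diagnostic classes, toy MLP-style
# conditional GAN; real systems would use far larger convolutional models.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

LATENT, N_CLASSES, IMG = 100, 2, 64  # illustrative sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, LATENT)
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh())
    def forward(self, z, y):
        # Condition the latent code on the class label, then decode to an image.
        return self.net(z * self.embed(y)).view(-1, 1, IMG, IMG)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, IMG * IMG)
        self.net = nn.Sequential(
            nn.Linear(2 * IMG * IMG, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x, y):
        flat = torch.cat([x.view(x.size(0), -1), self.embed(y)], dim=1)
        return self.net(flat)

def train_gan(private_images, private_labels, epochs=5):
    """Data sourcer (hospital) side: fit a conditional GAN on the private images."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    loader = DataLoader(TensorDataset(private_images, private_labels),
                        batch_size=32, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            z = torch.randn(x.size(0), LATENT)
            fake = G(z, y)
            # Discriminator step: distinguish real from generated images.
            d_loss = (bce(D(x, y), torch.ones(x.size(0), 1)) +
                      bce(D(fake.detach(), y), torch.zeros(x.size(0), 1)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # Generator step: try to fool the discriminator.
            g_loss = bce(D(fake, y), torch.ones(x.size(0), 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G

def make_proxy_dataset(G, n_per_class=500):
    """Sample a synthetic proxy dataset; only this leaves the hospital."""
    xs, ys = [], []
    with torch.no_grad():
        for c in range(N_CLASSES):
            y = torch.full((n_per_class,), c, dtype=torch.long)
            xs.append(G(torch.randn(n_per_class, LATENT), y))
            ys.append(y)
    return torch.cat(xs), torch.cat(ys)

# Modeler side: the released diagnostic classifier is trained only on the proxy
# data, so a membership inference attack against it has no direct access to the
# real patients' images that stayed with the data sourcer.
```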

Updated: 2021-03-05