Membership Inference Attacks on Deep Regression Models for Neuroimaging
arXiv - CS - Cryptography and Security Pub Date : 2021-05-06 , DOI: arxiv-2105.02866 Umang Gupta, Dmitris Stripelis, Pradeep K. Lam, Paul M. Thompson, José Luis Ambite, Greg Ver Steeg
Ensuring the privacy of research participants is vital, even more so in
healthcare environments. Deep learning approaches to neuroimaging require large
datasets, and this often necessitates sharing data between multiple sites,
which is antithetical to the privacy objectives. Federated learning is a
commonly proposed solution to this problem. It circumvents the need for data
sharing by sharing parameters during the training process. However, we
demonstrate that allowing access to parameters may leak private information
even if data is never directly shared. In particular, we show that it is
possible to infer if a sample was used to train the model given only access to
the model prediction (black-box) or access to the model itself (white-box) and
some leaked samples from the training data distribution. Such attacks are
commonly referred to as membership inference attacks. We show realistic
membership inference attacks on deep learning models trained for 3D
neuroimaging tasks in both centralized and decentralized setups. We
demonstrate feasible attacks on brain age prediction models (deep learning
models that predict a person's age from their brain MRI scan). We correctly
identified whether an MRI scan was used in model training with a 60% to over
80% success rate, depending on model complexity and security assumptions.
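To illustrate the black-box setting the abstract describes, here is a minimal sketch of a threshold-based membership inference attack on a regression model. All data here is simulated and the loss-threshold rule is one common attack strategy, not necessarily the paper's exact method; the error distributions and threshold choice are illustrative assumptions.

```python
# Sketch of a black-box membership inference attack on a regression model.
# Assumption: the attacker observes only per-sample prediction errors, and
# training members tend to have lower error than unseen samples (overfitting).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absolute age-prediction errors (in years) for samples that
# were in the training set (members) vs. held out (non-members).
member_errors = np.abs(rng.normal(loc=2.0, scale=1.0, size=1000))
nonmember_errors = np.abs(rng.normal(loc=4.0, scale=1.5, size=1000))

def infer_membership(errors, threshold):
    """Predict 'member' when the model's error falls below the threshold."""
    return errors < threshold

# Calibrate the threshold using leaked samples from the training data
# distribution, as the abstract's threat model assumes the attacker has.
threshold = np.median(np.concatenate([member_errors, nonmember_errors]))

# Balanced attack success: mean of true-positive and true-negative rates.
tpr = infer_membership(member_errors, threshold).mean()
tnr = (~infer_membership(nonmember_errors, threshold)).mean()
success = (tpr + tnr) / 2
print(f"attack success rate: {success:.2f}")
```

With a clear gap between member and non-member error distributions, this simple rule already beats random guessing (50%), which is the basic signal such attacks exploit.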
Updated: 2021-05-07