Adversarial interference and its mitigations in privacy-preserving collaborative machine learning
Nature Machine Intelligence (IF 23.8) Pub Date: 2021-09-17, DOI: 10.1038/s42256-021-00390-3
Dmitrii Usynin, Daniel Rueckert, Ben Glocker, Georgios Kaissis, Jonathan Passerat-Palmbach, Alexander Ziller, Marcus Makowski, Rickmer Braren

Despite the rapid increase of data available to train machine-learning algorithms in many domains, several applications suffer from a paucity of representative and diverse data. The medical and financial sectors are, for example, constrained by legal, ethical, regulatory and privacy concerns preventing data sharing between institutions. Collaborative learning systems, such as federated learning, are designed to circumvent such restrictions and provide a privacy-preserving alternative by eschewing data sharing and relying instead on the distributed remote execution of algorithms. However, such systems are susceptible to malicious adversarial interference attempting to undermine their utility or divulge confidential information. Here we present an overview and analysis of current adversarial attacks and their mitigations in the context of collaborative machine learning. We discuss the applicability of attack vectors to specific learning contexts and attempt to formulate a generic foundation for adversarial influence and mitigation mechanisms. We moreover show that a number of context-specific learning conditions are exploited in similar fashion across all settings. Lastly, we provide a focused perspective on open challenges and promising areas of future research in the field.
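The abstract's description of federated learning — clients keep their data local and only model updates travel to a coordinating server — can be illustrated with a minimal sketch of federated averaging (FedAvg). This is an illustrative toy (a noiseless linear model, hypothetical function names, and synthetic data), not the paper's own method or any specific framework's API:

```python
# Minimal federated-averaging (FedAvg) sketch: each client trains locally on
# private data, and the server aggregates only the resulting model weights.
# Raw data never leaves a client, which is the privacy-preserving property
# the abstract refers to. All names and the toy model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on a linear model (MSE loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients holding private shards drawn from the same underlying model.
true_w = np.array([1.0, -2.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))  # converges toward true_w
```

Note that the shared updates themselves are the attack surface the paper analyses: a malicious participant can poison its `local_update`, and an honest-but-curious server can attempt to infer private training data from the submitted weights.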




Updated: 2021-09-17