A survey on security and privacy of federated learning
Future Generation Computer Systems (IF 6.2) Pub Date: 2020-10-10, DOI: 10.1016/j.future.2020.10.007
Viraaji Mothukuri, Reza M. Parizi, Seyedamin Pouriyeh, Yan Huang, Ali Dehghantanha, Gautam Srivastava

Federated learning (FL) is a new breed of Artificial Intelligence (AI) that builds upon decentralized data and training, bringing learning to the edge or directly on-device. FL, often referred to as a new dawn in AI, is still in its infancy and has not yet gained much trust in the community, mainly because of its (unknown) security and privacy implications. To advance the state of research in this area and to realize extensive utilization and mass adoption of the FL approach, its security and privacy concerns must first be identified, evaluated, and documented. FL is preferred in use cases where security and privacy are the key concerns; a clear view and understanding of its risk factors enables an implementer/adopter of FL to build a secure environment and gives researchers a clear vision of possible research areas. This paper aims to provide a comprehensive study of FL’s security and privacy aspects that can help bridge the gap between the current state of federated AI and a future in which mass adoption is possible. We present an illustrative description of FL approaches and various implementation styles, examine the current challenges in FL, and establish a detailed review of the security and privacy concerns that need to be considered in a thorough and clear context. Findings from our study suggest that, overall, there are fewer privacy-specific threats associated with FL than security threats. The most prominent security threats currently are communication bottlenecks, poisoning, and backdoor attacks, while inference-based attacks are the most critical to the privacy of FL. We conclude the paper with much-needed future research directions to make FL adaptable to realistic scenarios.
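
To make the decentralized-training idea concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation scheme the abstract alludes to: each client trains on its own local data and shares only model weights, which the server then averages. The model (logistic regression), synthetic data, and hyperparameters are illustrative assumptions for this sketch, not the survey's own experimental setup.

```python
# Minimal FedAvg sketch: clients train locally, the server averages weights.
# All data and hyperparameters below are synthetic/illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: logistic regression via plain gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_data, rounds=20, dim=5):
    """Server loop: broadcast global weights, collect local updates, average them."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        sizes, updates = [], []
        for X, y in client_data:
            updates.append(local_sgd(global_w, X, y))
            sizes.append(len(y))
        # Weighted average keeps clients with more data proportionally influential.
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Synthetic client datasets purely for demonstration; raw data never leaves a client.
true_w = rng.normal(size=5)
clients = []
for _ in range(4):
    X = rng.normal(size=(100, 5))
    y = (X @ true_w + 0.1 * rng.normal(size=100) > 0).astype(float)
    clients.append((X, y))

print("learned weights:", np.round(fedavg(clients), 2))
```

Because only weight updates cross the network, this loop also illustrates where the threats discussed in the paper arise: a malicious client can poison or backdoor its submitted update, and an honest-but-curious server can attempt inference attacks on the updates it receives.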
