Trusted AI in Multiagent Systems: An Overview of Privacy and Security for Distributed Learning
Proceedings of the IEEE (IF 20.6), Pub Date: 2023-09-14, DOI: 10.1109/jproc.2023.3306773
Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor

Motivated by the advancing computational capacity of distributed end-user equipment (UE), as well as the increasing concerns about sharing private data, there has been considerable recent interest in machine learning (ML) and artificial intelligence (AI) that can be processed on distributed UEs. Specifically, in this paradigm, parts of an ML process are outsourced to multiple distributed UEs. Then, the processed information is aggregated at a certain level at a central server, which turns a centralized ML process into a distributed one and brings about significant benefits. However, this new distributed ML paradigm raises new privacy and security risks. In this article, we provide a survey of the emerging security and privacy risks of distributed ML from the unique perspective of information exchange levels, which are defined according to the key steps of an ML process, i.e., we consider the following levels: 1) the level of preprocessed data; 2) the level of learning models; 3) the level of extracted knowledge; and 4) the level of intermediate results. We explore and analyze the potential threats at each information exchange level based on an overview of current state-of-the-art attack mechanisms and then discuss possible defense methods against such threats. Finally, we complete the survey by providing an outlook on the challenges and possible directions for future research in this critical area.
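
To make the surveyed paradigm concrete, below is a minimal federated-averaging-style sketch in Python (using NumPy) of how local updates computed on each UE's private data can be aggregated at a central server. It is not taken from the paper; the function names (local_step, fed_avg) and the synthetic linear-regression task are assumptions chosen only to illustrate the information flow.

```python
# Illustrative sketch of distributed ML with server-side aggregation.
# Assumption: clients exchange model parameters (the "level of learning models").
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(client_models, weights):
    """Server-side aggregation: weighted average of the exchanged parameters."""
    return np.average(client_models, axis=0, weights=weights)

# Synthetic private datasets held by three UEs; raw data never leaves a client.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# Communication rounds: each UE sends updated parameters, the server aggregates.
w_global = np.zeros(2)
for _ in range(30):
    updates = [local_step(w_global, X, y) for X, y in clients]
    w_global = fed_avg(updates, weights=[len(y) for _, y in clients])

print("aggregated model:", w_global)  # approaches true_w = [2.0, -1.0]
```

The exchanged quantities in this sketch (the per-client parameter updates) correspond to the model-level information exchange discussed in the survey; it is exactly this kind of shared information that attack mechanisms target and that defense methods aim to protect.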

Updated: 2023-09-15