BDFL: A Byzantine-Fault-Tolerance Decentralized Federated Learning Method for Autonomous Vehicle
IEEE Transactions on Vehicular Technology (IF 6.1), Pub Date: 2021-08-04, DOI: 10.1109/tvt.2021.3102121
Jin-Hua Chen , Min-Rong Chen , Guo-Qiang Zeng , Jiasi Weng

Autonomous Vehicles (AVs) leverage Machine Learning (ML) to improve the self-driving experience. However, the large-scale collection of AVs' data for training inevitably leads to privacy leakage. Federated Learning (FL) has been proposed to address privacy leakage, but it remains exposed to security threats such as model inversion and membership inference. Therefore, the vulnerabilities of FL must be brought to the forefront when applying it to AVs. We propose BDFL, a novel Byzantine-Fault-Tolerant (BFT) decentralized FL method with privacy preservation for AVs. In this paper, a Peer-to-Peer (P2P) FL framework with BFT is built by extending the HydRand protocol. To protect its model, each AV uses a Publicly Verifiable Secret Sharing (PVSS) scheme, which allows anyone to verify the correctness of the encrypted shares. Evaluation results on the MNIST dataset show that introducing decentralized FL into the AV domain is feasible and that the proposed BDFL outperforms other BFT-based FL methods. Furthermore, experimental results on the KITTI dataset demonstrate the practicality of BDFL for improving multi-object recognition performance in AV scenarios. Finally, experiments on the MNIST and KITTI datasets also confirm that the proposed PVSS-based data privacy preservation scheme has no side effects on model parameters.
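The abstract only names the building blocks (HydRand-based P2P consensus, PVSS for share verification, BFT tolerance of faulty peers). The sketch below is a simplified illustration of the general idea rather than the authors' BDFL protocol: it substitutes plain additive secret sharing for PVSS (omitting the public-verifiability commitments), does not model the HydRand consensus rounds, and uses a coordinate-wise median as a generic Byzantine-robust aggregation rule. All names and parameters (NUM_AVS, NUM_BYZANTINE, local_update, etc.) are hypothetical.

```python
# Illustrative sketch only (NOT the BDFL protocol from the paper): simulated AVs
# compute local model updates, hide them with additive secret sharing before
# exchange, and the group aggregates with a coordinate-wise median as a simple
# stand-in for Byzantine-fault-tolerant aggregation.
import numpy as np

rng = np.random.default_rng(0)
NUM_AVS = 7          # hypothetical number of peers in the P2P network
NUM_BYZANTINE = 2    # peers that send corrupted updates
DIM = 10             # toy model dimension

def local_update(peer_id: int) -> np.ndarray:
    """Stand-in for one round of local training on a peer's private data."""
    return rng.normal(loc=1.0, scale=0.1, size=DIM)

def additive_shares(secret: np.ndarray, n: int) -> list[np.ndarray]:
    """Split a vector into n shares that sum to the secret; no single share
    reveals the secret. Real PVSS would add publicly verifiable commitments."""
    shares = [rng.normal(size=secret.shape) for _ in range(n - 1)]
    shares.append(secret - sum(shares))
    return shares

# 1. Each honest peer secret-shares its update; Byzantine peers send garbage.
updates = []
for pid in range(NUM_AVS):
    upd = local_update(pid)
    if pid < NUM_BYZANTINE:
        upd = upd * -50.0            # adversarial update
    shares = additive_shares(upd, NUM_AVS)
    # In a real protocol each share would be encrypted for one specific peer
    # and reconstructed only after verification; here we just recombine.
    updates.append(sum(shares))

# 2. Byzantine-robust aggregation: the coordinate-wise median tolerates a
#    minority of corrupted updates, unlike a plain mean.
stacked = np.stack(updates)
robust = np.median(stacked, axis=0)
naive = np.mean(stacked, axis=0)
print("median aggregate :", np.round(robust, 2))
print("mean aggregate   :", np.round(naive, 2))   # visibly pulled off by attackers
```

Running the script shows the median aggregate staying close to the honest updates (around 1.0 per coordinate) while the plain mean is dragged far off by the two corrupted peers, which is the intuition behind using a BFT scheme rather than naive averaging in decentralized FL.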
