Federated Learning With Spiking Neural Networks
IEEE Transactions on Signal Processing ( IF 5.4 ) Pub Date : 2021-10-21 , DOI: 10.1109/tsp.2021.3121632
Yeshwanth Venkatesha , Youngeun Kim , Leandros Tassiulas , Priyadarshini Panda

As neural networks gain widespread adoption in resource-constrained embedded devices, there is a growing need for low-power neural systems. Spiking Neural Networks (SNNs) are emerging as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs), which are known to be computationally intensive. From an application perspective, since federated learning involves multiple energy-constrained devices, there is considerable scope to leverage the energy efficiency of SNNs. Despite its importance, little attention has been paid to training SNNs on large-scale distributed systems such as federated learning. In this paper, we bring SNNs to a more realistic federated learning scenario. Specifically, we design a federated learning method for training decentralized and privacy-preserving SNNs. To validate the proposed method, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with the CIFAR10 and CIFAR100 benchmarks. We observe that SNNs outperform ANNs in overall accuracy by over 15% when the data is distributed across a large number of clients in the federation, while providing up to $4.3\times$ greater energy efficiency. In addition to efficiency, we also analyze the sensitivity of the proposed federated SNN framework to the data distribution among clients, to stragglers, and to gradient noise, and perform a comprehensive comparison with ANNs. The source code is available at https://github.com/Intelligent-Computing-Lab-Yale/FedSNN.
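The two building blocks the abstract combines, server-side federated averaging of client models and leaky integrate-and-fire (LIF) spiking dynamics, can be illustrated with a minimal sketch. This is not the authors' FedSNN implementation; the function names and parameters below (`fedavg`, `lif_step`, `decay`, `v_th`) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of two components of
# federated SNN training: FedAvg weight aggregation and one LIF step.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average per-layer client parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

def lif_step(v, x, decay=0.9, v_th=1.0):
    """One leaky integrate-and-fire step: leak, integrate, spike, hard reset."""
    v = decay * v + x                      # leak previous potential, add input
    spike = (v >= v_th).astype(float)      # emit a binary spike at threshold
    v = v * (1.0 - spike)                  # reset membrane where a spike fired
    return v, spike

# Two toy clients, each with a single 2x2 "layer"; client 2 holds 3x the data.
c1 = [np.array([[1.0, 2.0], [3.0, 4.0]])]
c2 = [np.array([[3.0, 4.0], [5.0, 6.0]])]
avg = fedavg([c1, c2], client_sizes=[10, 30])
print(avg[0])  # weighted toward client 2

v, s = lif_step(np.array([0.0]), np.array([1.2]))
print(s)  # input exceeds threshold, so the neuron spikes
```

In a full round, each client would run local surrogate-gradient training of its SNN over several LIF timesteps before the server calls `fedavg`; the size-weighted average keeps the global model faithful to clients with more data.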

Updated: 2021-11-19