SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning
arXiv - CS - Hardware Architecture Pub Date : 2021-09-09 , DOI: arxiv-2109.04459 Febin Sunny, Mahdi Nikdast, Sudeep Pasricha
Sparse neural networks can greatly facilitate the deployment of neural
networks on resource-constrained platforms as they offer compact model sizes
while retaining inference accuracy. Because of the sparsity in parameter
matrices, sparse neural networks can, in principle, be exploited in accelerator
architectures for improved energy efficiency and lower latency. However, to realize
these improvements in practice, there is a need to explore sparsity-aware
hardware-software co-design. In this paper, we propose a novel silicon
photonics-based sparse neural network inference accelerator called SONIC. Our
experimental analysis shows that SONIC can achieve up to 5.8x better
performance-per-watt and 8.4x lower energy-per-bit than state-of-the-art sparse
electronic neural network accelerators; and up to 13.8x better
performance-per-watt and 27.6x lower energy-per-bit than the best known
photonic neural network accelerators.
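The sparsity exploitation the abstract alludes to can be illustrated with a minimal software sketch (this is an illustrative compressed-sparse-row example, not the SONIC photonic architecture itself): a pruned weight matrix is stored in compressed form, and the matrix-vector product that dominates inference performs multiply-accumulates only for the nonzero weights.

```python
import numpy as np

# Example pruned weight matrix: 9 of 12 entries are zero (75% sparsity).
W = np.array([
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.5],
    [3.0, 0.0, 0.0, 0.0],
])
x = np.array([1.0, 2.0, 3.0, 4.0])

# Build a CSR (compressed sparse row) representation: store only nonzeros.
values, col_idx, row_ptr = [], [], [0]
for row in W:
    for j, w in enumerate(row):
        if w != 0.0:
            values.append(w)
            col_idx.append(j)
    row_ptr.append(len(values))

# Sparse matrix-vector product: 3 multiply-accumulates instead of 12.
y = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for k in range(row_ptr[i], row_ptr[i + 1]):
        y[i] += values[k] * x[col_idx[k]]

assert np.allclose(y, W @ x)  # matches the dense result
```

A sparsity-aware accelerator realizes the same work reduction in hardware; the hardware-software co-design challenge the abstract mentions is mapping such irregular, compressed layouts onto regular compute arrays efficiently.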
Updated: 2021-09-10