VLSI implementation of transcendental function hyperbolic tangent for deep neural network accelerators
Microprocessors and Microsystems ( IF 1.9 ) Pub Date : 2021-05-11 , DOI: 10.1016/j.micpro.2021.104270
Gunjan Rajput , Gopal Raut , Mahesh Chandra , Santosh Kumar Vishvakarma

The extensive use of neural-network applications has prompted researchers to customize designs that speed up computation through ASIC implementation. The choice of activation function (AF) in a neural network is an essential design decision. Building an accurate AF architecture in a digital network faces several challenges, as AFs demand more hardware resources because of their non-linear nature. This paper proposes an efficient approximation scheme for the hyperbolic tangent (tanh) function based purely on a combinational design architecture. The approximation rests on a mathematical analysis that accounts for the maximum allowable error in a neural network. The results show that the proposed combinational AF design is efficient in terms of area, power, and delay, with negligible accuracy loss on the MNIST and CIFAR-10 benchmark datasets. Post-synthesis results show that the proposed design reduces area by 66% and delay by nearly 16% compared to the state of the art.
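The paper's exact segment boundaries and coefficients are not reproduced in this abstract, but the general idea of a combinational, bounded-error tanh approximation can be illustrated with a well-known hardware-friendly scheme: the PLAN piecewise-linear sigmoid (whose slopes are powers of two, so multipliers reduce to shifts), mapped to tanh via the identity tanh(x) = 2·sigmoid(2x) − 1. This is a sketch of the technique class, not the authors' design:

```python
import math

def plan_sigmoid(x: float) -> float:
    # PLAN piecewise-linear sigmoid approximation: all slopes are powers
    # of two (1/4, 1/8, 1/32), so in hardware each multiply is a shift.
    s = abs(x)
    if s >= 5.0:
        y = 1.0                        # saturation region
    elif s >= 2.375:
        y = 0.03125 * s + 0.84375
    elif s >= 1.0:
        y = 0.125 * s + 0.625
    else:
        y = 0.25 * s + 0.5
    # exploit the symmetry sigmoid(-x) = 1 - sigmoid(x)
    return y if x >= 0 else 1.0 - y

def tanh_approx(x: float) -> float:
    # tanh(x) = 2 * sigmoid(2x) - 1, so the same piecewise table serves
    # both activation functions.
    return 2.0 * plan_sigmoid(2.0 * x) - 1.0

# Worst-case absolute error over a dense grid: the "maximum allowable
# error" analysis in the paper would bound a quantity like this.
max_err = max(abs(tanh_approx(i / 100.0) - math.tanh(i / 100.0))
              for i in range(-600, 601))
```

Because every region is a single add-and-shift, the whole function maps to a small combinational block (comparators plus one adder), which is the kind of multiplier-free structure that yields the area and delay savings reported above.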




Updated: 2021-05-28