Design and investigation of low-complexity Anurupyena Vedic multiplier for machine learning applications
Sādhanā (IF 1.4) Pub Date: 2020-11-03, DOI: 10.1007/s12046-020-01500-4
Santhosh Kumar Parameswaran , Gowrishankar Chinnusamy

The current computing landscape is driven by machine learning and deep learning toward artificial intelligence. Recent investigations exploit parallelism to solve difficult problems. For FPGA implementation, new architectures have been built using VLSI design techniques and parallel computing technologies. Research on reconfigurable computing, machine learning and signal processing must be tracked continuously as artificial intelligence develops, and energy-constrained computing devices must be developed continuously to support machine learning algorithms. Multipliers and adders play a significant role in machine learning algorithms: in the ALU, Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs), the multiplier is an energy-consuming component of signal processing. In this work, a high-speed Vedic multiplier is introduced for DNNs. Parallel–parallel (PP), serial–parallel (SP) and two-speed (TS) versions of the multiplier are compared against standard 64-, 32- and 16-bit models. The results are obtained on an Intel Cyclone V 5CSEMA5U23C6 FPGA using the Intel Quartus 17.0 software suite.
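The abstract does not detail the proposed architecture, but the Anurupyena ("proportionately") sutra it names rests on a simple algebraic identity: x·y = W·(x + y − W) + (W − x)·(W − y), where W is a working base chosen near both operands (typically a small multiple or fraction of a power of the radix). The identity holds for any W; the hardware appeal is that a well-chosen W makes the deviations (W − x) and (W − y) small, so the residual product needs only a narrow multiplier. A minimal sketch (function name and structure are my own, not from the paper):

```python
def anurupyena_multiply(x: int, y: int, working_base: int) -> int:
    """Multiply x and y via the Anurupyena identity.

    x*y = W*(x + y - W) + (W - x)*(W - y)

    The identity is exact for any working base W (expand the right-hand
    side to verify). Choosing W close to x and y keeps the deviation
    product (W - x)*(W - y) small, which is where a hardware realisation
    saves width in the partial-product multiplier.
    """
    dev_x = working_base - x  # deviation of x from the working base
    dev_y = working_base - y  # deviation of y from the working base
    return working_base * (x + y - working_base) + dev_x * dev_y


# Classic worked example: 46 * 48 with working base W = 50
# left part: 50 * (46 + 48 - 50) = 50 * 44 = 2200
# deviation product: (50 - 46) * (50 - 48) = 4 * 2 = 8
# result: 2200 + 8 = 2208
print(anurupyena_multiply(46, 48, 50))  # → 2208
```

Because the identity is exact, any working base gives the correct product; only the sizes of the intermediate terms change, which is what a fixed-width FPGA datapath would exploit.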


