A Hardware/Software Co-Design Methodology for Adaptive Approximate Computing in Clustering and ANN Learning
IEEE Open Journal of the Computer Society ( IF 5.7 ) Pub Date : 2021-01-14 , DOI: 10.1109/ojcs.2021.3051643
Pengfei Huang , Chenghua Wang , Weiqiang Liu , Fei Qiao , Fabrizio Lombardi

As one of the most promising energy-efficient emerging paradigms for designing digital systems, approximate computing has attracted significant attention in recent years. Applications utilizing approximate computing (AxC) can tolerate some loss of quality in the computed results in exchange for higher performance. Approximate arithmetic circuits have been extensively studied; however, their application at the system level has not been widely pursued. Furthermore, when approximate arithmetic circuits are applied at the system level, error-accumulation effects and convergence problems may occur in the computation. Multiple approximate components can interact in a typical datapath, hence benefiting from each other; many applications require datapaths more complex than a single multiplication. In this paper, a hardware/software co-design methodology for adaptive approximate computing is proposed. It uses feature constraints to guide the approximate computation at varying accuracy levels in each iteration of the learning process in Artificial Neural Networks (ANNs). The proposed adaptive methodology also considers the input operand distribution and hybrid approximation. Compared with a baseline design, the proposed method significantly reduces the power-delay product while incurring only a small loss of accuracy. Simulation and a case study of image segmentation validate the effectiveness of the proposed methodology.
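The abstract describes selecting an approximation accuracy level per learning iteration. The paper's actual circuits and adaptation policy are not given here, so the following is only an illustrative sketch under stated assumptions: `approx_mul` models approximation by truncating low-order operand bits (a common AxC technique, not necessarily the one used in the paper), and `choose_level` is a hypothetical error-driven schedule that raises or lowers the truncation level between iterations.

```python
def approx_mul(a: int, b: int, trunc_bits: int) -> int:
    """Approximate integer multiply: zero the low `trunc_bits` bits of
    each operand before multiplying. In hardware this removes partial
    products, trading accuracy for power/delay. Illustrative only."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)


def choose_level(rel_err: float, level: int, max_level: int,
                 tol: float = 0.05) -> int:
    """Hypothetical adaptive policy: truncate more aggressively while
    the observed relative error stays within tolerance, and back off
    otherwise -- mimicking per-iteration accuracy-level selection."""
    if rel_err < tol and level < max_level:
        return level + 1
    if rel_err >= tol and level > 0:
        return level - 1
    return level


if __name__ == "__main__":
    exact = 1234 * 5678
    approx = approx_mul(1234, 5678, 4)
    rel_err = abs(exact - approx) / exact
    # A more aggressive level would be tried next iteration if the
    # error is still within tolerance.
    next_level = choose_level(rel_err, 4, 8)
    print(rel_err, next_level)
```

In a real co-design flow the accuracy level would map to a selectable hardware configuration (e.g. which approximate multiplier variant is active), while the software side monitors a quality metric each iteration to drive the selection.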

Updated: 2021-02-12