AXNet: ApproXimate computing using an end-to-end trainable neural network
arXiv - CS - Machine Learning. Pub Date: 2018-07-27, DOI: arXiv:1807.10458
Zhenghao Peng, Xuyang Chen, Chengwen Xu, Naifeng Jing, Xiaoyao Liang, Cewu Lu, Li Jiang

Neural network based approximate computing is a universal architecture promising tremendous energy-efficiency gains for many error-resilient applications. To guarantee approximation quality, existing works deploy two neural networks (NNs): an approximator and a predictor. The approximator provides the approximate results, while the predictor predicts whether an input is safe to approximate under the given quality requirement. However, it is non-trivial and time-consuming to make these two neural networks coordinate---they have different optimization objectives---by training them separately. This paper proposes a novel neural network structure---AXNet---that fuses the two NNs into a holistic, end-to-end trainable NN. Leveraging the philosophy of multi-task learning, AXNet can substantially improve the invocation rate (the proportion of safe-to-approximate samples) and reduce the approximation error. The training effort also decreases significantly. Experimental results show 50.7% more invocation and substantial cuts in training time compared to existing neural network based approximate computing frameworks.
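To make the fused architecture concrete, here is a minimal sketch of the multi-task idea the abstract describes: one shared trunk feeding two heads, an approximator that regresses the result and a predictor that scores whether the input is safe to approximate. All layer sizes, weights, and the 0.5 safety threshold below are hypothetical illustrations, not the paper's actual design.

```python
import math
import random

random.seed(0)

IN, HIDDEN = 4, 8
# Shared hidden layer: features reused by both tasks (multi-task learning).
W_shared = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(IN)]
w_approx = [random.gauss(0, 1) for _ in range(HIDDEN)]  # approximator head
w_pred = [random.gauss(0, 1) for _ in range(HIDDEN)]    # predictor head

def axnet_forward(x):
    """Return (approximate_result, safe_probability) for one input vector."""
    # Shared trunk with ReLU activation.
    h = [max(0.0, sum(x[i] * W_shared[i][j] for i in range(IN)))
         for j in range(HIDDEN)]
    # Approximator head: the approximate function output.
    approx = sum(h[j] * w_approx[j] for j in range(HIDDEN))
    # Predictor head: probability that the approximation error stays
    # within the quality bound for this input (sigmoid output).
    z = sum(h[j] * w_pred[j] for j in range(HIDDEN))
    safe_p = 1.0 / (1.0 + math.exp(-z))
    return approx, safe_p

x = [random.gauss(0, 1) for _ in range(IN)]
approx, safe_p = axnet_forward(x)
# Invoke the approximator only when the predictor deems the input safe;
# otherwise fall back to exact computation.
use_approx = safe_p > 0.5
```

Because both heads share the trunk and are trained jointly with a combined loss, their objectives are coordinated in a single end-to-end pass rather than through the separate training the abstract identifies as costly.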

Updated: 2018-12-19