RHNAS: Realizable Hardware and Neural Architecture Search
arXiv - CS - Hardware Architecture. Pub Date: 2021-06-17, DOI: arxiv-2106.09180
Yash Akhauri, Adithya Niranjan, J. Pablo Muñoz, Suvadeep Banerjee, Abhijit Davare, Pasquale Cocchini, Anton A. Sorokin, Ravi Iyer, Nilesh Jain

The rapidly evolving field of Artificial Intelligence necessitates automated approaches to co-design neural network architectures and neural accelerators, both to maximize system efficiency and to address productivity challenges. To enable joint optimization over this vast design space, there has been growing interest in differentiable NN-HW co-design. Fully differentiable co-design has reduced the resource requirements for discovering optimized NN-HW configurations, but it fails to adapt to general hardware accelerator search spaces, because the search spaces of many hardware accelerators contain non-synthesizable (invalid) designs. To enable efficient and realizable co-design of configurable hardware accelerators with arbitrary neural network search spaces, we introduce RHNAS, a method that combines reinforcement learning for hardware optimization with differentiable neural architecture search. RHNAS discovers realizable NN-HW designs with 1.84x lower latency and 1.86x lower energy-delay product (EDP) on ImageNet, and 2.81x lower latency and 3.30x lower EDP on CIFAR-10, over the default hardware accelerator design.
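The abstract's central idea, an RL loop over discrete hardware configurations (with a penalty reward for non-synthesizable points) wrapped around gradient-based updates of continuous architecture parameters, can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: the hardware search space, the `is_synthesizable` check, the op latencies, and the surrogate latency model are all illustrative assumptions.

```python
import math
import random

# Hypothetical toy hardware search space (not the paper's accelerator space).
HW_CONFIGS = [
    {"pe_array": pe, "buffer_kb": buf}
    for pe in (8, 16, 32)
    for buf in (64, 128, 256)
]

def is_synthesizable(cfg):
    # Stand-in for a real synthesizability check: one design point is invalid.
    return not (cfg["pe_array"] == 32 and cfg["buffer_kb"] == 64)

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

OP_LATENCY = [1.0, 2.0, 4.0]  # assumed per-op latency of three candidate NN ops

def latency(cfg, alpha):
    # Surrogate cost, differentiable in alpha: softmax-weighted op latency
    # scaled by a hardware-dependent factor (an assumed analytic model).
    w = softmax(alpha)
    hw_scale = 100.0 / (cfg["pe_array"] * math.sqrt(cfg["buffer_kb"]))
    return hw_scale * sum(wi * oi for wi, oi in zip(w, OP_LATENCY))

def co_search(steps=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0] * len(HW_CONFIGS)   # RL policy preferences over HW configs
    alpha = [0.0] * len(OP_LATENCY)   # differentiable architecture parameters
    baseline = 0.0
    for t in range(steps):
        # RL side: sample a hardware config; a non-synthesizable design
        # receives a large penalty reward instead of breaking the loop.
        probs = softmax(prefs)
        idx = rng.choices(range(len(HW_CONFIGS)), probs)[0]
        cfg = HW_CONFIGS[idx]
        reward = -latency(cfg, alpha) if is_synthesizable(cfg) else -100.0
        baseline += (reward - baseline) / (t + 1)   # running-mean baseline
        for j in range(len(prefs)):                 # REINFORCE-style update
            indicator = 1.0 if j == idx else 0.0
            prefs[j] += lr * (reward - baseline) * (indicator - probs[j])
        # Differentiable side: analytic gradient descent on latency w.r.t.
        # alpha, only on valid hardware points.
        if is_synthesizable(cfg):
            w = softmax(alpha)
            mean_lat = sum(wi * oi for wi, oi in zip(w, OP_LATENCY))
            hw_scale = 100.0 / (cfg["pe_array"] * math.sqrt(cfg["buffer_kb"]))
            for k in range(len(alpha)):
                grad = hw_scale * w[k] * (OP_LATENCY[k] - mean_lat)
                alpha[k] -= lr * grad
    best = max(range(len(HW_CONFIGS)), key=lambda j: prefs[j])
    return HW_CONFIGS[best], softmax(alpha)

cfg, arch_weights = co_search()
```

In this toy setting the RL policy learns to avoid the invalid hardware point while the gradient updates shift architecture weight toward the cheapest op; the real method replaces the surrogate model with measured accelerator cost and the softmax mixture with a full differentiable NAS supernet.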

Updated: 2021-06-18