Third ArchEdge Workshop: Exploring the Design Space of Efficient Deep Neural Networks
arXiv - CS - Hardware Architecture. Pub Date: 2020-11-22. DOI: arxiv-2011.10912
Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen

This paper gives an overview of our ongoing work on the design space exploration of efficient deep neural networks (DNNs). Specifically, we cover two aspects: (1) static architecture design efficiency and (2) dynamic model execution efficiency. For static architecture design, unlike existing end-to-end hardware modeling assumptions, we conduct full-stack profiling at the GPU core level to identify better accuracy-latency trade-offs for DNN designs. For dynamic model execution, unlike prior work that tackles model redundancy at the DNN channel level, we explore a new dimension of redundancy in DNN feature maps that can be traversed dynamically at runtime. Lastly, we highlight several open questions that are poised to draw research attention in the next few years.
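As a rough illustration of the static-efficiency side, the sketch below times candidate convolution widths with CUDA events in PyTorch. It is only a minimal, layer-level stand-in for the full-stack, GPU-core-level profiling described in the abstract; the specific widths, input shape, and timing loop are assumptions for illustration.

```python
# Minimal latency-profiling sketch (assumes PyTorch and an available CUDA GPU).
# A simplified stand-in for GPU-level profiling of accuracy-latency trade-offs.
import torch
import torch.nn as nn

def profile_latency_ms(layer, x, warmup=10, iters=50):
    """Average forward latency of `layer` on input `x`, in milliseconds."""
    layer, x = layer.cuda().eval(), x.cuda()
    with torch.no_grad():
        for _ in range(warmup):          # warm up kernels / cuDNN autotuning
            layer(x)
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            layer(x)
        end.record()
        torch.cuda.synchronize()         # wait for all queued kernels to finish
    return start.elapsed_time(end) / iters

if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)       # assumed input shape for illustration
    for out_ch in (64, 128, 256):        # hypothetical candidate design points
        conv = nn.Conv2d(64, out_ch, kernel_size=3, padding=1)
        print(f"conv 64->{out_ch}: {profile_latency_ms(conv, x):.3f} ms")
```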
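On the dynamic-execution side, the following sketch gates feature maps at runtime with a simple magnitude criterion. The thresholding rule, the `FeatureMapGate` name, and the `keep_ratio_threshold` parameter are assumptions for illustration, not the redundancy measure used in the paper.

```python
# Minimal sketch of runtime feature-map gating (assumption: a magnitude-based
# saliency criterion; the paper's actual redundancy measure may differ).
import torch
import torch.nn as nn

class FeatureMapGate(nn.Module):
    """Zeroes out feature maps whose mean absolute activation falls below a
    fraction of the strongest channel, skipping them for downstream compute."""
    def __init__(self, keep_ratio_threshold=0.1):
        super().__init__()
        self.t = keep_ratio_threshold

    def forward(self, x):                      # x: (N, C, H, W)
        saliency = x.abs().mean(dim=(2, 3))    # per-sample, per-channel score
        thresh = self.t * saliency.amax(dim=1, keepdim=True)
        mask = (saliency >= thresh).float()    # 1 = keep, 0 = prune at runtime
        return x * mask[:, :, None, None]

if __name__ == "__main__":
    gate = FeatureMapGate()
    feats = torch.randn(2, 32, 28, 28)
    gated = gate(feats)
    kept = (gated.abs().sum(dim=(2, 3)) > 0).sum(dim=1)
    print("feature maps kept per sample:", kept.tolist())
```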

Updated: 2020-11-25