GraphChallenge.org Sparse Deep Neural Network Performance
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-03-25 , DOI: arxiv-2004.01181
Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, Sid Samsi

The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data. Sparse AI analytics present unique scalability difficulties. The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems. The sparse DNN challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment. In 2019 several sparse DNN challenge submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art sparse DNN execution time, $T_{\rm DNN}$, is a strong function of the number of DNN operations performed, $N_{\rm op}$. The sparse DNN challenge provides a clear picture of current sparse DNN systems and underscores the need for new innovations to achieve high performance on very large sparse DNNs.
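The "mathematically well-defined DNN inference computation" at the heart of the challenge can be sketched with sparse matrices. The sketch below is an illustration, not the challenge's reference code: it assumes the common layer form Y_{l+1} = ReLU(Y_l W_l + b), with the bias applied only to stored nonzeros so every layer output stays sparse (the function name and parameters are hypothetical).

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparse_dnn_inference(Y0, weights, bias=0.0):
    """Feedforward inference through a stack of sparse layers.

    Assumes each layer computes Y_{l+1} = ReLU(Y_l @ W_l + bias),
    with Y and W stored as sparse matrices throughout.
    """
    Y = Y0
    for W in weights:
        Y = Y @ W                          # sparse matrix-matrix multiply
        Y.data += bias                     # bias applied to stored nonzeros only
        Y.data = np.maximum(Y.data, 0.0)   # ReLU on the nonzero entries
        Y.eliminate_zeros()                # drop zeroed entries to keep Y sparse
    return Y

# Example: two features through one sparse layer.
Y0 = csr_matrix([[1.0, 0.0], [0.0, 2.0]])
W = csr_matrix([[1.0, -1.0], [0.5, 1.0]])
Y1 = sparse_dnn_inference(Y0, [W])
```

Because each layer is a sparse matrix product, the operation count N_op grows with the number of nonzeros rather than the dense layer dimensions, which is why the paper can relate execution time T_DNN directly to N_op.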

Updated: 2020-04-08