MLPerf Training Benchmark
arXiv - CS - Performance. Pub Date: 2019-10-02, arXiv:1910.01500
Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojyoti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Atsushi Ike, Bill Jia, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Guokai Ma, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Tsuguchika Tabaru, Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, Cliff Young, Matei Zaharia

Machine learning (ML) needs industry-standard performance benchmarks to support design and competitive evaluation of the many emerging software and hardware solutions for ML. But ML training presents three unique benchmarking challenges absent from other domains: optimizations that improve training throughput can increase the time to solution, training is stochastic and time to solution exhibits high variance, and software and hardware systems are so diverse that fair benchmarking with the same binary, code, and even hyperparameters is difficult. We therefore present MLPerf, an ML benchmark that overcomes these challenges. Our analysis quantitatively evaluates MLPerf's efficacy at driving performance and scalability improvements across two rounds of results from multiple vendors.
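Because training is stochastic, a single timed run is a noisy measure of a system's speed; the abstract notes that time to solution exhibits high variance. One way to damp this, along the lines MLPerf uses, is to time several independent runs to the target quality and aggregate with a trimmed mean. The sketch below is illustrative only: `time_to_quality` is a hypothetical stand-in for a real training run, and the exact run count and trimming rule are assumptions, not the benchmark's official scoring.

```python
import random


def time_to_quality(seed: int) -> float:
    # Hypothetical stand-in for a full training run: returns the
    # wall-clock minutes needed to reach the target quality metric.
    # A real benchmark run would train the model end to end.
    rng = random.Random(seed)
    return 100.0 + rng.gauss(0.0, 5.0)  # high run-to-run variance


def benchmark_score(num_runs: int = 5) -> float:
    # Variance-robust aggregation (an assumed scheme): time several
    # independent runs, drop the fastest and slowest, and average
    # the remainder, so one lucky or unlucky seed cannot dominate.
    times = sorted(time_to_quality(seed) for seed in range(num_runs))
    trimmed = times[1:-1]
    return sum(trimmed) / len(trimmed)


if __name__ == "__main__":
    print(f"score: {benchmark_score():.1f} min")
```

The trimmed mean always lies between the fastest and slowest observed runs, so a single outlier run shifts the reported score far less than it would shift a plain average.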

Updated: 2020-03-03