Utilizing ensemble learning for performance and power modeling and improvement of parallel cancer deep learning CANDLE benchmarks
Concurrency and Computation: Practice and Experience (IF 1.5) Pub Date: 2021-07-22, DOI: 10.1002/cpe.6516
Xingfu Wu, Valerie Taylor

Machine learning (ML) continues to grow in importance across nearly all domains as a means of learning models from data. A tradeoff often exists between a model's ability to minimize bias and its ability to minimize variance. In this article, we utilize ensemble learning to combine linear, nonlinear, and tree-/rule-based ML methods to cope with the bias-variance tradeoff and produce more accurate models. We use datasets collected for two parallel cancer deep learning CANDLE benchmarks, NT3 and P1B2, to build performance and power models based on hardware performance counters, using single-object and multiple-objects ensemble learning to identify the counters most important for improvement on the Cray XC40 Theta at Argonne National Laboratory. Based on the insights from these models, we improve the performance and energy of P1B2 and NT3 by optimizing the deep learning environments TensorFlow, Keras, Horovod, and Python with a huge page size of 8 MB. Experimental results show that ensemble learning not only produces more accurate models but also provides more robust performance counter rankings. We achieve up to 61.15% performance improvement and up to 62.58% energy saving for P1B2, and up to 55.81% performance improvement and up to 52.60% energy saving for NT3, on up to 24,576 cores.
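To make the modeling approach concrete, the sketch below illustrates the general idea of combining a linear, a nonlinear, and a tree-based regressor into an ensemble that predicts application runtime from hardware performance counters and then ranks the counters by importance. It is a minimal scikit-learn example under stated assumptions, not the authors' actual pipeline: the file name `nt3_counters.csv`, the target column `runtime`, and the specific base learners and hyperparameters are hypothetical, and the same pattern would apply to a power target or a multi-output performance-and-power model.

```python
# Minimal sketch (not the authors' pipeline): a voting ensemble of linear,
# nonlinear, and tree-based regressors trained on hardware performance
# counters, followed by permutation-based counter ranking.
import pandas as pd
from sklearn.ensemble import VotingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

# Hypothetical dataset: one row per benchmark run, hardware counters as
# features, measured runtime (seconds) as the target.
df = pd.read_csv("nt3_counters.csv")      # assumed file name
X = df.drop(columns=["runtime"])          # performance counter features
y = df["runtime"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Linear, nonlinear (SVR), and tree-based base learners averaged by voting.
ensemble = VotingRegressor([
    ("linear", make_pipeline(StandardScaler(), LinearRegression())),
    ("svr", make_pipeline(StandardScaler(), SVR(C=10.0))),
    ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
])
ensemble.fit(X_train, y_train)
print("R^2 on held-out runs:", ensemble.score(X_test, y_test))

# Rank counters by how much shuffling each one degrades ensemble accuracy.
imp = permutation_importance(ensemble, X_test, y_test,
                             n_repeats=20, random_state=0)
ranking = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:10]:
    print(f"{name}: {score:.4f}")
```

The top-ranked counters from such a ranking are the candidates for targeted optimization, analogous to the way the paper uses counter importance to guide the TensorFlow/Keras/Horovod/Python environment tuning described above.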

Updated: 2021-07-22