Performance Benchmarking of Parallel Hyperparameter Tuning for Deep Learning Based Tornado Predictions
Big Data Research (IF 3.3), Pub Date: 2021-02-11, DOI: 10.1016/j.bdr.2021.100212
Jonathan N. Basalyga , Carlos A. Barajas , Matthias K. Gobbert , Jianwu Wang

Predicting violent storms and dangerous weather conditions with current models can take a long time due to the immense complexity of weather simulation. Machine learning has the potential to classify tornadic weather patterns much more rapidly, allowing for more timely alerts to the public. To deal with the class imbalance challenges in machine learning, different data augmentation approaches have been proposed. In this work, we examine the difference in wall time between live data augmentation and the use of pre-augmented data when training a convolutional neural network for tornado prediction. We also compare CPU- and GPU-based training over varying sizes of augmented data sets. Additionally, we examine the impact that varying the number of GPUs used to train a given convolutional neural network has on wall time and accuracy. We conclude that using multiple GPUs to train a single network offers no significant advantage over using a single GPU. For maximum hyperparameter search throughput, the number of GPUs used during training should be kept as small as possible, since the native Keras multi-GPU model provides little speedup even with optimal learning parameters.
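
As context for the live-versus-pre-augmented comparison, the following minimal sketch contrasts the two pipelines in Keras. The data shapes, augmentation settings, and the build_cnn helper are illustrative placeholders, not the configuration used in the paper. Live augmentation applies transforms on the fly each epoch (extra per-step compute), while pre-augmentation pays the cost once up front and trains on a larger static array.

    import numpy as np
    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Dummy data standing in for the storm patches; shapes are assumptions.
    x_train = np.random.rand(1024, 32, 32, 3).astype("float32")
    y_train = np.random.randint(0, 2, size=(1024,))

    def build_cnn():
        # A small binary CNN classifier; the paper's architecture is not
        # specified in the abstract, so this is purely illustrative.
        model = models.Sequential([
            layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Variant 1: live augmentation -- transforms are applied on the fly,
    # so each epoch sees freshly augmented samples at extra CPU cost.
    live_gen = ImageDataGenerator(rotation_range=15, horizontal_flip=True)
    model = build_cnn()
    model.fit(live_gen.flow(x_train, y_train, batch_size=64), epochs=2)

    # Variant 2: pre-augmented data -- augmented copies are generated once
    # (here: a single horizontally flipped copy) and training reads the
    # enlarged static array, trading memory/disk for per-step compute.
    x_pre = np.concatenate([x_train, x_train[:, :, ::-1, :]])
    y_pre = np.concatenate([y_train, y_train])
    model = build_cnn()
    model.fit(x_pre, y_pre, batch_size=64, epochs=2)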

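The "native Keras multi-GPU model" in the conclusion refers to the data-parallel replication Keras offered via keras.utils.multi_gpu_model, which has since been removed from recent TensorFlow releases. The sketch below reproduces the same data-parallel pattern with tf.distribute.MirroredStrategy; the model and data are again placeholders. The per-step gradient synchronization across replicas is one reason multi-GPU training of a small network can yield little speedup.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Dummy data standing in for the storm patches; shapes are assumptions.
    x_train = np.random.rand(1024, 32, 32, 3).astype("float32")
    y_train = np.random.randint(0, 2, size=(1024,))

    # MirroredStrategy replicates the model onto every visible GPU and
    # splits each batch across the replicas (data parallelism); with a
    # single device it reduces to ordinary single-device training.
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        model = models.Sequential([
            layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])

    # Scaling the global batch size with the replica count keeps the
    # per-GPU batch constant; the cross-replica synchronization each step
    # is the overhead that limits the speedup reported in the abstract.
    model.fit(x_train, y_train,
              batch_size=64 * strategy.num_replicas_in_sync, epochs=2)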



Updated: 2021-02-19