Generating Efficient DNN-Ensembles with Evolutionary Computation
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-09-18, DOI: arXiv-2009.08698
Marc Ortiz, Florian Scheidegger, Marc Casas, Cristiano Malossi, Eduard Ayguadé

In this work, we leverage ensemble learning as a tool for creating faster, smaller, and more accurate deep learning models. We demonstrate that we can jointly optimize for accuracy, inference time, and the number of parameters by combining DNN classifiers. To achieve this, we combine multiple ensemble strategies: bagging, boosting, and an ordered chain of classifiers. To reduce the number of DNN ensemble evaluations during the search, we propose EARN, an evolutionary approach that optimizes the ensemble with respect to three objectives, subject to user-specified constraints. We run EARN on 10 image classification datasets with an initial pool of 32 state-of-the-art DCNNs on both CPU and GPU platforms, and we generate models with speedups of up to $7.60\times$, parameter reductions of up to $10\times$, or accuracy increases of up to $6.01\%$ relative to the best DNN in the pool. In addition, our method generates models that are $5.6\times$ faster than state-of-the-art methods for automatic model generation.
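The core idea of the abstract, evolving a pool of ensembles under Pareto dominance over accuracy, latency, and parameter count, can be sketched as follows. This is a minimal illustrative toy, not the authors' EARN implementation: the candidate pool, the objective values, the mutation operator, and the fitness proxy (best-member accuracy) are all hypothetical stand-ins.

```python
import random

# Hypothetical model pool: (name, accuracy, latency_seconds, params_millions).
# These numbers are illustrative and do not come from the paper.
POOL = [
    ("net_a", 0.91, 0.050, 25.0),
    ("net_b", 0.89, 0.020, 11.0),
    ("net_c", 0.93, 0.120, 60.0),
    ("net_d", 0.87, 0.010, 5.0),
]

def evaluate(ensemble):
    """Toy objectives for an ensemble (a list of pool entries):
    maximize accuracy (crudely proxied by the best member),
    minimize total latency and total parameter count."""
    acc = max(m[1] for m in ensemble)
    latency = sum(m[2] for m in ensemble)
    params = sum(m[3] for m in ensemble)
    return acc, latency, params

def dominates(a, b):
    """Pareto dominance: a is no worse than b in every objective
    and strictly better in at least one. Accuracy is maximized;
    latency and parameter count are minimized."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and better

def mutate(ensemble):
    """Add or drop one random model, keeping at least one member."""
    ensemble = list(ensemble)
    if random.random() < 0.5 and len(ensemble) > 1:
        ensemble.pop(random.randrange(len(ensemble)))
    else:
        ensemble.append(random.choice(POOL))
    return ensemble

def evolve(generations=50, pop_size=8, seed=0):
    """Evolve ensembles, keeping a (truncated) non-dominated front."""
    random.seed(seed)
    population = [[random.choice(POOL)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        combined = population + offspring
        front = [e for e in combined
                 if not any(dominates(evaluate(o), evaluate(e))
                            for o in combined if o is not e)]
        population = front[:pop_size] if front else combined[:pop_size]
    return population
```

In practice, the payoff of such a search is exactly what the abstract measures: instead of evaluating every possible combination of bagging, boosting, and chained classifiers, the evolutionary loop only scores the candidates it breeds, which is what keeps the number of DNN ensemble evaluations tractable.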

Updated: 2020-09-21