Optimization of Convolutional Neural Network Using the Linearly Decreasing Weight Particle Swarm Optimization
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-01-16, DOI: arxiv-2001.05670
T. Serizawa, H. Fujita

Convolutional neural networks (CNNs) are among the most widely used deep learning techniques, and many model variants have been proposed and refined. Training a CNN requires choosing suitable hyperparameters, but their number is large enough that manual tuning is impractical, so much research has focused on automating the search. Methods based on metaheuristic algorithms have attracted particular attention in hyperparameter optimization research. Metaheuristic algorithms are inspired by natural processes and include evolution strategies, genetic algorithms, ant colony optimization, and particle swarm optimization. Particle swarm optimization in particular converges faster than genetic algorithms, and various variants of it have been proposed. In this paper, we propose CNN hyperparameter optimization with linearly decreasing weight particle swarm optimization (LDWPSO). The experiments use the MNIST and CIFAR-10 datasets, which are common benchmarks. We optimize the CNN hyperparameters with LDWPSO, train on MNIST and CIFAR-10, and compare the resulting accuracy with that of a standard baseline CNN based on LeNet-5. On MNIST, the baseline CNN reaches 94.02% accuracy at the 5th epoch versus 98.95% for the LDWPSO CNN, an improvement in accuracy. On CIFAR-10, the baseline CNN reaches 28.07% at the 10th epoch versus 69.37% for the LDWPSO CNN, a large improvement in accuracy.
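The core of LDWPSO is standard particle swarm optimization with an inertia weight that decays linearly from an initial value w_max to a final value w_min over the run, i.e. w(t) = w_max - (w_max - w_min) * t / (T - 1). Below is a minimal Python sketch of that scheme; the swarm parameters (w_max = 0.9, w_min = 0.4, c1 = c2 = 2.0), the function names, and the toy objective are illustrative assumptions, not values or code taken from the paper. In the paper's setting, each particle would encode CNN hyperparameters and the objective would briefly train the network and return its validation error.

import numpy as np

def ldwpso(objective, bounds, n_particles=10, n_iters=30,
           w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize `objective` over box `bounds` (shape (dim, 2)) with
    PSO whose inertia weight decreases linearly (LDWPSO-style sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]     # global best
    for t in range(n_iters):
        # Inertia weight decays linearly from w_max to w_min.
        w = w_max - (w_max - w_min) * t / max(n_iters - 1, 1)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = (w * vel
               + c1 * r1 * (pbest - pos)
               + c2 * r2 * (gbest - pos))
        pos = np.clip(pos + vel, lo, hi)                 # keep in bounds
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

if __name__ == "__main__":
    # Toy check on a sphere function before plugging in a real CNN
    # objective (hypothetical usage, for illustration only).
    bounds = np.array([[-5.0, 5.0]] * 3)
    best, best_val = ldwpso(lambda x: float(np.sum(x ** 2)), bounds)
    print(best, best_val)

For CNN hyperparameter search, a particle's position could encode, for example, a log learning rate and per-layer filter counts (rounded to integers inside the objective); the linear decay favors broad exploration early and finer local refinement late in the run.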

Updated: 2020-09-18