CNN Convolutional layer optimisation based on quantum evolutionary algorithm
Connection Science (IF 3.2) Pub Date: 2020-11-23, DOI: 10.1080/09540091.2020.1841111
Tzyy-Chyang Lu

ABSTRACT

In this paper, a quantum convolutional neural network (CNN) architecture is proposed to find the optimal number of convolutional layers. Since quantum bits represent binary information probabilistically, the quantum CNN does not encode the actual network but the probability that each convolutional layer exists, so that weight training and optimisation of the number of convolutional layers are carried out at the same time. In the simulation part, CIFAR-10 (50k training images and 10k test images in 10 classes) is used to train VGG-19 and 20-layer, 32-layer, 44-layer and 56-layer CNNs, and the optimal and non-optimal convolutional-layer networks are compared. The simulation results show that, without optimisation, the test accuracy drops from approximately 90% to about 80% as the network deepens to 56 layers. With optimisation, however, the CNN maintains a test accuracy above 90%, and the number of network parameters is reduced by nearly half or more. This shows that the proposed method not only mitigates the performance degradation caused by too many hidden convolutional layers, but also greatly reduces the network's demand for computing resources.
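
To make the core idea concrete, the sketch below illustrates the general quantum-evolutionary scheme the abstract describes: each candidate convolutional layer is associated with a qubit angle whose squared sine gives the probability that the layer exists, architectures are sampled by collapsing those probabilities, and a rotation-gate update nudges the qubits toward the best architecture found so far. This is a minimal illustration of a standard quantum evolutionary algorithm, not the authors' implementation; the layer count, step size, and especially the `fitness` function (which would in practice train the sampled CNN on CIFAR-10 and return its test accuracy) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CANDIDATE_LAYERS = 56     # candidate convolutional layers (hypothetical setting)
POP_SIZE = 8                # architectures sampled per generation
GENERATIONS = 30
DELTA_THETA = 0.05 * np.pi  # rotation-gate step size (typical QEA choice)

# Each candidate layer is described by a qubit angle theta:
# P(layer exists) = sin(theta)^2.  Start at pi/4 so every layer is
# equally likely to be kept or dropped.
theta = np.full(N_CANDIDATE_LAYERS, np.pi / 4)

def sample_architecture(theta):
    """Collapse the qubit string into a concrete 0/1 layer-existence mask."""
    p_exist = np.sin(theta) ** 2
    return (rng.random(theta.shape) < p_exist).astype(int)

def fitness(mask):
    """Toy placeholder for 'train the sampled CNN and return test accuracy'.
    Here it simply rewards a moderate depth, purely for illustration."""
    depth = mask.sum()
    return -abs(depth - 20) / 20.0

best_mask, best_fit = None, -np.inf
for gen in range(GENERATIONS):
    population = [sample_architecture(theta) for _ in range(POP_SIZE)]
    scores = [fitness(m) for m in population]
    gen_best = population[int(np.argmax(scores))]
    if max(scores) > best_fit:
        best_fit, best_mask = max(scores), gen_best.copy()
    # Quantum rotation-gate update: rotate each qubit toward the bit value
    # of the best architecture observed so far.
    direction = np.where(best_mask == 1, +1.0, -1.0)
    theta = np.clip(theta + direction * DELTA_THETA, 0.0, np.pi / 2)

print("best depth:", best_mask.sum())
print("layer existence probabilities (first 8):", np.round(np.sin(theta) ** 2, 2)[:8])
```

Because the qubit angles only encode existence probabilities, the sampled masks can be applied to a shared set of trainable convolutional layers, which is what allows weight training and layer-count optimisation to proceed together rather than in separate search and retraining phases.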


