Parameter Distribution Balanced CNNs.
IEEE Transactions on Neural Networks and Learning Systems ( IF 10.4 ) Pub Date : 2020-01-15 , DOI: 10.1109/tnnls.2019.2956390
Lixin Liao , Yao Zhao , Shikui Wei , Yunchao Wei , Jingdong Wang

Convolutional neural networks (CNNs) are the primary technique that has greatly promoted the development of computer vision. However, there is little research on how to allocate parameters across different convolution layers when designing CNNs. Our work mainly focuses on revealing the relationship between the CNN parameter distribution, i.e., the allocation of parameters across convolution layers, and the discriminative performance of the CNN. Unlike previous works, we do not add more elements to the network, such as additional convolution layers or denser shortcut connections. Instead, we focus on enhancing the discriminative performance of a CNN by varying its parameter distribution under a strict size constraint. We propose an energy function to represent the CNN parameter distribution, which establishes a connection between the allocation of parameters and the discriminative performance of the CNN. Extensive experiments with shallow CNNs on three public image classification data sets demonstrate that a CNN parameter distribution with a higher energy value leads the model to better performance. Based on this observation, the problem of finding the optimal parameter distribution can be transformed into the optimization problem of finding the largest energy value. We present a simple yet effective guideline that uses a balanced parameter distribution to design CNNs. Extensive experiments on ImageNet with three popular backbones, i.e., AlexNet, ResNet34, and ResNet101, demonstrate that the proposed guideline yields consistent improvements over different baselines under a strict size constraint.
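The abstract does not give the exact form of the proposed energy function, so the sketch below uses the Shannon entropy of the normalized per-layer parameter fractions as an illustrative stand-in: an entropy-style energy is maximized when parameters are spread evenly across layers, matching the paper's "balanced parameter distribution" guideline. The function names (`layer_param_counts`, `distribution_energy`) and the entropy choice are assumptions for illustration, not the authors' actual definition.

```python
import math

def layer_param_counts(layers):
    """Per-convolution-layer parameter counts (weights only, bias ignored).
    Each layer is given as (in_channels, out_channels, kernel_size)."""
    return [cin * cout * k * k for (cin, cout, k) in layers]

def distribution_energy(counts):
    """A hypothetical 'energy' of a parameter distribution, modeled here as
    the Shannon entropy of the per-layer parameter fractions. Under a fixed
    total budget, a more even split across layers gives a higher value."""
    total = sum(counts)
    probs = [c / total for c in counts]
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two allocations with the same 100k-parameter budget:
skewed   = [1_000, 99_000]   # almost all parameters in one layer
balanced = [50_000, 50_000]  # evenly split

# The balanced allocation has the higher energy, so under this stand-in
# energy it would be the preferred design.
assert distribution_energy(balanced) > distribution_energy(skewed)
```

With such an energy in hand, searching for the best allocation under a strict size constraint reduces to maximizing the energy over allocations with a fixed parameter total, which is the optimization view the abstract describes.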

Updated: 2020-01-15