Multipath feature recalibration DenseNet for image classification
International Journal of Machine Learning and Cybernetics (IF 3.1), Pub Date: 2020-09-05, DOI: 10.1007/s13042-020-01194-4
Bolin Chen, Tiesong Zhao, Jiahui Liu, Liqun Lin

Recently, deep neural networks have demonstrated their efficiency in image classification tasks, an efficiency commonly achieved by extending the depth and width of the network architecture. However, such large architectures can suffer from poor convergence, over-fitting and vanishing gradients. DenseNet was developed to address these problems. Although DenseNet adopts a bottleneck technique in its DenseBlocks to avoid relearning feature maps and to reduce parameters, this operation may skip or lose important features. Moreover, it still requires excessive computational power when the depth and width of the architecture are increased for better classification. In this paper, we propose a variant of DenseNet, named Multipath Feature Recalibration DenseNet (MFR-DenseNet), which stacks convolution layers instead of adopting bottlenecks to improve feature extraction. Meanwhile, we build multipath DenseBlocks with Squeeze-Excitation (SE) modules to represent the interdependencies of useful feature maps across different DenseBlocks. Experiments on CIFAR-10, CIFAR-100, MNIST and SVHN demonstrate the efficiency of our network, which further reduces redundancy while maintaining the high accuracy of DenseNet.
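The channel recalibration that an SE module performs — squeeze via global average pooling, excitation via two small fully connected layers, then per-channel rescaling — can be sketched in plain Python. This is a minimal illustration of the general SE mechanism, not the authors' MFR-DenseNet implementation; the function name `squeeze_excite` and the tiny weight matrices are hypothetical, and a real SE block learns its weights end to end.

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Recalibrate feature maps channel-wise, SE-style (illustrative sketch).

    feature_maps: list of C channels, each an HxW list of lists.
    w1: reduction weights, shape (C/r, C); w2: expansion weights, shape (C, C/r).
    """
    # Squeeze: global average pooling per channel -> descriptor z of length C.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid gives one gate per channel.
    hidden = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    scale = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Recalibrate: multiply every value in a channel by that channel's gate.
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(feature_maps, scale)]
```

Because each gate passes through a sigmoid, every channel is scaled by a factor in (0, 1), letting the network emphasize informative feature maps and suppress redundant ones — the interdependency modeling the abstract attributes to the SE modules.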




Updated: 2020-09-06