SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
arXiv - CS - Machine Learning. Pub Date: 2020-03-16, DOI: arxiv-2003.07469
Jiaxiong Qiu, Cai Chen, Shuaicheng Liu, Bing Zeng

The channel redundancy in the feature maps of convolutional neural networks (CNNs) results in large memory and computational costs. In this work, we design a novel Slim Convolution (SlimConv) module that boosts the performance of CNNs by reducing channel redundancy. SlimConv consists of three main steps: Reconstruct, Transform, and Fuse, through which the features are split and reorganized more efficiently, so that the learned weights can be compressed effectively. In particular, the core of our model is a weight-flipping operation that greatly improves feature diversity and is crucial to performance. SlimConv is a plug-and-play architectural unit that can directly replace convolutional layers in CNNs. We validate the effectiveness of SlimConv with comprehensive experiments on the ImageNet, MS COCO2014, Pascal VOC2012 segmentation, and Pascal VOC2007 detection datasets. The experiments show that SlimConv-equipped models consistently achieve better performance while consuming less memory and computation than their unmodified counterparts. For example, ResNet-101 fitted with SlimConv achieves 77.84% top-1 classification accuracy on ImageNet with 4.87 GFLOPs and 27.96M parameters: almost 0.5% higher accuracy with roughly 3 fewer GFLOPs and 38% fewer parameters.
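
Since the abstract only names the three steps, a short sketch may help make the pipeline concrete. Below is a minimal PyTorch sketch of a SlimConv-style module: an SE-style gate produces per-channel weights, a second path reuses those weights flipped along the channel dimension (the weight-flipping idea), each path is folded in half to compress channels, and the two transformed paths are fused by concatenation. The class name SlimConvSketch, the gating design, and all layer sizes are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn

class SlimConvSketch(nn.Module):
    """Hypothetical SlimConv-style module: Reconstruct -> Transform -> Fuse."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        assert channels % 4 == 0, "channels must be divisible by 4"
        # SE-style channel gate producing the weights w (an assumption;
        # the paper only states that channel weights are flipped).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        half = channels // 2
        # Transform: cheap convolutions on the two compressed paths.
        self.conv_top = nn.Conv2d(half, half, kernel_size=3, padding=1)
        self.conv_bot = nn.Conv2d(half, half // 2, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)                  # (N, C, 1, 1) per-channel weights
        w_flip = torch.flip(w, dims=[1])  # weight flipping: reversed channel order
        top, bot = x * w, x * w_flip      # Reconstruct: two differently weighted views
        t1, t2 = torch.chunk(top, 2, dim=1)
        b1, b2 = torch.chunk(bot, 2, dim=1)
        top, bot = t1 + t2, b1 + b2       # fold halves: C -> C/2 channels per path
        # Transform each path, then Fuse by concatenation (3C/4 output channels).
        return torch.cat([self.conv_top(top), self.conv_bot(bot)], dim=1)

# Drop-in usage: replace a convolutional block with the module above.
x = torch.randn(2, 64, 32, 32)
y = SlimConvSketch(channels=64)(x)
print(y.shape)  # torch.Size([2, 48, 32, 32]) -- channels reduced from 64 to 48

Because the output has fewer channels than the input, a model that swaps standard convolutions for such a module trades a small architectural change for the parameter and FLOP reductions reported in the abstract.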

Updated: 2020-03-18