ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8). Pub Date: 2020-02-25, DOI: 10.1109/tpami.2020.2975796
Hongyang Gao, Zhengyang Wang, Lei Cai, Shuiwang Ji

Convolutional neural networks (CNNs) have shown great capability in solving various artificial intelligence tasks. However, increasing model sizes have raised challenges in employing them in resource-limited applications. In this work, we propose to compress deep models by using channel-wise convolutions, which replace dense connections among feature maps with sparse ones in CNNs. Based on this novel operation, we build lightweight CNNs known as ChannelNets. ChannelNets use three instances of channel-wise convolutions: group channel-wise convolutions, depth-wise separable channel-wise convolutions, and the convolutional classification layer. Compared to prior CNNs designed for mobile devices, ChannelNets achieve a significant reduction in the number of parameters and computational cost without loss of accuracy. Notably, our work represents an attempt to compress the fully-connected classification layer, which usually accounts for about 25 percent of total parameters in compact CNNs. Along this new direction, we investigate the behavior of our proposed convolutional classification layer and conduct a detailed analysis. Based on our in-depth analysis, we further propose convolutional classification layers without weight-sharing. This new classification layer achieves a good trade-off between fully-connected classification layers and the convolutional classification layer. Experimental results on the ImageNet dataset demonstrate that ChannelNets achieve consistently better performance compared to prior methods.
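As a minimal sketch of the core idea (not the authors' implementation), a channel-wise convolution slides a small shared 1-D kernel across the channel dimension instead of densely connecting every input channel to every output channel, so its parameter count depends only on the kernel size `k` rather than on the product of input and output channel counts. The function name and "same"-padding choice below are illustrative assumptions:

```python
import numpy as np

def channel_wise_conv(x, kernel):
    """Illustrative channel-wise convolution.

    x      : array of shape (channels, height, width)
    kernel : 1-D weight vector of length k, shared across all spatial
             positions, sliding over the channel dimension with zero
             ("same") padding so the output keeps the input shape.
    """
    c, h, w = x.shape
    k = len(kernel)
    pad = k // 2
    # Zero-pad only the channel axis.
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(x, dtype=float)
    for i in range(c):          # each output channel
        for j in range(k):      # weighted window of neighboring channels
            out[i] += kernel[j] * xp[i + j]
    return out

# A dense 1x1 convolution mapping c channels to c channels needs c*c
# weights; the channel-wise version above needs only k, regardless of c.
x = np.ones((4, 2, 2))
out = channel_wise_conv(x, np.array([1.0, 1.0, 1.0]))
```

With an all-ones input and an all-ones kernel of size 3, interior output channels sum three neighboring channels while boundary channels see one zero-padded neighbor, which makes the sparse connectivity pattern easy to inspect.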

Updated: 2020-02-25