Merging Similar Neurons for Deep Networks Compression
Cognitive Computation (IF 5.4) Pub Date: 2020-01-16, DOI: 10.1007/s12559-019-09703-6
Guoqiang Zhong, Wenxue Liu, Hui Yao, Tao Li, Jinxuan Sun, Xiang Liu

Deep neural networks have achieved outstanding progress in many fields, such as computer vision, speech recognition, and natural language processing. However, large deep neural networks often require huge storage space and long training time, making them difficult to deploy on resource-restricted devices. In this paper, we propose a method for compressing the structure of deep neural networks. Specifically, we apply clustering analysis to find similar neurons in each layer of the original network, and then merge them together with their corresponding connections. After compression, the number of parameters in the deep neural network is significantly reduced, and the required storage space and computation time are greatly reduced as well. We test our method on a deep belief network (DBN) and two convolutional neural networks. The experimental results demonstrate that the proposed method can greatly reduce the number of parameters of deep networks while preserving their classification accuracy. In particular, on the CIFAR-10 dataset, we compress VGGNet with a compression ratio of 92.96%, and the final model after fine-tuning achieves even higher accuracy than the original model.
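
As a rough illustration of the per-layer merging step described above, the sketch below clusters the neurons of one fully connected layer by their incoming weight vectors with k-means, then collapses each cluster into a single neuron. The feature choice, the averaging of incoming weights, the summation of outgoing weights, and the function merge_similar_neurons are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of merging similar neurons in one fully connected layer.
# Assumptions (not from the paper): neurons are compared by their incoming
# weights plus bias, merged incoming weights are cluster centroids, and
# outgoing weights of merged neurons are summed.
import numpy as np
from sklearn.cluster import KMeans

def merge_similar_neurons(W_in, b, W_out, k):
    """Merge the n neurons of a hidden layer down to k clusters.

    W_in : (n, d)  incoming weights, one row per neuron
    b    : (n,)    biases of the layer
    W_out: (m, n)  outgoing weights, one column per neuron
    k    : target number of neurons after merging
    """
    # Cluster neurons by their incoming weight vectors (and bias), so that
    # neurons computing similar functions fall into the same cluster.
    features = np.hstack([W_in, b[:, None]])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)

    W_in_new = np.zeros((k, W_in.shape[1]))
    b_new = np.zeros(k)
    W_out_new = np.zeros((W_out.shape[0], k))
    for c in range(k):
        members = np.where(labels == c)[0]
        # Merged neuron: centroid of the cluster's incoming weights/biases.
        W_in_new[c] = W_in[members].mean(axis=0)
        b_new[c] = b[members].mean()
        # Summing the outgoing connections keeps the next layer's
        # pre-activations approximately unchanged when the merged
        # neurons produce similar activations.
        W_out_new[:, c] = W_out[:, members].sum(axis=1)
    return W_in_new, b_new, W_out_new
```

Summing the outgoing weights preserves the next layer's inputs exactly when the merged neurons have identical activations, and approximately otherwise; applying this layer by layer and then fine-tuning the compressed network matches the pipeline the abstract reports for VGGNet on CIFAR-10.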
