Exploiting the Relationship between Pruning Ratio and Compression Effect for Neural Network Model Based on TensorFlow
Security and Communication Networks Pub Date : 2020-04-30 , DOI: 10.1155/2020/5218612
Bo Liu 1 , Qilin Wu 1 , Yiwen Zhang 2 , Qian Cao 1
Pruning is a method of compressing a neural network model by removing weights, which can affect the model's prediction accuracy and computation time. This paper puts forward the hypothesis that the pruning proportion is positively correlated with the compression scale of the model but uncorrelated with prediction accuracy and computation time. To test this hypothesis, a group of experiments is designed: a neural network model is trained on the MNIST data set using TensorFlow, and pruning experiments are then carried out on this model to investigate the relationship between the pruning proportion and the compression effect. Six different pruning proportions are compared, and the experimental results confirm the hypothesis.
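The paper does not publish its pruning code, so the following is only an illustrative sketch of one common technique consistent with the abstract: magnitude-based pruning, where a chosen fraction (the pruning proportion) of the smallest-magnitude weights is zeroed. It is written in plain NumPy rather than TensorFlow to keep it self-contained; the function name and example values are assumptions, not the authors' implementation.

```python
import numpy as np

def prune_by_magnitude(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of the weights.

    A generic magnitude-pruning sketch (an assumption, not the paper's
    exact method). Zeroed weights make the tensor sparse, which is what
    allows the stored model to be compressed.
    """
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; keep strictly larger ones.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a small weight matrix.
w = np.array([[0.10, -0.80],
              [0.05,  1.20]])
pruned = prune_by_magnitude(w, 0.5)
# The two smallest-magnitude entries (0.10 and 0.05) are zeroed;
# the larger weights -0.80 and 1.20 survive.
```

In an experiment like the one described, this function would be applied to each weight tensor at several pruning proportions (the paper compares six), after which the model size, prediction accuracy, and inference time are measured.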
