Tensor decomposition to compress convolutional layers in deep learning
IISE Transactions (IF 2.6). Pub Date: 2021-04-16. DOI: 10.1080/24725854.2021.1894514
Yinan Wang, Weihong “Grace” Guo, Xiaowei Yue

Abstract

Feature extraction for tensor data serves as an important step in many tasks such as anomaly detection, process monitoring, image classification, and quality control. Although many methods have been proposed for tensor feature extraction, two challenges still need to be addressed: (i) how to reduce the computational cost for high-dimensional, large-volume tensor data; and (ii) how to interpret the output features and evaluate their significance. The most recent methods in deep learning, such as the Convolutional Neural Network, have shown outstanding performance in analyzing tensor data, but their wide adoption is still hindered by model complexity and lack of interpretability. To fill this research gap, we propose to use CP-decomposition to approximately compress the convolutional layer (CPAC-Conv layer) in deep learning. The contributions of our work include three aspects: (i) we adapt CP-decomposition to compress convolutional kernels and derive the expressions of forward and backward propagation for the proposed CPAC-Conv layer; (ii) compared with the original convolutional layer, the proposed CPAC-Conv layer reduces the number of parameters without degrading prediction performance, and it can be combined with other layers to build novel Deep Neural Networks; (iii) the values of the decomposed kernels indicate the significance of the corresponding feature maps, which provides insights to guide feature selection.
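To make the compression idea concrete, the following is a minimal NumPy sketch (not the authors' implementation) of how a rank-R CP decomposition represents a 4-D convolutional kernel as four small factor matrices. The kernel shape, rank, and variable names are illustrative assumptions chosen for the example, not values from the paper.

```python
import numpy as np

# Hypothetical conv-layer kernel tensor K of shape (T, S, D, D):
# T output channels, S input channels, D x D spatial window.
T, S, D = 64, 32, 3
rank = 8  # CP rank R; an illustrative choice, not from the paper

# A rank-R CP decomposition approximates K as a sum of R rank-one tensors:
#   K[t, s, i, j] ~ sum_r A[t, r] * B[s, r] * C[i, r] * E[j, r]
rng = np.random.default_rng(0)
A = rng.standard_normal((T, rank))     # output-channel factor
B = rng.standard_normal((S, rank))     # input-channel factor
C = rng.standard_normal((D, rank))     # vertical spatial factor
E = rng.standard_normal((D, rank))     # horizontal spatial factor

# Reconstruct the full kernel from the factors by summing over the
# shared rank index r (einsum expresses the sum of rank-one tensors).
K_approx = np.einsum("tr,sr,ir,jr->tsij", A, B, C, E)
assert K_approx.shape == (T, S, D, D)

# Parameter comparison: the full kernel stores T*S*D*D values, while the
# CP factors store only R*(T + S + D + D) values.
full_params = T * S * D * D           # 64 * 32 * 3 * 3 = 18432
cp_params = rank * (T + S + D + D)    # 8 * (64 + 32 + 3 + 3) = 816
print(full_params, cp_params)
```

In practice the factored kernel is not reconstructed; the convolution is instead evaluated as a sequence of smaller convolutions against the individual factors, which is where the computational savings come from.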




Updated: 2021-04-16