Sparse Deep Neural Network Optimization for Embedded Intelligence
International Journal on Artificial Intelligence Tools (IF 1.0) Pub Date: 2020-06-17, DOI: 10.1142/s0218213020600027
Jia Bi, Steve R. Gunn

Deep neural networks have become increasingly popular owing to their ability to solve very complex pattern recognition problems. However, they often demand massive computational and memory resources, which is the main reason they are difficult to run efficiently, or at all, on embedded platforms. This work addresses the problem by reducing the computational and memory requirements of deep neural networks: it proposes a variance-reduced (VR) optimization method combined with regularization techniques that compress the memory footprint of models while keeping training fast. It is shown theoretically and experimentally that sparsity-inducing regularization works effectively with VR-based optimization, in which a hyper-parameter controls the behavior of the stochastic element of the optimizer so that non-convex problems can be solved.
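To make the idea concrete: combining a variance-reduced stochastic optimizer with a sparsity-inducing (L1) regularizer is commonly done with a proximal step, where the L1 term is handled by soft-thresholding after each variance-reduced gradient update. The sketch below is an illustration of this general prox-SVRG pattern only, not the authors' exact algorithm; the function names (`prox_svrg`, `soft_threshold`) and the toy sparse-regression setup are assumptions for demonstration.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the L1 norm: shrinks weights toward zero,
    # setting small ones exactly to zero (this is what induces sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_svrg(grad_i, w0, n, lr=0.01, lam=0.1, epochs=30, inner=None, seed=0):
    """Proximal SVRG sketch: a variance-reduced stochastic gradient step
    followed by an L1 proximal (soft-thresholding) step.

    grad_i(w, i) returns the gradient of the i-th sample's loss at w;
    n is the number of samples; lam is the L1 regularization weight.
    """
    rng = np.random.default_rng(seed)
    m = inner or n
    w = w0.copy()
    for _ in range(epochs):
        # Snapshot point and its full gradient, recomputed once per epoch.
        w_snap = w.copy()
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced gradient estimate: the stochastic noise of
            # grad_i(w, i) is partially cancelled by the snapshot terms.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            # Gradient step on the smooth loss, proximal step on the L1 term.
            w = soft_threshold(w - lr * g, lr * lam)
    return w

# Toy example: lasso-style sparse linear regression. The recovered weight
# vector should be sparse, matching the three nonzero true coefficients.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
b = A @ w_true + 0.01 * rng.normal(size=50)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]  # per-sample squared-loss gradient
w_hat = prox_svrg(grad_i, np.zeros(10), n=50)
```

In a deep-learning setting the per-sample gradient would come from backpropagation and the soft-thresholding would be applied layer-wise to the weight tensors, but the structure of the update (variance-reduced gradient, then proximal shrinkage) is the same.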
