Structured Bayesian Compression for Deep Models in Mobile-Enabled Devices for Connected Healthcare
IEEE NETWORK (IF 9.3) Pub Date: 2019-09-04, DOI: 10.1109/mnet.001.1900204
Sijia Chen, Bin Song, Xiaojiang Du, Nadra Guizani

Deep models, typically deep neural networks, have millions of parameters and analyze medical data accurately, yet in a time-consuming manner. However, energy cost-effectiveness and computational efficiency are important prerequisites for developing and deploying mobile-enabled devices, the mainstream trend in connected healthcare. Therefore, compression of deep models has become a problem of great significance for real-time health services. In this article, we first emphasize the use of Bayesian learning for model sparsity, effectively reducing the number of parameters while maintaining model performance. Specifically, with sparsity-inducing priors, large parts of the network can be pruned with simple retraining on arbitrary datasets. Then, we propose a novel structured Bayesian compression architecture that adaptively learns both group sparsity and block sparsity, while also designing sparsity-oriented mixture priors to improve the expandability of the compression model. Experimental results on both a simulated dataset (MNIST) and a practical medical dataset (Histopathologic Cancer) demonstrate the effectiveness and good performance of our framework for deep model compression.
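The abstract does not specify the authors' architecture, so the following is only a minimal sketch of the general idea of group-sparse Bayesian pruning: each output neuron of a fully connected layer (a "group" of weights) gets a learned multiplicative noise scale, a KL-style regularizer pushes uninformative groups toward high noise, and those groups are pruned after training. The layer sizes, regularizer constants, pruning threshold, and random stand-in data below are illustrative assumptions, not values from the paper.

```python
# Sketch of group-sparse Bayesian pruning (not the authors' exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupBayesianLinear(nn.Module):
    """Linear layer with one multiplicative Gaussian noise scale per output group."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One log-alpha (noise-to-signal ratio) per output neuron / weight group.
        self.log_alpha = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        mean = F.linear(x, self.weight, self.bias)
        if self.training:
            # Local reparameterization: sample noisy activations, not weights.
            var = F.linear(x ** 2, self.weight ** 2) * self.log_alpha.exp()
            return mean + var.clamp(min=1e-8).sqrt() * torch.randn_like(mean)
        return mean

    def kl(self):
        # Polynomial approximation of the KL to a log-uniform prior
        # (constants from the variational-dropout literature, used here as a
        # stand-in sparsity regularizer).
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        a = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * a) - 0.5 * F.softplus(-a) - k1
        return -neg_kl.sum()

    def prune_mask(self, threshold=3.0):
        # Groups whose noise dominates the signal are effectively pruned.
        return self.log_alpha < threshold


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = GroupBayesianLinear(784, 256)
    head = nn.Linear(256, 10)
    opt = torch.optim.Adam(list(layer.parameters()) + list(head.parameters()), lr=1e-3)

    # Toy training loop on random data standing in for MNIST-like inputs.
    for step in range(200):
        x = torch.randn(64, 784)
        y = torch.randint(0, 10, (64,))
        logits = head(F.relu(layer(x)))
        loss = F.cross_entropy(logits, y) + 1e-4 * layer.kl()
        opt.zero_grad()
        loss.backward()
        opt.step()

    kept = layer.prune_mask().sum().item()
    print(f"groups kept after pruning: {kept}/256")
```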

Updated: 2020-04-22