Collaborative Intelligence: Accelerating Deep Neural Network Inference via Device-Edge Synergy
Security and Communication Networks (IF 1.968), Pub Date: 2020-09-07, DOI: 10.1155/2020/8831341
Nanliang Shan, Zecong Ye, Xiaolong Cui

With the development of mobile edge computing (MEC), more and more intelligent services and applications based on deep neural networks are deployed on mobile devices to meet the diverse and personalized needs of users. Unfortunately, deploying deep learning models and running inference on resource-constrained devices is challenging. The traditional cloud-based approach runs the deep learning model on a cloud server, but transmitting large amounts of input data to the server over the WAN incurs high service latency, which is unacceptable for most current latency-sensitive and computation-intensive applications. In this paper, we propose Cogent, an execution framework that accelerates deep neural network inference through device-edge synergy. Cogent operates in two stages: an automatic pruning and partition stage and a containerized deployment stage. It uses reinforcement learning (RL) to automatically predict pruning and partition strategies from feedback on the hardware configuration and system conditions, so that the pruned and partitioned model better adapts to the system environment and the user's hardware. The resulting model components are then deployed in containers on the device and the edge server to accelerate inference. Experiments show that this learning-based, hardware-aware automatic pruning and partition scheme significantly reduces service latency and accelerates the overall inference process while maintaining accuracy, achieving speedups of up to 8.89× with an accuracy loss of no more than 7%.
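The abstract gives no implementation details, but the partition stage it describes can be illustrated with a minimal sketch: split a network at a candidate layer, run the head on the device, and ship the intermediate activation to the edge server, which runs the tail. In the snippet below, the model (a stock torchvision VGG-16), the split index, and the input size are all illustrative assumptions standing in for the split point an RL agent would choose; this is not the authors' released code.

```python
# Minimal sketch of device-edge partitioned inference (assumed setup,
# not the Cogent implementation). The split index stands in for the
# partition point that the RL agent would predict.
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()
layers = (list(model.features)
          + [model.avgpool, torch.nn.Flatten()]
          + list(model.classifier))

split = 10  # hypothetical partition point chosen by the RL agent
head = torch.nn.Sequential(*layers[:split])   # runs on the mobile device
tail = torch.nn.Sequential(*layers[split:])   # runs on the edge server

x = torch.randn(1, 3, 224, 224)               # stand-in input image
with torch.no_grad():
    activation = head(x)                      # on-device computation
    # In a real deployment the activation would be serialized and sent
    # over the network; its byte size drives the transmission latency
    # that the partition strategy trades off against compute time.
    payload_bytes = activation.numel() * activation.element_size()
    output = tail(activation)                 # edge-server computation

print(f"intermediate payload: {payload_bytes / 1024:.1f} KiB")
print(f"output shape: {tuple(output.shape)}")
```

Moving the split earlier shrinks on-device compute but usually enlarges the transmitted activation, which is why the choice depends on the hardware configuration and network conditions that the abstract says the RL agent observes.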

Updated: 2020-09-08