CoEdge: Cooperative DNN Inference With Adaptive Workload Partitioning Over Heterogeneous Edge Devices
IEEE/ACM Transactions on Networking (IF 3.7), Pub Date: 2020-12-16, DOI: 10.1109/tnet.2020.3042320
Liekang Zeng, Xu Chen, Zhi Zhou, Lei Yang, Junshan Zhang

Recent advances in artificial intelligence have driven a growing number of intelligent applications at the network edge, such as smart homes, smart factories, and smart cities. To deploy computationally intensive Deep Neural Networks (DNNs) on resource-constrained edge devices, traditional approaches rely on either offloading the workload to the remote cloud or optimizing computation locally at the end device. However, cloud-assisted approaches suffer from the unreliable, high-latency wide-area network, while local computing approaches are limited by constrained computing capability. Towards high-performance edge intelligence, the cooperative execution mechanism offers a new paradigm that has attracted growing research interest. In this paper, we propose CoEdge, a distributed DNN computing system that orchestrates cooperative DNN inference over heterogeneous edge devices. CoEdge utilizes the computation and communication resources available at the edge and dynamically partitions the DNN inference workload according to devices' computing capabilities and network conditions. Experimental evaluations on a realistic prototype show that CoEdge outperforms status-quo approaches in energy saving with comparable inference latency, achieving 25.5%–66.9% energy reduction across four widely adopted CNN models.
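The core idea of capability-proportional partitioning can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function name and the assumption that workload is split by input rows (as is common for convolutional layers) are illustrative only. The sketch assigns each device a share of rows proportional to its measured processing rate, so that per-device compute time is roughly balanced.

```python
# Illustrative sketch (NOT CoEdge's actual partitioning scheme): split the
# input rows of a layer across heterogeneous devices in proportion to each
# device's effective processing rate, balancing per-device compute time.

def partition_rows(total_rows, compute_rates):
    """Split `total_rows` among devices proportionally to `compute_rates`
    (e.g., rows per second each device can process). Returns a list of row
    counts, one per device, guaranteed to sum to `total_rows`."""
    total_rate = sum(compute_rates)
    shares = [total_rows * r / total_rate for r in compute_rates]
    rows = [int(s) for s in shares]  # floor each share
    # Hand out rows lost to rounding, largest fractional remainder first.
    leftover = total_rows - sum(rows)
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - rows[i], reverse=True)
    for i in by_remainder[:leftover]:
        rows[i] += 1
    return rows

# Example: a 224-row input over three devices with a 4:2:1 speed ratio.
print(partition_rows(224, [4.0, 2.0, 1.0]))
```

A real system would also fold network conditions into the rates (e.g., discounting a fast device behind a slow link) and re-run the partition whenever measured capabilities change, which is the adaptive behavior the abstract describes.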

Updated: 2020-12-16