Performance analysis of local exit for distributed deep neural networks over cloud and edge computing
ETRI Journal (IF 1.4), Pub Date: 2020-10-18, DOI: 10.4218/etrij.2020-0112
Changsik Lee, Seungwoo Hong, Sungback Hong, Taeyeon Kim

In edge computing, most procedures, including data collection, data processing, and service provision, are handled at edge nodes rather than in the central cloud. This decreases the processing burden on the central cloud and reduces bandwidth consumption, enabling fast responses to end-device service requests. However, edge nodes have limited computing, storage, and energy resources, which makes it difficult to support computation-intensive tasks such as deep neural network (DNN) inference. In this study, we analyze the effect of models with single and multiple local exits on DNN inference in an edge-computing environment. Our test results show that a single-exit model outperforms a multi-exit model at every exit point in terms of the number of locally exited samples, inference accuracy, and inference latency. These results signify that higher accuracy can be achieved with less computation when a single-exit model is adopted. In an edge-computing infrastructure, it is therefore more efficient to adopt a DNN model with only one or a few exit points to provide a fast and reliable inference service.
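To illustrate the local-exit idea discussed in the abstract, the following is a minimal sketch of confidence-based early exiting at an edge node, written in PyTorch. The class and function names (EdgeHeadWithExit, normalized_entropy, cloud_offload), the network layers, and the entropy threshold are illustrative assumptions, not the authors' actual model or code.

```python
# Minimal sketch of a local (early) exit for edge/cloud DNN inference.
# Assumes PyTorch; all names, layers, and thresholds here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeHeadWithExit(nn.Module):
    """Front portion of a DNN deployed on the edge node, plus one local exit."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(            # early convolutional layers
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.exit_head = nn.Sequential(           # local-exit classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        feats = self.features(x)
        return feats, self.exit_head(feats)


def normalized_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Prediction uncertainty in [0, 1]; low entropy -> confident local exit."""
    p = F.softmax(logits, dim=-1)
    ent = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
    return ent / torch.log(torch.tensor(float(logits.shape[-1])))


def infer(x, edge_model, cloud_offload, threshold: float = 0.3):
    """Exit at the edge if confident enough; otherwise offload features to the cloud."""
    with torch.no_grad():
        feats, logits = edge_model(x)
        if normalized_entropy(logits).item() < threshold:
            return logits.argmax(dim=-1), "edge"   # local exit on the edge node
        return cloud_offload(feats), "cloud"       # remaining layers run in the cloud
```

In this sketch, samples whose early-exit prediction is sufficiently certain never leave the edge node, which is what reduces both inference latency and bandwidth consumption; only uncertain samples pay the cost of offloading intermediate features to the cloud.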

Updated: 2020-11-18