Distributed Deep Convolutional Neural Networks for the Internet-of-Things
IEEE Transactions on Computers (IF 3.7), Pub Date: 2021-02-25, DOI: 10.1109/tc.2021.3062227
Simone Disabato, Manuel Roveri, Cesare Alippi

The severe constraints on memory and computation characterizing Internet-of-Things (IoT) units may prevent the execution of Deep Learning (DL)-based solutions, which typically demand large memory and a high processing load. To support real-time execution of the considered DL model at the IoT unit level, DL solutions must be designed with the memory and processing constraints exposed by the chosen IoT technology in mind. In this article, we introduce a design methodology for allocating the execution of Convolutional Neural Networks (CNNs) across a distributed IoT application. The methodology is formalized as an optimization problem that minimizes the latency between the data-gathering phase and the subsequent decision-making one, within the given constraints on memory and processing load at the unit level. The methodology supports multiple data sources as well as multiple CNNs executing on the same IoT system, enabling the design of CNN-based applications demanding autonomy, low decision latency, and high Quality-of-Service.
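The kind of placement problem the abstract describes can be pictured with a small, purely illustrative sketch: the Python below brute-forces an assignment of CNN layers to a sensor-gateway-edge chain so that processing plus transfer latency is minimized while each unit's memory budget is respected. All layer sizes, unit capacities, the single-hop additive cost model, and the monotone-placement rule are hypothetical assumptions for illustration only; they are not the formulation used in the paper.

# Illustrative sketch only: a toy version of allocating CNN layers to IoT
# units as a constrained latency-minimization problem. Numbers, cost model,
# and the brute-force solver are hypothetical, not taken from the paper.
from itertools import product

RAW_INPUT_KB = 150  # hypothetical size of one sensed frame, held by the sensor node

# Hypothetical per-layer requirements: memory (KB), compute (MFLOPs), output size (KB).
layers = [
    {"mem": 120, "flops": 40, "out": 64},   # conv1
    {"mem": 260, "flops": 90, "out": 32},   # conv2
    {"mem": 180, "flops": 70, "out": 16},   # conv3
    {"mem": 90,  "flops": 20, "out": 1},    # classifier
]

# Hypothetical IoT units: memory budget (KB), compute rate (MFLOPs/s),
# and link bandwidth toward the next hop (KB/s).
units = [
    {"mem": 300,  "rate": 50,   "bw": 80},            # sensor node
    {"mem": 600,  "rate": 200,  "bw": 500},           # gateway
    {"mem": 4096, "rate": 2000, "bw": float("inf")},  # edge server
]

def latency(assignment):
    """Processing latency plus transfer latency whenever data moves to a
    different unit (a simplified single-hop, additive cost model)."""
    total = 0.0
    if assignment[0] != 0:
        # Raw data is acquired by the sensor node (unit 0) and must be shipped out.
        total += RAW_INPUT_KB / units[0]["bw"]
    for l, u in enumerate(assignment):
        total += layers[l]["flops"] / units[u]["rate"]
        nxt = assignment[l + 1] if l + 1 < len(assignment) else u
        if nxt != u:
            total += layers[l]["out"] / units[u]["bw"]
    return total

def feasible(assignment):
    """Respect each unit's memory budget; layers may only move forward along
    the sensor -> gateway -> edge chain (monotone placement)."""
    if any(a > b for a, b in zip(assignment, assignment[1:])):
        return False
    for u in range(len(units)):
        used = sum(layers[l]["mem"] for l, a in enumerate(assignment) if a == u)
        if used > units[u]["mem"]:
            return False
    return True

best = min(
    (a for a in product(range(len(units)), repeat=len(layers)) if feasible(a)),
    key=latency,
)
print("placement (layer -> unit):", best, "latency:", round(latency(best), 3), "s")

In the paper's setting the same kind of decision is taken jointly for multiple CNNs and multiple data sources sharing the IoT units, which is why the allocation is cast as a single optimization problem rather than solved per network.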
