An open source framework based on Kafka-ML for Distributed DNN inference over the Cloud-to-Things continuum
Journal of Systems Architecture (IF 4.5), Pub Date: 2021-06-16, DOI: 10.1016/j.sysarc.2021.102214
Daniel R. Torres, Cristian Martín, Bartolomé Rubio, Manuel Díaz

The current dependency of Artificial Intelligence (AI) systems on Cloud computing implies higher transmission latency and bandwidth consumption, and it complicates the real-time monitoring of physical objects, e.g., in the Internet of Things (IoT). Edge systems bring computing closer to end devices and support time-sensitive applications. However, Edge systems struggle with state-of-the-art Deep Neural Networks (DNN) due to computational resource limitations. This paper proposes a technology framework that combines the Edge-Cloud architecture concept with the advantages of BranchyNet to support fault-tolerant and low-latency AI predictions. The implementation and evaluation of this framework allow assessing the benefits of running Distributed DNNs (DDNN) over the Cloud-to-Things continuum. Compared to a Cloud-only deployment, the results obtained show a 45.34% improvement in response time. Furthermore, this proposal presents an extension of Kafka-ML that reduces the rigidity of managing and deploying DDNNs over the Cloud-to-Things continuum.
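As a rough illustration of the mechanism the abstract describes, the sketch below shows a BranchyNet-style early exit running at an edge node: the shallow branch answers locally when the entropy of its softmax output is low enough (BranchyNet's exit criterion), and otherwise offloads the sample to the deeper cloud-side model through a Kafka topic, Kafka being the transport Kafka-ML builds on. This is a minimal sketch, not the paper's implementation; the model file name, the `cloud_input` topic, and the entropy threshold are all assumptions made for illustration.

```python
# Minimal sketch of a BranchyNet-style edge node (assumed names throughout):
# a Kafka broker at localhost:9092, a Keras model "edge_branch.h5" holding
# the shallow layers plus an early-exit classifier, and a "cloud_input"
# topic consumed by the deeper model running in the cloud.
import json

import numpy as np
import tensorflow as tf
from kafka import KafkaProducer

ENTROPY_THRESHOLD = 0.5  # assumed value; tuned per model/dataset in practice

edge_model = tf.keras.models.load_model("edge_branch.h5")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

def normalized_entropy(probs: np.ndarray) -> float:
    """Entropy of the softmax output, normalized to [0, 1]."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum() / np.log(len(probs)))

def classify(sample: np.ndarray) -> dict:
    """Try the early exit locally; offload to the cloud branch if unsure."""
    probs = edge_model.predict(sample[np.newaxis, ...], verbose=0)[0]
    if normalized_entropy(probs) < ENTROPY_THRESHOLD:
        # Confident early exit: answer at the edge, no cloud round trip.
        return {"label": int(np.argmax(probs)), "exit": "edge"}
    # Not confident: publish the sample to the Kafka topic consumed by
    # the deeper cloud model, which produces the final prediction.
    producer.send("cloud_input", json.dumps(sample.tolist()).encode())
    return {"label": None, "exit": "cloud"}
```

In a DDNN deployment of the kind the abstract targets, the edge would typically forward the intermediate features from the split point rather than the raw sample, which is one way such designs reduce the bandwidth consumption the abstract highlights.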



Updated: 2021-06-29