Transparent Learning: An Incremental Machine Learning Framework Based on Transparent Computing
IEEE NETWORK ( IF 9.3 ) Pub Date : 2018-01-26 , DOI: 10.1109/mnet.2018.1700154
Kehua Guo , Zhonghe Liang , Ronghua Shi , Chao Hu , Zuoyong Li

In the Internet of Things environment, clients of all kinds are evolving toward networking and intellectualization. How to advance clients from merely collecting and displaying data to possessing intelligence has become a critical issue. In recent years, machine learning has become a representative technology for client intellectualization and is attracting growing interest. Machine learning involves massive computation, including data preprocessing and training, which requires substantial computing resources; lightweight clients, however, usually lack strong computing capability. To solve this problem, we bring the advantages of transparent computing (TC) to the client intellectualization setting and propose an incremental machine learning framework named transparent learning (TL), in which training tasks are moved from lightweight clients to servers and edge devices. After training, the models are transmitted to clients and updated through incremental training. In this study, a cache strategy is designed to partition the training set in order to optimize performance. We choose deep learning as the evaluation case and conduct several TensorFlow-based experiments to demonstrate the efficiency of the framework.
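The paper evaluates the framework with TensorFlow; as a library-free illustration of the core idea — heavy initial training on the server or edge, then incremental refinement of the transmitted model on a lightweight client — here is a minimal sketch. The toy 1-D linear model, the data, and all function names are our own illustration, not the authors' code.

```python
import random

def sgd_step(w, b, x, y, lr=0.01):
    # One stochastic-gradient step for a toy 1-D linear model y ≈ w*x + b.
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

def train(w, b, data, epochs=200):
    # Plain SGD over the given data, starting from the supplied parameters,
    # so the same routine serves both full training and incremental updates.
    for _ in range(epochs):
        for x, y in data:
            w, b = sgd_step(w, b, x, y)
    return w, b

# "Server/edge" phase: heavy training runs on the bulk of the data.
server_data = [(i / 10, 2.0 * (i / 10) + 1.0) for i in range(20)]
w, b = train(0.0, 0.0, server_data)

# "Client" phase: the trained model is transmitted to the lightweight client,
# which refines it incrementally as a few new samples arrive, rather than
# retraining from scratch.
client_data = [(x, 2.0 * x + 1.0) for x in (2.1, 2.3, 2.5)]
w, b = train(w, b, client_data, epochs=10)
```

The point of the sketch is the division of labor: the expensive loop over `server_data` would run on a TC server or edge device, while the client only executes the short incremental pass over its newly collected samples.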

Updated: 2018-01-30