Time-Efficient Ensemble Learning with Sample Exchange for Edge Computing
ACM Transactions on Internet Technology (IF 5.3) Pub Date: 2021-06-16, DOI: 10.1145/3409265
Wu Chen, Yong Yu, Keke Gai, Jiamou Liu, Kim-Kwang Raymond Choo

In existing ensemble learning algorithms (e.g., random forest), each base learner needs the entire dataset for sampling and training. However, this may be impractical in many real-world applications, and it incurs additional computational costs. To achieve better efficiency, we propose a decentralized framework: Multi-Agent Ensemble. The framework leverages edge computing to facilitate ensemble learning by balancing restricted data access (each learner sees only a small sub-dataset) against accuracy. Specifically, network edge nodes (learners) perform classification and prediction in our framework. Data is distributed across multiple base learners, which exchange samples via an interaction mechanism to improve their predictions. The proposed approach thus relies on a distributed training model rather than conventional centralized learning. Findings from experimental evaluations on 20 real-world datasets suggest that Multi-Agent Ensemble outperforms other ensemble approaches in accuracy even though the base learners require fewer samples (i.e., a significant reduction in computation costs).
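The workflow described in the abstract can be illustrated with a minimal sketch (a hypothetical simplification, not the paper's actual algorithm): data is partitioned into small shards, one per agent; each agent sends a random fraction of its shard to a peer via an assumed ring-shaped exchange; each agent then trains a base learner on its enlarged shard; and the ensemble predicts by majority vote. A trivial nearest-centroid classifier stands in for the base learners purely for self-containment.

```python
import random
from collections import Counter

class CentroidLearner:
    """Toy base learner: predicts the class with the nearest centroid."""

    def fit(self, samples):  # samples: list of (features, label) pairs
        sums, counts = {}, {}
        for x, y in samples:
            acc = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {
            y: [v / counts[y] for v in acc] for y, acc in sums.items()
        }

    def predict(self, x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda y: dist(self.centroids[y]))

def multi_agent_ensemble(data, n_agents=4, exchange_frac=0.25, seed=0):
    rng = random.Random(seed)
    rng.shuffle(data)
    # Each agent starts with a small sub-dataset (shard), not the full set.
    shards = [data[i::n_agents] for i in range(n_agents)]
    # Assumed interaction mechanism: each agent forwards a random
    # fraction of its shard to the next agent in a ring.
    for i in range(n_agents):
        k = max(1, int(exchange_frac * len(shards[i])))
        shards[(i + 1) % n_agents].extend(rng.sample(shards[i], k))
    learners = []
    for shard in shards:
        m = CentroidLearner()
        m.fit(shard)
        learners.append(m)

    def predict(x):  # ensemble prediction: majority vote across agents
        votes = Counter(m.predict(x) for m in learners)
        return votes.most_common(1)[0][0]

    return predict

# Usage: two well-separated clusters.
data = [([i / 10, 0.0], "a") for i in range(20)]
data += [([5.0 + i / 10, 5.0], "b") for i in range(20)]
predict = multi_agent_ensemble(data)
print(predict([0.2, 0.1]), predict([5.3, 5.1]))  # expect: a b
```

The names `multi_agent_ensemble`, `exchange_frac`, and the ring-based exchange are illustrative assumptions; the paper's base learners, exchange protocol, and aggregation rule may differ.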
