A cross-entropy based stacking method in ensemble learning
Journal of Intelligent & Fuzzy Systems (IF 1.7), Pub Date: 2020-07-06, DOI: 10.3233/jifs-200600
Weimin Ding, Shengli Wu

Stacking is one of the major types of ensemble learning techniques, in which a set of base classifiers contributes their outputs to a meta-level classifier, and the meta-level classifier combines them to produce more accurate classifications. In this paper, we propose a new stacking algorithm that uses cross-entropy as the loss function for the classification problem. Training is carried out with a neural network using stochastic gradient descent. One major characteristic of our method is that it treats each meta instance as a whole within a single optimization model. This differs from stacking methods such as stacking with multi-response linear regression and stacking with multi-response model trees, where each meta instance is divided into a set of sub-instances and a separate model is fitted to each sub-instance, one per class label, with no connection between the models. Treating the instance as a whole is therefore likely a better choice for finding suitable weights. Experiments with 22 data sets from the UCI machine learning repository show that the proposed stacking approach performs well: on average, it outperforms all three base classifiers, several state-of-the-art stacking algorithms, and some other representative ensemble learning methods.
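The paper does not give implementation details in the abstract, but the core idea — a meta-level model trained with cross-entropy loss via stochastic gradient descent on the stacked outputs of base classifiers, treating each meta instance jointly rather than per class label — can be sketched as follows. This is a minimal illustration, not the authors' exact architecture: it uses a single softmax layer over the concatenated base-classifier probability vectors, and all function names are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_meta(X_meta, y, n_classes, lr=0.1, epochs=200, seed=0):
    """SGD training of a softmax meta-classifier with cross-entropy loss.

    X_meta : (n_samples, n_base * n_classes) stacked base-classifier
             probability outputs (one meta instance per row, used as a whole).
    y      : integer class labels in [0, n_classes).
    """
    rng = np.random.default_rng(seed)
    n, d = X_meta.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]            # one-hot targets
    for _ in range(epochs):
        for i in rng.permutation(n):    # stochastic (per-instance) updates
            p = softmax(X_meta[i:i + 1] @ W + b)
            g = p - Y[i:i + 1]          # grad of cross-entropy w.r.t. logits
            W -= lr * X_meta[i:i + 1].T @ g
            b -= lr * g[0]
    return W, b

def predict_meta(X_meta, W, b):
    # Combine base-classifier outputs into final class predictions.
    return softmax(X_meta @ W + b).argmax(axis=1)
```

Because the softmax couples all class scores of a meta instance in one model, the learned weights are optimized jointly, in contrast to the per-class sub-instance models the abstract contrasts against.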

Updated: 2020-07-07