Multiclass Oblique Random Forests With Dual-Incremental Learning Capacity.
IEEE Transactions on Neural Networks and Learning Systems (IF 10.4). Pub Date: 2020-01-24. DOI: 10.1109/tnnls.2020.2964737
Zheng Chai , Chunhui Zhao

Oblique random forests (ObRFs) have attracted increasing attention recently. Their popularity is mainly driven by learning oblique hyperplanes instead of expensively searching for axis-aligned hyperplanes as in the standard random forest. However, most existing methods are trained in an off-line mode, which assumes that the training data are given as a single batch. Efficient dual-incremental learning (DIL) strategies for ObRFs, which handle newly arriving inputs from both existing classes and previously unseen classes, have rarely been explored. The goal of this article is to provide an ObRF with DIL capacity to perform classification on the fly. First, we propose a batch multiclass ObRF (ObRF-BM) algorithm that uses a broad learning system and a multi-to-binary method to obtain an optimal oblique hyperplane in a higher-dimensional space and then separate the samples into two supervised clusters at each node, which provides the basis for the subsequent incremental learning strategy. Then, the DIL strategy for ObRF-BM, termed ObRF-DIL, is developed by analytically updating the parameters of all nodes on the classification route for both the increment of input samples and the increment of input classes, so that the ObRF-BM model can be effectively updated without laborious retraining from scratch. Experimental results on several public data sets demonstrate the superiority of the proposed approach in comparison with several state-of-the-art methods.
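The two ideas in the abstract — fitting an oblique hyperplane at each node against two supervised clusters, and then updating it analytically when new samples arrive instead of retraining — can be sketched as follows. This is an illustrative simplification, not the authors' exact ObRF-BM/ObRF-DIL procedure (which uses a broad learning system and a multi-to-binary scheme): here a single node's hyperplane is fit by ridge regression against ±1 cluster labels, and the sample-increment update uses the standard recursive-least-squares (Sherman–Morrison) identity, so the incrementally updated weights match a full batch refit exactly.

```python
import numpy as np

class ObliqueNode:
    """One hypothetical split node: an oblique hyperplane fit by ridge
    regression on two supervised cluster labels y in {-1, +1}."""

    def __init__(self, n_features, reg=1e-2):
        d = n_features + 1                  # extra dimension for the bias term
        self.reg = reg
        self.P = np.eye(d) / reg            # running inverse of (A^T A + reg*I)
        self.w = np.zeros(d)                # oblique hyperplane coefficients

    @staticmethod
    def _augment(X):
        X = np.atleast_2d(X)
        return np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column

    def fit(self, X, y):
        """Batch training on the initial data (closed-form ridge solution)."""
        A = self._augment(X)
        self.P = np.linalg.inv(A.T @ A + self.reg * np.eye(A.shape[1]))
        self.w = self.P @ A.T @ np.asarray(y, dtype=float)

    def partial_fit(self, x, y):
        """Analytic one-sample update (recursive least squares via the
        Sherman-Morrison identity): no retraining from scratch, and the
        result equals refitting on all samples seen so far."""
        a = self._augment(x).ravel()
        k = self.P @ a / (1.0 + a @ self.P @ a)   # gain vector
        self.w = self.w + k * (y - a @ self.w)    # correct by prediction error
        self.P = self.P - np.outer(k, a @ self.P) # rank-1 downdate of inverse

    def side(self, X):
        """Route samples to the left (-1) or right (+1) child."""
        return np.sign(self._augment(X) @ self.w)
```

The key property, mirrored from the abstract's claim about ObRF-DIL, is that `partial_fit` leaves the node in exactly the state a batch `fit` on the union of old and new samples would produce, at O(d²) cost per sample instead of a full O(nd²) refit.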

Updated: 2020-01-24