A New Adaptive Weighted Deep Forest and Its Modifications
International Journal of Information Technology & Decision Making (IF 2.5), Pub Date: 2020-05-31, DOI: 10.1142/s0219622020500236
Lev V. Utkin, Andrei V. Konstantinov, Viacheslav S. Chukanov, Anna A. Meldo

A new adaptive weighted deep forest algorithm, which can be viewed as a modification of the confidence screening mechanism, is proposed. The main idea underlying the algorithm is the adaptive weighting of every training instance at each cascade level of the deep forest. The confidence screening mechanism for the deep forest proposed by Pang et al. strictly removes instances from the training and testing processes, in accordance with the obtained random forest class probability distributions, in order to simplify the whole algorithm. This strict removal may leave a very small number of training instances at the next levels of the deep forest cascade. The presented modification is more flexible: it assigns weights to instances in order to differentiate their influence when building decision trees at every level of the deep forest cascade, thereby overcoming the main disadvantage of the confidence screening mechanism. The proposed modification is similar to the AdaBoost algorithm to some extent. Numerical experiments show that the proposed modification outperforms the original deep forest. It is also illustrated how the proposed algorithm can be extended to solve transfer learning and distance metric learning problems.
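To make the weighting idea above concrete, the sketch below shows one cascade level that trains a weighted random forest and updates per-instance weights from the level's class probabilities, so that low-confidence instances gain influence at the next level instead of being removed. This is a minimal illustration assuming scikit-learn's RandomForestClassifier as the level learner; the exponential weight update and the renormalization step are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch of adaptive instance weighting across deep forest cascade levels.
# Assumptions: RandomForestClassifier as the level learner, an AdaBoost-style
# exponential up-weighting of low-confidence instances (hypothetical update,
# not the paper's exact rule).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cascade_level_with_weights(X, y, weights, n_estimators=100, random_state=0):
    """Train one cascade level and return the forest, its class probabilities,
    and updated per-instance weights.

    Instead of hard-removing confidently classified instances (confidence
    screening), every instance keeps a weight that modulates its influence
    when the next level's trees are built.
    """
    forest = RandomForestClassifier(n_estimators=n_estimators,
                                    random_state=random_state)
    forest.fit(X, y, sample_weight=weights)

    proba = forest.predict_proba(X)                  # shape (n_samples, n_classes)
    true_col = np.searchsorted(forest.classes_, y)   # column of each true label
    conf_true = proba[np.arange(len(y)), true_col]   # probability of the true class

    # Hypothetical update: up-weight instances the level is unsure about,
    # then renormalize so the weights sum to n_samples.
    new_weights = weights * np.exp(1.0 - conf_true)
    new_weights *= len(y) / new_weights.sum()
    return forest, proba, new_weights

# Usage sketch: run a few cascade levels, augmenting the features with the
# previous level's class probabilities, as in the standard deep forest.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    weights = np.ones(len(y))
    features = X
    for level in range(3):
        forest, proba, weights = cascade_level_with_weights(features, y, weights)
        features = np.hstack([X, proba])             # augmented features for next level
```

The design point the sketch tries to capture is that weighting is a soft version of screening: an instance the current level already classifies confidently contributes little to the next level's trees, but it is never discarded, so later levels never run out of training data.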
