Oblique predictive clustering trees
Knowledge-Based Systems (IF 8.8), Pub Date: 2021-06-12, DOI: 10.1016/j.knosys.2021.107228
Tomaž Stepišnik , Dragi Kocev

Predictive clustering trees (PCTs) are a well-established generalization of standard decision trees that can be used to solve a variety of predictive modeling tasks, including structured output prediction. Combining them into ensembles of PCTs yields state-of-the-art performance. However, they scale poorly to problems with high-dimensional output spaces and cannot exploit sparsity in data. Both issues are especially pronounced in (hierarchical) multi-label classification tasks, where the output can consist of hundreds of labels (high dimensionality), among which only a few are relevant for each example (sparsity). Sparsity is also often encountered in the input space (molecular fingerprints, bag-of-words representations, etc.). In this paper, we propose oblique predictive clustering trees capable of addressing these limitations. We design and implement two methods for learning oblique splits whose tests contain linear combinations of features, so that each split corresponds to an arbitrary hyperplane in the input space. The resulting oblique trees are efficient for high-dimensional data and are capable of exploiting sparse data. We experimentally evaluate the proposed methods on 60 benchmark datasets spanning 6 predictive modeling tasks. The results show that oblique predictive clustering trees achieve performance on par with state-of-the-art methods while being orders of magnitude faster than standard predictive clustering trees. We also show that meaningful feature importance scores can be extracted from the models learned with the proposed methods.
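To make the core idea concrete, the sketch below contrasts an axis-parallel split (one feature against a threshold) with an oblique split (a linear combination of features against a threshold, i.e., a hyperplane). This is an illustrative toy, not the authors' learning algorithm: the weight vector `w` and threshold `b` are assumed to have been learned by some optimization procedure, and the names here are hypothetical.

```python
import numpy as np

def axis_parallel_split(X, feature, threshold):
    """Standard decision-tree test: route rows by a single feature."""
    mask = X[:, feature] <= threshold
    return X[mask], X[~mask]

def oblique_split(X, w, b):
    """Oblique test: route rows by the hyperplane w·x <= b.

    Because the test is a single dot product, it costs O(nnz(x)) per
    example for sparse inputs, which is what makes oblique trees
    attractive for sparse, high-dimensional data.
    """
    mask = X @ w <= b
    return X[mask], X[~mask]

# Toy data: 4 examples, 3 features (weights are illustrative, not learned).
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
w = np.array([0.5, -1.0, 0.25])

left, right = oblique_split(X, w, b=0.0)
assert len(left) + len(right) == len(X)  # every example routed exactly once
```

An axis-parallel split is the special case where `w` has a single nonzero entry, which is why oblique trees strictly generalize standard PCT tests.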




Updated: 2021-06-22