Simple supervised dissimilarity measure: Bolstering iForest-induced similarity with class information without learning
Knowledge and Information Systems (IF 2.7), Pub Date: 2020-03-26, DOI: 10.1007/s10115-020-01454-3
Jonathan R. Wells, Sunil Aryal, Kai Ming Ting

Existing distance metric learning methods require optimisation to learn a feature space to transform data—this makes them computationally expensive in large datasets. In classification tasks, they make use of class information to learn an appropriate feature space. In this paper, we present a simple supervised dissimilarity measure which does not require learning or optimisation. It uses class information to measure dissimilarity of two data instances in the input space directly. It is a supervised version of an existing data-dependent dissimilarity measure called \(m_\mathrm{e}\). Our empirical results in k-NN and LVQ classification tasks show that the proposed simple supervised dissimilarity measure generally produces predictive accuracy better than or at least as good as existing state-of-the-art supervised and unsupervised dissimilarity measures.
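
As a rough orientation for readers unfamiliar with the unsupervised measure \(m_\mathrm{e}\) that the paper builds on, the Python sketch below illustrates the general recipe behind iForest-induced (mass-based) dissimilarity: build random axis-parallel partitioning trees on small subsamples, and score a pair of instances by the average relative mass of the smallest region that covers both of them. This is a minimal sketch under simplifying assumptions; the names MassDissimilarity, n_trees, subsample and max_depth are ours, and it implements only the unsupervised measure, not the paper's supervised, class-aware variant.

import numpy as np


class _Node:
    """One region of an iForest-style random partitioning tree."""
    __slots__ = ("size", "dim", "val", "left", "right")

    def __init__(self, size):
        self.size = size        # number of subsample points falling in this region
        self.dim = None         # split dimension (None => leaf)
        self.val = None         # split threshold
        self.left = None
        self.right = None


def _build_tree(X, idx, depth, max_depth, rng):
    """Recursively split a random subsample with axis-parallel random cuts."""
    node = _Node(idx.size)
    if depth >= max_depth or idx.size <= 1:
        return node
    dim = int(rng.integers(X.shape[1]))
    col = X[idx, dim]
    lo, hi = col.min(), col.max()
    if lo == hi:                # constant feature in this region: stop splitting
        return node
    node.dim, node.val = dim, rng.uniform(lo, hi)
    mask = col < node.val
    node.left = _build_tree(X, idx[mask], depth + 1, max_depth, rng)
    node.right = _build_tree(X, idx[~mask], depth + 1, max_depth, rng)
    return node


def _smallest_region_mass(node, x, y):
    """Mass (point count) of the smallest tree region containing both x and y."""
    while node.dim is not None:
        left_x = x[node.dim] < node.val
        left_y = y[node.dim] < node.val
        if left_x != left_y:    # x and y are separated one level down
            break
        node = node.left if left_x else node.right
    return node.size


class MassDissimilarity:
    """Illustrative unsupervised mass-based dissimilarity over random partitions."""

    def __init__(self, n_trees=100, subsample=256, max_depth=8, seed=0):
        self.n_trees = n_trees
        self.subsample = subsample
        self.max_depth = max_depth
        self.rng = np.random.default_rng(seed)

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.roots_, self.sizes_ = [], []
        for _ in range(self.n_trees):
            m = min(self.subsample, len(X))
            idx = self.rng.choice(len(X), size=m, replace=False)
            Xi = X[idx]
            self.roots_.append(_build_tree(Xi, np.arange(m), 0, self.max_depth, self.rng))
            self.sizes_.append(m)
        return self

    def dissimilarity(self, x, y):
        # Average relative mass of the smallest region covering both points:
        # pairs that only a large region can cover together come out as dissimilar.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        masses = [_smallest_region_mass(r, x, y) / m
                  for r, m in zip(self.roots_, self.sizes_)]
        return float(np.mean(masses))

    def pairwise(self, A, B):
        """Dissimilarity matrix, e.g. for a k-NN classifier with metric='precomputed'."""
        A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
        return np.array([[self.dissimilarity(a, b) for b in B] for a in A])

In a k-NN evaluation like the one described in the abstract, such a measure can be plugged in through a precomputed dissimilarity matrix rather than a learned metric. The paper's contribution is a supervised version of this kind of measure that also consults class labels when scoring a pair, while keeping the training-free character; that class-aware scoring is not reproduced in the sketch above.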

Updated: 2020-03-26