An Improved Boundary Uncertainty-Based Estimation for Classifier Evaluation
Journal of Signal Processing Systems ( IF 1.8 ) Pub Date : 2021-06-10 , DOI: 10.1007/s11265-021-01671-1
David Ha , Shigeru Katagiri , Hideyuki Watanabe , Miho Ohsaki

This paper proposes a new boundary uncertainty-based estimation method that achieves significantly higher accuracy, scalability, and applicability than our previously proposed boundary uncertainty estimation method. In our previous work, we introduced a new classifier evaluation metric that we termed “boundary uncertainty.” The name comes from evaluating the classifier based solely on measuring the equality between class posterior probabilities along the classifier boundary; satisfaction of this equality can be described as “uncertainty” along the classifier boundary. We also introduced a method to estimate this new evaluation metric. By focusing solely on the classifier boundary, boundary uncertainty defines an easier estimation target that can be accurately estimated directly on a finite training set, without using a validation set. Regardless of the dataset, boundary uncertainty lies between 0 and 1, where a value of 1 indicates that the posterior probability estimation required to attain the Bayes error has been achieved. We call our previous boundary uncertainty estimation method “Proposal 1” to contrast it with the new method introduced in this paper, which we call “Proposal 2.” Using Proposal 1, we performed successful classifier evaluation on real-world data and supported the results with theoretical analysis. However, Proposal 1 suffered from accuracy, scalability, and applicability limitations owing to the difficulty of locating a classifier boundary in a multidimensional sample space. The novelty of Proposal 2 is that it locally reformalizes boundary uncertainty in a single dimension focused on the classifier boundary. This dimensionality reduction, centered on the classifier boundary, is what yields the new method’s significant improvements.
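The core idea, that a boundary is well placed when the class posteriors are equal along it, can be illustrated with a toy one-dimensional two-class problem. The sketch below is not the paper's estimator (Proposals 1 and 2 are not specified in this abstract); it simply scores a candidate decision threshold by how close the two analytically known posteriors are to each other at that threshold, so the Bayes-optimal boundary scores 1 and misplaced boundaries score lower. The Gaussian class model and the `1 - |P1 - P2|` scoring rule are illustrative assumptions.

```python
import math

def posteriors(x, mu1=-1.0, mu2=1.0):
    """Class posteriors at x for two equal-prior Gaussian classes
    N(mu1, 1) and N(mu2, 1) (illustrative assumption, not from the paper)."""
    p1 = math.exp(-0.5 * (x - mu1) ** 2)
    p2 = math.exp(-0.5 * (x - mu2) ** 2)
    s = p1 + p2
    return p1 / s, p2 / s

def toy_boundary_uncertainty(threshold):
    """Toy score in [0, 1]: 1 when the class posteriors are equal
    at the decision boundary, lower as the boundary drifts away."""
    p1, p2 = posteriors(threshold)
    return 1.0 - abs(p1 - p2)

# The Bayes-optimal boundary (x = 0) has equal posteriors, so it scores 1.0;
# a shifted boundary (x = 1) scores lower because the posteriors differ there.
print(toy_boundary_uncertainty(0.0))  # 1.0
print(round(toy_boundary_uncertainty(1.0), 3))  # 0.238
```

In higher dimensions the difficulty that the abstract attributes to Proposal 1 is exactly locating this boundary set; Proposal 2's one-dimensional reformalization sidesteps that search.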
In classifier evaluation experiments on Support Vector Machines (SVM) and MultiLayer Perceptrons (MLP), we demonstrate that Proposal 2 offers competitive classifier evaluation accuracy compared to a benchmark Cross Validation (CV) method, as well as much higher scalability than both CV and Proposal 1.
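For reference, the CV benchmark against which Proposal 2 is compared repeatedly holds out part of the data for validation, which is the cost that boundary uncertainty avoids. A minimal k-fold sketch, using a simple nearest-class-mean rule on synthetic data purely as a stand-in classifier (the paper's actual experiments use SVM and MLP):

```python
import random

def k_fold_cv_accuracy(xs, ys, k=5, seed=0):
    """Estimate classifier accuracy by k-fold cross-validation.
    The classifier is a nearest-class-mean rule (illustrative stand-in)."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        # "Training": compute per-class means on the training split.
        c0 = [xs[i] for i in train if ys[i] == 0]
        c1 = [xs[i] for i in train if ys[i] == 1]
        m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
        # Evaluate on the held-out fold: predict the class with the nearer mean.
        correct = sum(
            1 for i in fold
            if (1 if abs(xs[i] - m1) < abs(xs[i] - m0) else 0) == ys[i]
        )
        accs.append(correct / len(fold))
    return sum(accs) / k

# Synthetic 1-D data: class 0 centered at -1, class 1 at +1.
rng = random.Random(1)
xs = [rng.gauss(-1, 1) for _ in range(100)] + [rng.gauss(1, 1) for _ in range(100)]
ys = [0] * 100 + [1] * 100
print(round(k_fold_cv_accuracy(xs, ys), 2))
```

Each of the k training passes retrains the model, which is why CV scales poorly for expensive learners; boundary uncertainty, as described above, is estimated on the training set alone.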



