AEKOC+: Kernel Ridge Regression-Based Auto-Encoder for One-Class Classification Using Privileged Information
Cognitive Computation ( IF 4.3 ) Pub Date : 2020-01-06 , DOI: 10.1007/s12559-019-09705-4
Chandan Gautam , Aruna Tiwari , M. Tanveer

In recent years, non-iterative kernel learning approaches have received considerable attention from researchers, and kernel ridge regression (KRR) is one of them. Recently, a KRR-based auto-encoder was developed for the one-class classification (OCC) task, termed AEKOC. OCC is generally used for outlier or novelty detection. The human brain can detect outliers after learning from normal samples alone; similarly, OCC uses only normal samples to train a model, and the trained model can then be used for outlier detection. In this paper, AEKOC is extended to utilize privileged information, which is generally ignored by AEKOC and other traditional machine learning techniques but is usually present in human learning. For this purpose, we combine the learning using privileged information (LUPI) framework with AEKOC and propose a classifier referred to as AEKOC+. Privileged information is available only during training, not during testing; therefore, AEKOC cannot exploit it when building the model. AEKOC+, however, handles privileged information efficiently owing to the inclusion of the LUPI framework. Experiments conducted on the MNIST dataset and on various other datasets from the UCI machine learning repository demonstrate the superiority of AEKOC+ over AEKOC. Our formulation shows that AEKOC does not utilize the privileged features in learning, whereas the AEKOC+ formulation lets it learn from the privileged features differently from the regular features and improves the generalization performance of AEKOC. Moreover, AEKOC+ also outperformed two LUPI-based one-class classifiers, OCSVM+ and SSVDD+.
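The abstract does not give the AEKOC+ formulation itself, but the base AEKOC idea it builds on — a closed-form KRR auto-encoder trained on normal samples only, with reconstruction error as the outlier score — can be illustrated with a minimal sketch. The class name, the RBF kernel choice, the ridge penalty `lam`, and the percentile-based threshold below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared Euclidean distances, then the RBF map
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KRRAutoEncoderOCC:
    """Sketch of a KRR auto-encoder one-class classifier (AEKOC-style).

    Trains non-iteratively in closed form on normal samples only;
    at test time, a large reconstruction error flags an outlier.
    """
    def __init__(self, lam=1e-2, gamma=0.5, quantile=95):
        self.lam, self.gamma, self.quantile = lam, gamma, quantile

    def fit(self, X):
        self.X = X
        n = len(X)
        K = rbf_kernel(X, X, self.gamma)
        # Closed-form KRR solution; the auto-encoder targets are the inputs
        self.beta = np.linalg.solve(K + self.lam * np.eye(n), X)
        # Threshold the outlier score at a percentile of the training errors
        errs = np.linalg.norm(K @ self.beta - X, axis=1)
        self.threshold = np.percentile(errs, self.quantile)
        return self

    def predict(self, Xt):
        Kt = rbf_kernel(Xt, self.X, self.gamma)
        errs = np.linalg.norm(Kt @ self.beta - Xt, axis=1)
        return np.where(errs <= self.threshold, 1, -1)  # 1 = normal, -1 = outlier
```

The LUPI extension in AEKOC+ would additionally consume privileged features during `fit` (for example, to shape the per-sample slack), while `predict` keeps this signature, since privileged information is unavailable at test time.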

Updated: 2020-01-06