Non-convex approximation based l0-norm multiple indefinite kernel feature selection
Applied Intelligence (IF 5.3) Pub Date: 2019-07-16, DOI: 10.1007/s10489-018-01407-y
Hui Xue, Yu Song

Multiple kernel learning (MKL) for feature selection uses kernels to capture complex properties of features and has been shown to be among the most effective approaches to feature selection. A natural way to perform feature selection is to use the l0-norm to obtain sparse solutions; however, the optimization problem involving the l0-norm is NP-hard. Previous MKL methods therefore typically use the l1-norm to obtain sparse kernel combinations. But the l1-norm, as a convex approximation of the l0-norm, sometimes cannot attain the desired solution of the l0-norm regularized problem and may incur a loss in prediction accuracy. In contrast, various non-convex approximations of the l0-norm have been proposed and perform better in many linear feature selection methods. In this paper, we propose a novel l0-norm based MKL method (l0-MKL) for feature selection that places a non-convex approximation constraint on the kernel combination coefficients, so that features are selected automatically. Since indefinite kernels often perform better empirically than positive definite kernels, l0-MKL is built on the primal form of multiple indefinite kernel learning for feature selection. The resulting non-convex optimization problem is reformulated as a difference-of-convex-functions (DC) program and solved with the DC algorithm (DCA). Experiments on real-world datasets demonstrate that l0-MKL outperforms related state-of-the-art methods in both feature selection and classification performance.
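To make the DC reformulation concrete, here is a minimal sketch using the capped-l1 function, a standard non-convex surrogate for the l0-norm. The choice of surrogate, the symbols θ (kernel combination coefficients), ε > 0 (a smoothing parameter), and the convex loss term ℓ are illustrative assumptions for this sketch, not necessarily the exact formulation used in the paper:

% Capped-l1 surrogate and its DC split g - h (both g and h convex):
\[
\|\theta\|_0 \;\approx\; \sum_{i=1}^{m} \min\!\Big(\tfrac{|\theta_i|}{\epsilon},\, 1\Big)
\;=\; \underbrace{\tfrac{1}{\epsilon}\,\|\theta\|_1}_{g(\theta)}
\;-\; \underbrace{\tfrac{1}{\epsilon}\sum_{i=1}^{m} \max\!\big(|\theta_i| - \epsilon,\, 0\big)}_{h(\theta)} .
\]
% DCA minimizes l(theta) + g(theta) - h(theta) by linearizing the concave
% part -h at the current iterate and solving a convex subproblem:
\[
\theta^{k+1} \in \operatorname*{arg\,min}_{\theta}\; \ell(\theta) + g(\theta) - \langle s^{k},\, \theta\rangle,
\qquad s^{k} \in \partial h(\theta^{k}).
\]

Each subproblem is convex, and the objective value is non-increasing across iterations, which is the standard convergence property of the general DCA scheme the abstract refers to.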

Updated: 2020-01-04