Robust Two-Dimensional Linear Discriminant Analysis via Information Divergence
Neural Processing Letters (IF 2.6), Pub Date: 2020-10-03, DOI: 10.1007/s11063-020-10359-9
Lei Zhang, Zhizheng Liang

Because collected data may contain outliers and noise, many variants of LDA and 2DLDA have been proposed to address this problem, and these improved methods have been shown to perform substantially better than classical LDA and 2DLDA. In this paper we propose a novel two-dimensional linear discriminant analysis method based on information divergence. The proposed method applies a weighted L2,1 norm to learn a robust projection matrix in the image space. In the proposed model, we introduce weights into both the within-class scatter and the total scatter, and learn these weights by imposing information divergence on the objective functions. To solve the resulting model, we resort to Dinkelbach's extended algorithm for the proposed ratio minimization problem. Exploiting the structure of the subproblems, we derive an equivalent representation that can be solved by alternating optimization, where each block of variables has favorable optimization properties. The proposed model not only overcomes the small-sample-size problem but also suppresses outliers through an adaptive weighting scheme guided by information divergence. Experiments on several image data sets demonstrate that the classification performance of the proposed method is superior to that of several existing methods in the presence of outliers.
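The abstract does not give the exact formulation of the model, but the computational tool it names, Dinkelbach's algorithm for ratio (fractional) minimization, follows a standard scheme: repeatedly solve a parametric subproblem min_x f(x) - lambda*g(x) and update lambda = f(x)/g(x) until the subproblem value reaches zero. The sketch below illustrates this basic iteration on a toy scalar ratio; the names dinkelbach_ratio_min and solve_subproblem, and the grid-search subproblem solver, are illustrative assumptions and not the authors' block-wise alternating solver over the weighted L2,1-norm scatter matrices.

    import numpy as np

    def dinkelbach_ratio_min(f, g, solve_subproblem, x0, tol=1e-8, max_iter=100):
        # Minimize f(x)/g(x), with g(x) > 0, via Dinkelbach's iteration:
        # at the optimal ratio lambda*, min_x f(x) - lambda* * g(x) equals zero.
        x, lam = x0, f(x0) / g(x0)
        for _ in range(max_iter):
            x = solve_subproblem(lam, x)          # minimize f(x) - lam * g(x)
            if abs(f(x) - lam * g(x)) < tol:      # subproblem value ~ 0 => optimal
                break
            lam = f(x) / g(x)                     # update the ratio estimate
        return x, f(x) / g(x)

    # Toy usage: minimize ((x - 3)^2 + 1) / (x^2 + 1) over the real line,
    # with the parametric subproblem solved by a dense grid search purely
    # to keep the sketch self-contained.
    f = lambda x: (x - 3.0) ** 2 + 1.0
    g = lambda x: x ** 2 + 1.0
    grid = np.linspace(-10.0, 10.0, 200001)
    solve = lambda lam, x: grid[np.argmin(f(grid) - lam * g(grid))]
    x_star, ratio = dinkelbach_ratio_min(f, g, solve, x0=0.0)
    print(x_star, ratio)

In the paper's setting, the roles of f and g would be played by the weighted within-class and total scatter terms, and each parametric subproblem would itself be handled by alternating optimization over blocks of variables.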



Last updated: 2020-10-04