Low-Rank Discriminative Least Squares Regression for Image Classification
Signal Processing (IF 3.4), Pub Date: 2020-08-01, DOI: 10.1016/j.sigpro.2020.107485
Zhe Chen , Xiao-Jun Wu , Josef Kittler

Recent least squares regression (LSR) methods mainly try to learn slack regression targets to replace the strict zero-one labels. However, enlarging the distance between different classes can also amplify the differences among intra-class targets, and crudely pursuing relaxed targets may lead to overfitting. To solve the above problems, we propose a low-rank discriminative least squares regression model (LRDLSR) for multi-class image classification. Specifically, LRDLSR imposes a class-wise low-rank constraint on the intra-class regression targets to encourage their compactness and similarity. Moreover, LRDLSR introduces an additional regularization term on the learned targets to avoid overfitting. These two improvements help to learn a more discriminative projection for regression and thus achieve better classification performance. Experimental results over a range of image databases demonstrate the effectiveness of the proposed LRDLSR method.
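Since the abstract describes the model only at a high level, the following is a minimal NumPy sketch of one way a class-wise low-rank, label-anchored LSR objective of this kind could be optimized by alternating minimization. The objective, the update rules, and the hyper-parameter names (lam1, lam2, lam3) are illustrative assumptions, not the authors' published algorithm.

```python
# Hedged sketch of an LRDLSR-style model, based only on the abstract's description.
# Assumed objective (not the paper's exact formulation):
#   min_{W,T}  ||X W - T||_F^2 + lam1 * sum_k ||T_k||_*   (class-wise low-rank targets)
#            + lam2 * ||T - Y||_F^2                        (keep targets near the labels)
#            + lam3 * ||W||_F^2                            (ridge penalty on the projection)
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def lrdlsr_sketch(X, y, lam1=0.1, lam2=1.0, lam3=0.01, iters=20):
    n, d = X.shape
    classes = np.unique(y)
    Y = np.zeros((n, len(classes)))          # strict zero-one label matrix
    for j, cls in enumerate(classes):
        Y[y == cls, j] = 1.0
    T = Y.copy()                             # relaxed regression targets
    I = np.eye(d)
    for _ in range(iters):
        # W-step: ridge regression towards the current relaxed targets T
        W = np.linalg.solve(X.T @ X + lam3 * I, X.T @ T)
        # T-step: each class's target block is pulled towards both X W and Y,
        # then shrunk towards low rank via singular value thresholding
        P = X @ W
        for j, cls in enumerate(classes):
            idx = (y == cls)
            blend = (P[idx] + lam2 * Y[idx]) / (1.0 + lam2)
            T[idx] = svt(blend, lam1 / (2.0 * (1.0 + lam2)))
    return W, classes

def predict(W, classes, X_test):
    # Classify by the largest regression response
    return classes[np.argmax(X_test @ W, axis=1)]
```

In this sketch the class-wise SVT step plays the role of the low-rank constraint that keeps intra-class targets compact and similar, while the lam2 term anchors the learned targets to the original labels to curb overfitting; in practice the three weights would be tuned by cross-validation.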

Updated: 2020-08-01