Probabilistic robust regression with adaptive weights — a case study on face recognition
Frontiers of Computer Science ( IF 3.4 ) Pub Date : 2020-01-20 , DOI: 10.1007/s11704-019-9097-x
Jin Li, Quan Chen, Jingwen Leng, Weinan Zhang, Minyi Guo

Robust regression plays an important role in many machine learning problems. A primal approach relies on the use of the Huber loss and an iteratively reweighted ℓ2 method. However, because the Huber loss is not smooth and its corresponding distribution cannot be represented as a Gaussian scale mixture, such an approach is extremely difficult to handle within a probabilistic framework. To address these limitations, this paper proposes two novel losses and their corresponding probability functions. One, called Soft Huber, is well suited for modeling non-Gaussian noise. The other, Nonconvex Huber, helps produce much sparser results when imposed as a prior on the regression vector. They can represent any ℓq loss (1/2 ≤ q < 2) through tuning parameters, which makes the regression model more robust. We also show that both distributions have an elegant form: a Gaussian scale mixture with a generalized inverse Gaussian mixing density. This enables us to devise an expectation maximization (EM) algorithm for solving the regression model. Through EM we obtain an adaptive weight, which is very useful for removing noisy data or irrelevant features in regression problems. We apply our model to the face recognition problem and show that it not only reduces the impact of noisy pixels but also removes more irrelevant face images. Our experiments demonstrate promising results on two datasets.
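The abstract's central idea — down-weighting samples with large residuals via adaptive weights — can be illustrated with a minimal sketch. The code below is not the paper's EM algorithm (which requires the Soft/Nonconvex Huber distributions and their generalized inverse Gaussian mixing densities); it is a classical iteratively reweighted least-squares loop with Huber-style weights, where `delta` and the small ridge term are assumptions chosen for illustration. The weights it produces play the same role as the adaptive weights described above.

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """Huber-style weight: 1 for small residuals, delta/|r| for large ones,
    so gross outliers contribute little to the weighted least-squares fit."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def irls_robust_regression(X, y, delta=1.0, n_iter=50):
    """Iteratively reweighted least squares with Huber weights.

    Each iteration solves a weighted least-squares problem; samples with
    large residuals are down-weighted, mimicking the adaptive weights
    obtained from the paper's EM procedure."""
    n, d = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    w = np.ones(n)
    for _ in range(n_iter):
        r = y - X @ beta
        w = huber_weight(r, delta)
        Xw = X * w[:, None]                      # apply weights row-wise
        # Normal equations (X^T W X) beta = X^T W y, with a tiny ridge
        # term for numerical stability (an assumption, not from the paper).
        beta = np.linalg.solve(X.T @ Xw + 1e-8 * np.eye(d), Xw.T @ y)
    return beta, w

# Synthetic demo: 200 samples, 3 features, 10 gross outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=200)
y[:10] += 20.0  # corrupt the first 10 targets
beta, w = irls_robust_regression(X, y)
```

After the loop, the recovered coefficients stay close to `beta_true` while the weights on the corrupted samples collapse toward zero — the same effect the paper exploits to suppress noisy pixels and irrelevant face images.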

Updated: 2020-01-20