New Insights into Learning with Correntropy-Based Regression
Neural Computation (IF 2.7), Pub Date: 2021-01-01, DOI: 10.1162/neco_a_01334
Yunlong Feng

Stemming from information-theoretic learning, the correntropy criterion and its applications to machine learning tasks have been extensively studied and explored. Its application to regression problems leads to a robustness-enhanced regression paradigm: correntropy-based regression. While the paradigm has attracted a great variety of successful real-world applications, its theoretical properties have also been investigated recently in a series of studies from a statistical learning viewpoint. The resulting big picture is that, under certain conditions, correntropy-based regression robustly regresses toward the conditional mode function or the conditional mean function. Continuing this trend and going further, in this study we report some new insights into this problem. First, we show that under the additive-noise regression model, such a regression paradigm can be deduced from minimum distance estimation, implying that the resulting estimator is essentially a minimum distance estimator and thus possesses robustness properties. Second, we show that the paradigm in fact provides a unified approach to regression problems in that, under certain conditions, it approaches the conditional mean, the conditional mode, and the conditional median functions. Third, when the paradigm is used to learn the conditional mean function, we present new results by establishing error bounds and exponential convergence rates under conditional (1+ε)-moment assumptions. The saturation effect on the established convergence rates, observed under the (1+ε)-moment assumptions, still occurs, indicating the inherent bias of the regression estimator. These novel insights deepen our understanding of correntropy-based regression, help cement the theoretical correntropy framework, and enable us to investigate learning schemes induced by general bounded, nonconvex loss functions.
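
To make the paradigm concrete, recall that the correntropy criterion replaces the squared loss with a bounded, smooth loss induced by a Gaussian kernel, e.g. ell_sigma(t) = sigma^2 * (1 - exp(-t^2 / sigma^2)) (kernel conventions vary across the literature), so each sample's influence on the fit is capped. The sketch below is a minimal, hypothetical illustration rather than the paper's estimator or analysis: it fits a linear model under this loss with the standard half-quadratic / iteratively reweighted least-squares fixed point, in which sample i receives the weight exp(-r_i^2 / sigma^2); the function name mcc_irls and all parameter choices are ours.

    import numpy as np

    def mcc_irls(X, y, sigma=1.0, n_iter=50):
        # Correntropy-based linear regression via the standard
        # half-quadratic / iteratively reweighted least-squares scheme.
        # Sample i gets weight exp(-r_i^2 / sigma^2), so gross outliers
        # (large residuals r_i) are smoothly down-weighted toward zero.
        A = np.column_stack([X, np.ones(len(X))])     # design matrix + intercept
        theta = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS warm start
        for _ in range(n_iter):
            r = y - A @ theta                          # current residuals
            sw = np.sqrt(np.exp(-r**2 / sigma**2))     # sqrt of correntropy weights
            theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        return theta

    # Toy check: a clean linear signal plus a few gross outliers.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1))
    y = 3.0 * X[:, 0] + 0.5 + rng.normal(scale=0.1, size=200)
    y[:10] += 50.0                                     # contaminate 5% of samples
    print(mcc_irls(X, y, sigma=1.0))                   # close to [3.0, 0.5]

Because the per-sample weight vanishes for large residuals, gross outliers barely move the fit. Note also that as sigma grows the loss approaches the least-squares loss (targeting the conditional mean), while small sigma emphasizes the conditional mode, which is one elementary way to see the unifying role described above.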

Updated: 2021-01-01