Kernel gradient descent algorithm for information theoretic learning
Journal of Approximation Theory (IF 0.9), Pub Date: 2020-12-29, DOI: 10.1016/j.jat.2020.105518
Ting Hu, Qiang Wu, Ding-Xuan Zhou

Information theoretic learning is a learning paradigm that uses concepts of entropy and divergence from information theory. A variety of signal processing and machine learning methods fall into this framework, and the minimum error entropy principle is a typical one among them. In this paper, we study a kernel version of minimum error entropy methods that can be used to find nonlinear structures in the data. We show that kernel minimum error entropy can be implemented by kernel-based gradient descent algorithms, with or without regularization, and we derive convergence rates for both algorithms.
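The paper itself is not reproduced here, but as a rough illustration of the kind of algorithm the abstract describes, below is a minimal sketch of kernel gradient descent for minimum error entropy. It is not the authors' implementation: it assumes a Gaussian RKHS kernel, estimates Rényi's quadratic error entropy through the empirical information potential, and the function names (gaussian_gram, kernel_mee_gd), hyperparameters, and toy data are all illustrative choices. Setting reg > 0 adds RKHS-norm regularization, corresponding to the regularized variant mentioned in the abstract.

import numpy as np

def gaussian_gram(X, width):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * width^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_mee_gd(X, y, kernel_width=1.0, entropy_width=0.5,
                  step=0.1, reg=0.0, iters=500):
    """Gradient descent on the coefficients alpha of f(x) = sum_k alpha_k K(x, x_k),
    maximizing the information potential V = (1/n^2) sum_{i,j} G(e_i - e_j)
    of the errors e_i = y_i - f(x_i), i.e. minimizing the empirical quadratic
    error entropy -log V.  reg > 0 adds RKHS-norm regularization alpha^T K alpha."""
    n = len(y)
    K = gaussian_gram(X, kernel_width)
    alpha = np.zeros(n)
    for _ in range(iters):
        e = y - K @ alpha                # residuals of the current predictor
        diff = e[:, None] - e[None, :]   # pairwise error differences e_i - e_j
        G = np.exp(-diff ** 2 / (2.0 * entropy_width ** 2))
        # dV/de_i for the Gaussian information potential
        dV_de = -(2.0 / (n ** 2 * entropy_width ** 2)) * np.sum(G * diff, axis=1)
        # chain rule through e = y - K @ alpha; ascend V, descend the regularizer
        grad = -K @ dV_de - 2.0 * reg * (K @ alpha)
        alpha += step * grad
    return alpha, K

# Toy usage: a nonlinear target corrupted by heavy-tailed noise, the setting
# where entropy-based criteria tend to be more robust than least squares.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(80, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_t(df=2, size=80)
alpha, K = kernel_mee_gd(X, y, reg=1e-3)
pred = K @ alpha + np.mean(y - K @ alpha)

One detail worth noting: the error entropy is invariant under a constant shift of the predictor, so MEE determines the fit only up to an additive constant. The sketch therefore re-centers the predictions with the mean residual, a standard adjustment in minimum error entropy methods.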




Updated: 2021-01-07