Kullback–Leibler Divergence Metric Learning
IEEE Transactions on Cybernetics (IF 11.8), Pub Date: 2020-07-28, DOI: 10.1109/tcyb.2020.3008248
Shuyi Ji, Zizhao Zhang, Shihui Ying, Liejun Wang, Xibin Zhao, Yue Gao

The Kullback–Leibler divergence (KLD), which is widely used to measure the similarity between two distributions, plays an important role in many applications. In this article, we address the KLD metric-learning task, which aims to learn the best KLD-type metric from the distributions of datasets. Concretely, we first extend the conventional KLD by introducing a linear mapping and obtain the best KLD for expressing the similarity of data distributions by optimizing this mapping. This improves the expressivity of the data distributions: distributions from the same class are drawn close together, while those from different classes are pushed apart. KLD metric learning is then formulated as a minimization problem on the manifold of all positive-definite matrices. To solve this optimization problem, we develop an intrinsic steepest descent method that preserves the manifold structure of the metric at every iteration. Finally, we evaluate the proposed method, together with ten popular metric-learning approaches, on 3-D object classification and document classification tasks. The experimental results show that the proposed method outperforms all compared methods.
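To make the two steps sketched in the abstract concrete, the snippet below is a minimal illustration, not the authors' implementation, under the common assumption of Gaussian distributions: a closed-form KLD evaluated after a linear mapping x → Ax, and a single intrinsic steepest-descent step on the manifold of symmetric positive-definite matrices using the affine-invariant geometry. The function names, the Gaussian assumption, and the specific choice of geometry and retraction are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def gaussian_kld(mu1, cov1, mu2, cov2):
    """Closed-form KL(N(mu1, cov1) || N(mu2, cov2)) between two Gaussians."""
    d = mu1.shape[0]
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return 0.5 * (np.trace(cov2_inv @ cov1) + diff @ cov2_inv @ diff
                  - d + logdet2 - logdet1)

def mapped_kld(mu1, cov1, mu2, cov2, A):
    """KLD after the linear mapping x -> A x, which sends N(mu, cov) to
    N(A mu, A cov A^T); optimizing this mapping is the metric-learning
    step described in the abstract (hypothetical parameterization)."""
    return gaussian_kld(A @ mu1, A @ cov1 @ A.T, A @ mu2, A @ cov2 @ A.T)

def spd_steepest_descent_step(M, egrad, step=0.1):
    """One intrinsic steepest-descent step on the SPD manifold under the
    affine-invariant metric (an assumed geometry): the Euclidean gradient
    is symmetrized, converted to a Riemannian gradient, and the update
    follows the geodesic via the matrix exponential, so the iterate stays
    symmetric positive definite."""
    M_half = np.real(sqrtm(M))
    sym = 0.5 * (egrad + egrad.T)
    return M_half @ expm(-step * M_half @ sym @ M_half) @ M_half

# Toy usage with two 3-D Gaussians and a random linear mapping.
rng = np.random.default_rng(0)
mu_a, mu_b = rng.normal(size=3), rng.normal(size=3)
cov_a, cov_b = np.eye(3), 2.0 * np.eye(3)
A = rng.normal(size=(3, 3))
print(mapped_kld(mu_a, cov_a, mu_b, cov_b, A))
```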

Updated: 2020-07-28