Low Rank Regularization: A review
Neural Networks (IF 7.8) Pub Date: 2020-10-31, DOI: 10.1016/j.neunet.2020.09.021
Zhanxuan Hu, Feiping Nie, Rong Wang, Xuelong Li

Low Rank Regularization (LRR), in essence, imposes a low-rank or approximately low-rank assumption on the target we aim to learn, and has achieved great success in many data analysis tasks. Over the last decade, much progress has been made in both theory and applications. Nevertheless, the intersection between these two lines of work remains rare. To build a bridge between practical applications and theoretical studies, in this paper we provide a comprehensive survey of LRR. Specifically, we first review recent advances on two issues that all LRR models face: (1) rank-norm relaxation, which seeks a relaxation to replace the rank minimization problem; (2) model optimization, which seeks an efficient optimization algorithm for solving the relaxed LRR models. For the first issue, we provide a detailed summary of various relaxation functions and conclude that non-convex relaxations can alleviate the punishment bias problem of convex relaxations. For the second issue, we summarize the representative optimization algorithms used in previous studies and analyze their advantages and disadvantages. As the main goal of this paper is to promote the application of non-convex relaxations, we conduct extensive experiments to compare different relaxation functions. The experimental results demonstrate that non-convex relaxations generally provide a clear advantage over convex ones. Such a result is encouraging for further improving the performance of existing LRR models.
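To make the rank-norm relaxation and punishment bias issues concrete, the following minimal NumPy sketch (not from the paper) contrasts the proximal operator of the nuclear norm, the standard convex relaxation, with a truncated-nuclear-norm shrinkage, a common non-convex relaxation, written here in its widely used heuristic per-singular-value form. The nuclear norm soft-thresholds every singular value by the same amount, so large (informative) singular values are also penalized; the truncated variant leaves the top-r values untouched, which is the bias reduction the abstract refers to.

```python
import numpy as np

def svt_nuclear(X, tau):
    """Proximal operator of the nuclear norm (convex relaxation):
    soft-thresholds every singular value by tau, so large singular
    values are shrunk too -- the source of the punishment bias."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def svt_truncated(X, tau, r):
    """Heuristic shrinkage for a truncated nuclear norm (a common
    non-convex relaxation): the r largest singular values are kept
    intact and only the tail is thresholded, reducing the bias."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_new = s.copy()
    s_new[r:] = np.maximum(s_new[r:] - tau, 0.0)
    return U @ np.diag(s_new) @ Vt

# Toy comparison: recover a noisy rank-2 matrix with both operators.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
X = L + 0.1 * rng.standard_normal((50, 50))
for name, Y in [("nuclear  ", svt_nuclear(X, 1.0)),
                ("truncated", svt_truncated(X, 1.0, r=2))]:
    print(name, "relative error:", np.linalg.norm(Y - L) / np.linalg.norm(L))
```

In a full LRR model these operators would appear as the per-iteration subproblem inside an algorithm such as ADMM or proximal gradient descent; the sketch isolates only the shrinkage step that distinguishes convex from non-convex relaxations.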




Updated: 2020-11-02