Newton method for ℓ0-regularized optimization
Numerical Algorithms (IF 2.1), Pub Date: 2021-03-24, DOI: 10.1007/s11075-021-01085-x
Shenglong Zhou, Lili Pan, Naihua Xiu

As a tractable approach, regularization is frequently adopted in sparse optimization. This gives rise to regularized optimization, which aims to minimize the ℓ0 norm or one of the continuous surrogates that characterize sparsity. From the continuity of the surrogates to the discreteness of the ℓ0 norm, the most challenging model is ℓ0-regularized optimization. There is an impressive body of work on numerical algorithms developed to overcome this challenge. However, most existing methods only ensure either that a (sub)sequence converges to a stationary point, from the deterministic optimization perspective, or that the distance between each iterate and any given sparse reference point is bounded by an error bound in a probabilistic sense. In this paper, we develop a Newton-type method for ℓ0-regularized optimization and prove that the generated sequence converges to a stationary point globally and quadratically under standard assumptions, theoretically explaining why our method can perform surprisingly well.
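To make the model concrete: ℓ0-regularized optimization seeks to minimize f(x) + λ‖x‖₀, where ‖x‖₀ counts the nonzero entries of x. The sketch below is not the authors' algorithm but a generic illustration of the support-then-Newton idea for a least-squares f: a hard-thresholding (proximal) step estimates the support, and a Newton step is taken restricted to that support. All names and parameters are hypothetical.

```python
import numpy as np

def l0_newton_sketch(A, b, lam, steps=50):
    """Heuristic sketch: minimize 0.5*||Ax - b||^2 + lam*||x||_0.

    Alternates a hard-thresholding step (the proximal operator of the
    scaled l0 norm, which estimates the support) with a Newton step
    restricted to that support; for quadratic f the restricted Newton
    step is an exact least-squares solve.
    """
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(steps):
        g = A.T @ (A @ x - b)                # gradient of the smooth part
        u = x - g / L                        # forward (gradient) step
        u[u ** 2 < 2 * lam / L] = 0.0        # prox of (lam/L)*||.||_0: hard threshold
        T = np.flatnonzero(u)                # estimated support
        x = np.zeros(n)
        if T.size:                           # Newton step on the support
            AT = A[:, T]
            x[T] = np.linalg.solve(AT.T @ AT, AT.T @ b)
    return x
```

On noiseless, well-conditioned problems this kind of scheme typically identifies the support in a few iterations, after which the restricted Newton step is exact; the quadratic local convergence proved in the paper formalizes this behavior for its own (different) Newton-type iteration.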




Updated: 2021-03-24