A double parameter self-scaling memoryless BFGS method for unconstrained optimization
Computational and Applied Mathematics ( IF 2.5 ) Pub Date : 2020-06-02 , DOI: 10.1007/s40314-020-01157-z
Neculai Andrei

A double parameter self-scaling memoryless BFGS method for unconstrained optimization is presented. In this method, the first two terms of the self-scaling memoryless BFGS matrix are scaled with one positive parameter, while the third term is scaled with another positive parameter. The first parameter, scaling the first two terms, is determined so as to cluster the eigenvalues of the memoryless BFGS matrix. The second parameter, scaling the third term, is computed as a preconditioner to the Hessian of the minimizing function, combined with minimization of the conjugacy condition, in order to shift the large eigenvalues of the self-scaling memoryless BFGS matrix to the left. The stepsize is determined by the Wolfe line search conditions. Global convergence of the method is proved under the assumption that the minimizing function is uniformly convex. Preliminary computational experiments on a set of 80 unconstrained optimization test functions show that this algorithm is more efficient and more robust than the self-scaling BFGS updates of Oren and Luenberger and of Oren and Spedicato. With respect to the CPU time metric, CG-DESCENT is the top performer. Comparisons with L-BFGS show that our algorithm is more efficient.
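The iteration described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact method: the memoryless BFGS matrix is applied matrix-free with the first two terms scaled by `tau` and the third by `gamma`, but the default choice used here for both parameters is the classical Oren–Spedicato scaling `(s'y)/(y'y)` (an assumption for illustration; the paper derives its own eigenvalue-clustering and conjugacy-based formulas), and the line search is a plain Armijo backtracking rather than a full Wolfe search.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, alpha=1.0):
    """Simple Armijo backtracking -- a stand-in for the full Wolfe
    line search used in the paper."""
    fx, gd = f(x), grad(x) @ d
    for _ in range(50):
        if f(x + alpha * d) <= fx + c1 * alpha * gd:
            return alpha
        alpha *= 0.5
    return alpha

def sml_bfgs(f, grad, x0, tau_fn=None, gamma_fn=None, tol=1e-8, max_iter=500):
    """Illustrative double parameter self-scaling memoryless BFGS sketch.

    The search direction is d = -H g, where (matrix-free)
      H = tau * (I - (s y' + y s') / (y's))
        + gamma * (1 + y'y / y's) * (s s') / (y's),
    i.e. the first two terms of the memoryless BFGS matrix are scaled
    by tau and the third by gamma. tau_fn / gamma_fn map (s, y) to a
    positive scalar; the defaults below are an assumption, not the
    paper's parameter choices.
    """
    if tau_fn is None:
        tau_fn = lambda s, y: (s @ y) / (y @ y)   # Oren--Spedicato scaling
    if gamma_fn is None:
        gamma_fn = lambda s, y: (s @ y) / (y @ y)

    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_line_search(f, grad, x, d)
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        ys = y @ s
        if ys > 1e-12:                            # curvature safeguard
            tau, gamma = tau_fn(s, y), gamma_fn(s, y)
            Hg = (tau * (g_new - (s * (y @ g_new) + y * (s @ g_new)) / ys)
                  + gamma * (1.0 + (y @ y) / ys) * ((s @ g_new) / ys) * s)
            d = -Hg
        else:
            d = -g_new                            # restart on bad curvature
        x, g = x_new, g_new
    return x
```

With `tau_fn == gamma_fn` the matrix is a positive multiple of the standard memoryless BFGS matrix, so it is positive definite whenever `y's > 0` and every direction is a descent direction.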




Updated: 2020-06-02