New conjugate gradient algorithms based on self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method
Calcolo ( IF 1.7 ) Pub Date : 2020-05-18 , DOI: 10.1007/s10092-020-00365-7
Neculai Andrei

Three new procedures for computing the scaling parameter in the self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) search direction with a parameter are presented. The first two are based on clustering the eigenvalues of the self-scaling memoryless BFGS iteration matrix with a parameter, using either the determinant or the trace of this matrix. The third is based on minimizing the measure function of Byrd and Nocedal (SIAM J Numer Anal 26:727–739, 1989). For all three algorithms the sufficient descent condition is established. The stepsize is computed with the standard Wolfe line search, under which the global convergence of these algorithms is established. On 80 unconstrained optimization test problems with different structures and complexities, the self-scaling memoryless algorithms based on the determinant, on the trace of the iteration matrix, or on minimizing the measure function perform better than CG_DESCENT (version 1.4) with Wolfe line search (Hager and Zhang, SIAM J Optim 16:170–192, 2005), the self-scaling memoryless BFGS algorithms with the scaling parameters proposed by Oren and Spedicato (Math Program 10:70–90, 1976) and by Oren and Luenberger (Manag Sci 20:845–862, 1974), L-BFGS by Liu and Nocedal (Math Program 45:503–528, 1989), and the standard BFGS. The self-scaling memoryless algorithm based on minimizing the measure function of Byrd and Nocedal slightly outperforms the same algorithms based on the determinant or the trace of the iteration matrix.
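To make the setting concrete, the sketch below shows the classical self-scaling memoryless BFGS search direction, i.e. the direction d = -H g obtained by one BFGS update of the scaled identity τI from the most recent step pair (s, y), together with the Oren–Spedicato and Oren–Luenberger scaling parameters cited in the abstract. This is only a minimal illustration of the standard construction; the paper's three new scaling rules (determinant- and trace-based eigenvalue clustering, and minimization of the Byrd–Nocedal measure function) are not reproduced here, and all function names are ours.

```python
import numpy as np

def ssml_bfgs_direction(g, s, y, tau):
    """Self-scaling memoryless BFGS direction d = -H g, where
    H = (I - s y^T / s^T y) (tau I) (I - y s^T / s^T y) + s s^T / s^T y
    is the BFGS update of the scaled identity tau*I from the pair (s, y)."""
    sy = s @ y  # curvature s^T y, positive under the Wolfe line search
    sg = s @ g
    yg = y @ g
    return (-tau * g
            + (tau * sg / sy) * y
            + (tau * yg / sy - (1.0 + tau * (y @ y) / sy) * sg / sy) * s)

def tau_oren_spedicato(s, y):
    """Oren-Spedicato scaling: tau = s^T y / y^T y."""
    return (s @ y) / (y @ y)

def tau_oren_luenberger(s, y):
    """Oren-Luenberger scaling: tau = s^T s / s^T y."""
    return (s @ s) / (s @ y)
```

Because H is positive definite whenever s^T y > 0 and τ > 0 (both guaranteed here under the Wolfe conditions), the resulting direction satisfies d^T g < 0, i.e. it is a descent direction; the paper's contribution is choosing τ so that, in addition, the sufficient descent condition holds and the eigenvalues of the iteration matrix are well clustered.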
