Abstract
Three new procedures for computing the scaling parameter in the self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) search direction with a parameter are presented. The first two are based on clustering the eigenvalues of the self-scaling memoryless BFGS iteration matrix with a parameter, by using either the determinant or the trace of this matrix. The third is based on minimizing the measure function of Byrd and Nocedal (SIAM J Numer Anal 26:727–739, 1989). For all three algorithms the sufficient descent condition is established. The stepsize is computed with the standard Wolfe line search, under which the global convergence of these algorithms is established. On 80 unconstrained optimization test problems of different structures and complexities, it is shown that the self-scaling memoryless algorithms based on the determinant or the trace of the iteration matrix, or on minimizing the measure function, perform better than CG_DESCENT (version 1.4) with Wolfe line search (Hager and Zhang in SIAM J Optim 16:170–192, 2005), the self-scaling memoryless BFGS algorithms with the scaling parameters proposed by Oren and Spedicato (Math Program 10:70–90, 1976) and by Oren and Luenberger (Manag Sci 20:845–862, 1974), L-BFGS of Liu and Nocedal (Math Program 45:503–528, 1989), and the standard BFGS. The self-scaling memoryless algorithm based on minimizing the measure function of Byrd and Nocedal slightly outperforms those based on the determinant or the trace of the iteration matrix.
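For orientation, the following display recalls the standard forms, from the quasi-Newton literature, of the quantities named in the abstract. It is a sketch of the general framework only: the paper's three new choices of the scaling parameter $\tau_k$ are derived in the text and are not reproduced here.

With $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, the self-scaling memoryless BFGS iteration matrix with parameter $\tau_k$ and the corresponding search direction are

$$
H_{k+1} = \tau_k \left( I - \frac{s_k y_k^{T} + y_k s_k^{T}}{y_k^{T} s_k} \right)
+ \left( 1 + \tau_k \frac{y_k^{T} y_k}{y_k^{T} s_k} \right) \frac{s_k s_k^{T}}{y_k^{T} s_k},
\qquad d_{k+1} = -H_{k+1} g_{k+1},
$$

and the measure function of Byrd and Nocedal (1989), whose minimization underlies the third procedure, is

$$
\psi(A) = \operatorname{tr}(A) - \ln \det(A),
$$

defined for symmetric positive definite $A$ and minimized exactly at $A = I$; driving $\psi(H_{k+1})$ down therefore pushes the eigenvalues of the iteration matrix toward a cluster around one. In the memoryless setting, the classical baseline scalings in the comparison are commonly taken as $\tau_k = s_k^{T} y_k / (y_k^{T} y_k)$ (Oren and Spedicato) and $\tau_k = s_k^{T} s_k / (s_k^{T} y_k)$ (Oren and Luenberger).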
References
Al-Baali, M.: Numerical experience with a class of self-scaling quasi-Newton algorithms. J. Optim. Theory Appl. 96(3), 533–553 (1998)
Andrei, N.: Critica Raţiunii Algoritmilor de Optimizare fără Restricţii [Critique of the Reasoning of Unconstrained Optimization Algorithms]. Editura Academiei Române, Bucureşti (2009)
Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213(2), 361–369 (2009)
Andrei, N.: A new three-term conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 68, 305–321 (2015)
Andrei, N.: An adaptive conjugate gradient algorithm for large-scale unconstrained optimization. J. Comput. Appl. Math. 292, 83–91 (2016)
Andrei, N.: Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimization. Optim. Methods Softw. 32(3), 534–551 (2017)
Andrei, N.: UOP—a collection of 80 unconstrained optimization test problems. Technical Report No. 7/2018, November 17, Research Institute for Informatics, Bucharest, Romania (2018)
Axelsson, O., Lindskog, G.: On the rate of convergence of the preconditioned conjugate gradient method. Numer. Math. 48, 499–523 (1986)
Babaie-Kafaki, S.: On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae. J. Optim. Theory Appl. 167(1), 91–101 (2015)
Babaie-Kafaki, S.: A modified scaling parameter for the memoryless BFGS updating formula. Numer. Algorithms 72(2), 425–433 (2016)
Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43(2), 117–128 (2001)
Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
Broyden, C.G.: The convergence of a class of double-rank minimization algorithms. I. General considerations. J. Inst. Math. Appl. 6, 76–90 (1970)
Byrd, R., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26, 727–739 (1989)
Dai, Y.H.: Chapter 8: convergence analysis of nonlinear conjugate gradient methods. In: Wang, Y., Yagola, A.G., Yang, C. (eds.) Optimization and Regularization for Computational Inverse Problems and Applications, pp. 157–181. Higher Education Press, Beijing and Springer, Berlin (2010)
Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)
Dai, Y.H., Liao, L.Z.: New conjugate conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)
Dennis, J.E., Wolkowicz, H.: Sizing and least-change secant methods. SIAM J. Numer. Anal. 30(5), 1291–1314 (1993)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13, 317–322 (1970)
Fletcher, R.: A new variational result for quasi-Newton formulae. SIAM J. Optim. 1, 18–21 (1991)
Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)
Goldfarb, D.: A family of variable-metric methods derived by variational means. Math. Comput. 24, 23–26 (1970)
Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
Hager, W.W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)
Hager, W.W., Zhang, H.: The limited memory conjugate gradient method. SIAM J. Optim. 23(4), 2150–2168 (2013)
Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)
Kaporin, I.E.: New convergence results and preconditioning strategies for the conjugate gradient method. Numer. Linear Algebra Appl. 1, 179–210 (1994)
Kratzer, D., Parter, S.V., Steuerwalt, M.: Block splittings for the conjugate gradient method. Comput. Fluids 11, 255–279 (1983)
Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)
Luenberger, D.G.: Linear and nonlinear programming, 2nd edn. Addison-Wesley Publishing Company, Reading (1984)
Nocedal, J.: Updating quasi-Newton matrices with limited storage. Math. Comput. 35, 773–782 (1980)
Nocedal, J., Yuan, Y.X.: Analysis of a self-scaling quasi-Newton method. Math. Program. 61, 19–37 (1993)
Oren, S.S., Spedicato, E.: Optimal conditioning of self-scaling variable metric algorithms. Math. Program. 10, 70–90 (1976)
Oren, S.S.: Self-scaling variable metric (SSVM) algorithms. Part II: implementation and experiments. Manag. Sci. 20(5), 863–874 (1974)
Oren, S.S., Luenberger, D.G.: Self-scaling variable metric (SSVM) algorithms. Part I: criteria and sufficient conditions for scaling a class of algorithms. Manag. Sci. 20, 845–862 (1974)
Perry, A.: A class of conjugate gradient algorithms with two step variable metric memory. Discussion paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, Evanston, IL, USA (1977)
Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Griffiths, D.F. (ed.) Numerical Analysis (Dundee, 1983), Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)
Shanno, D.F.: Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24, 647–656 (1970)
Shanno, D.F.: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3, 244–256 (1978)
Winther, R.: Some superlinear convergence results for the conjugate gradient method. SIAM J. Numer. Anal. 17, 14–17 (1980)
Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)
Wolfe, P.: Convergence conditions for ascent methods. II: some corrections. SIAM Rev. 13, 185–188 (1971)
Zoutendijk, G.: Nonlinear programming, computational methods. In: Abadie, J. (ed.) Integer and Nonlinear Programming, pp. 38–86. North-Holland, Amsterdam (1970)
Cite this article
Andrei, N. New conjugate gradient algorithms based on self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method. Calcolo 57, 17 (2020). https://doi.org/10.1007/s10092-020-00365-7