New conjugate gradient algorithms based on self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method

Abstract

Three new procedures for computing the scaling parameter in the self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) search direction are presented. The first two are based on clustering the eigenvalues of the self-scaling memoryless BFGS iteration matrix by using the determinant or the trace of this matrix. The third is based on minimizing the measure function of Byrd and Nocedal (SIAM J Numer Anal 26:727–739, 1989). For all three algorithms the sufficient descent condition is established. The stepsize is computed by the standard Wolfe line search, under which the global convergence of these algorithms is also established. On a set of 80 unconstrained optimization test problems with different structures and complexities, it is shown that the self-scaling memoryless algorithms based on the determinant or on the trace of the iteration matrix, or on minimizing the measure function, perform better than CG_DESCENT (version 1.4) with Wolfe line search (Hager and Zhang in SIAM J Optim 16:170–192, 2005), the self-scaling memoryless BFGS algorithms with the scaling parameters proposed by Oren and Spedicato (Math Program 10:70–90, 1976) and by Oren and Luenberger (Manag Sci 20:845–862, 1974), L-BFGS by Liu and Nocedal (Math Program 45:503–528, 1989), and the standard BFGS. The self-scaling memoryless algorithm based on minimizing the measure function of Byrd and Nocedal slightly outperforms the variants based on the determinant or on the trace of the iteration matrix.
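
For readers without access to the full text, the following is a minimal sketch of the standard self-scaling memoryless BFGS framework the abstract refers to. The notation ($s_k$, $y_k$, $\tau_k$) and the convention of writing the memoryless update as a scaled identity plus a rank-two correction are taken from the general literature, not from the paper itself, and the paper's three new formulas for $\tau_k$ are not reproduced here. With $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$ and a scaling parameter $\tau_k > 0$, the self-scaling memoryless BFGS approximation to the inverse Hessian and the resulting search direction read

$$
H_{k+1} \;=\; \tau_k\left(I - \frac{s_k y_k^{T} + y_k s_k^{T}}{y_k^{T} s_k}\right)
\;+\; \left(1 + \tau_k\,\frac{y_k^{T} y_k}{y_k^{T} s_k}\right)\frac{s_k s_k^{T}}{y_k^{T} s_k},
\qquad d_{k+1} = -H_{k+1}\, g_{k+1}.
$$

Because the correction to $\tau_k I$ acts only on the subspace spanned by $s_k$ and $y_k$, $n-2$ eigenvalues of $H_{k+1}$ equal $\tau_k$; choosing $\tau_k$ through $\det(H_{k+1})$ or $\operatorname{tr}(H_{k+1})$ is therefore a natural device for clustering the two remaining eigenvalues, which is the idea behind the first two procedures. Classical scaling parameters from the self-scaling literature (see [34, 36]) include $\tau_k = y_k^{T} s_k/\|y_k\|^{2}$ and $\tau_k = \|s_k\|^{2}/(y_k^{T} s_k)$. The measure function of Byrd and Nocedal [14], minimized by the third procedure, is $\psi(A) = \operatorname{tr}(A) - \ln\det(A)$, defined for symmetric positive definite matrices $A$.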

References

  1. Al-Baali, M.: Numerical experience with a class of self-scaling quasi-Newton algorithms. J. Optim. Theory Appl. 96(3), 533–553 (1998)

  2. Andrei, N.: Critica Raţiunii Algoritmilor de Optimizare fără Restricţii. [Criticism of the Unconstrained Optimization Algorithms Reasoning]. Editura Academiei Române, Bucureşti (2009)

  3. Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213(2), 361–369 (2009)

  4. Andrei, N.: A new three-term conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 68, 305–321 (2015)

  5. Andrei, N.: An adaptive conjugate gradient algorithm for large-scale unconstrained optimization. J. Comput. Appl. Math. 292, 83–91 (2016)

  6. Andrei, N.: Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimization. Optim. Methods Softw. 32(3), 534–551 (2017)

  7. Andrei, N.: UOP—a collection of 80 unconstrained optimization test problems. Technical Report No. 7/2018, November 17, Research Institute for Informatics, Bucharest, Romania (2018)

  8. Axelsson, O., Lindskog, G.: On the rate of convergence of the preconditioned conjugate gradient methods. Numer. Math. 48, 499–523 (1986)

  9. Babaie-Kafaki, S.: On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae. J. Optim. Theory Appl. 167(1), 91–101 (2015)

  10. Babaie-Kafaki, S.: A modified scaling parameter for the memoryless BFGS updating formula. Numer. Algorithms 72(2), 425–433 (2016)

  11. Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43(2), 117–128 (2001)

  12. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM TOMS 21, 123–160 (1995)

  13. Broyden, C.G.: The convergence of a class of double-rank minimization algorithms. I. General considerations. J. Inst. Math. Appl. 6, 76–90 (1970)

  14. Byrd, R., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26, 727–739 (1989)

  15. Dai, Y.H.: Chapter 8: convergence analysis of nonlinear conjugate gradient methods. In: Wang, Y., Yagola, A.G., Yang, C. (eds.) Optimization and Regularization for Computational Inverse Problems and Applications, pp. 157–181. Higher Education Press, Beijing and Springer, Berlin (2010)

  16. Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)

  17. Dai, Y.H., Liao, L.Z.: New conjugate conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)

  18. Dennis, J.E., Wolkowicz, H.: Sizing and least-change secant methods. SIAM J. Numer. Anal. 30(5), 1291–1314 (1993)

  19. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  20. Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13, 317–322 (1970)

  21. Fletcher, R.: A new variational result for quasi-Newton formulae. SIAM J. Optim. 1, 18–21 (1991)

  22. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)

  23. Goldfarb, D.: A family of variable metric methods derived by variational means. Math. Comput. 24, 23–26 (1970)

  24. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)

  25. Hager, W.W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)

  26. Hager, W.W., Zhang, H.: The limited memory conjugate gradient method. SIAM J. Optim. 23(4), 2150–2168 (2013)

  27. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)

  28. Kaporin, I.E.: New convergence results and preconditioning strategies for the conjugate gradient methods. Numer. Linear Algebra Appl. 1, 179–210 (1994)

  29. Kratzer, D., Parter, S.V., Steuerwalt, M.: Block splittings for the conjugate gradient method. Comput. Fluids 11, 255–279 (1983)

  30. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)

  31. Luenberger, D.G.: Linear and Nonlinear Programming, 2nd edn. Addison-Wesley Publishing Company, Reading (1984)

  32. Nocedal, J.: Updating quasi-Newton matrices with limited storage. Math. Comput. 35, 773–782 (1980)

  33. Nocedal, J., Yuan, Y.X.: Analysis of a self-scaling quasi-Newton method. Math. Program. 61, 19–37 (1993)

  34. Oren, S.S., Spedicato, E.: Optimal conditioning of self-scaling variable metric algorithms. Math. Program. 10, 70–90 (1976)

  35. Oren, S.S.: Self-scaling variable metric (SSVM) algorithms. Part II: implementation and experiments. Manag. Sci. 20(5), 863–874 (1974)

  36. Oren, S.S., Luenberger, D.G.: Self-scaling variable metric (SSVM) algorithms. Part I: criteria and sufficient conditions for scaling a class of algorithms. Manag. Sci. 20, 845–862 (1974)

  37. Perry, A.: A class of conjugate gradient algorithms with two step variable metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, Evanston, IL, USA (1977)

  38. Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Griffiths, D.F. (ed.) Numerical Analysis (Dundee, 1983), Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)

  39. Shanno, D.F.: Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24, 647–656 (1970)

  40. Shanno, D.F.: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3, 244–256 (1978)

  41. Winther, R.: Some superlinear convergence results for the conjugate gradient method. SIAM J. Numer. Anal. 17, 14–17 (1980)

  42. Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)

  43. Wolfe, P.: Convergence conditions for ascent methods. II: some corrections. SIAM Rev. 13, 185–188 (1971)

  44. Zoutendijk, G.: Nonlinear programming, computational methods. In: Abadie, J. (ed.) Integer and Nonlinear Programming, pp. 38–86. North-Holland, Amsterdam (1970)

Author information

Corresponding author

Correspondence to Neculai Andrei.

About this article

Cite this article

Andrei, N. New conjugate gradient algorithms based on self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method. Calcolo 57, 17 (2020). https://doi.org/10.1007/s10092-020-00365-7
