
Dynamic search trajectory methods for global optimization

Annals of Mathematics and Artificial Intelligence

Abstract

A detailed review of dynamic search trajectory methods for global optimization is given. In addition, a family of dynamic search trajectory methods, created by applying numerical methods for solving autonomous ordinary differential equations, is presented. Furthermore, a strategy for developing globally convergent methods that is applicable to the proposed family is given, and the corresponding theorem is proved. Finally, theoretical results are given for obtaining nonmonotone convergent methods that exploit the accumulated information about the most recent values of the objective function.
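To make the construction concrete, the following is a minimal sketch, not the paper's exact scheme, of one member of such a family: the explicit Euler method applied to the autonomous gradient-flow equation x'(t) = -∇f(x(t)), combined with a nonmonotone acceptance test in the spirit of Grippo, Lampariello and Lucidi that compares each trial point against the worst of the most recent objective values rather than the current one. All function names, step-size constants, and the window length `memory` are illustrative assumptions.

```python
import numpy as np

def trajectory_minimize(f, grad, x0, h=0.01, memory=10,
                        max_iter=50_000, tol=1e-6):
    """Follow the gradient-flow trajectory x'(t) = -grad f(x(t)) by
    explicit Euler steps; accept a step only if it passes a nonmonotone
    Armijo-type test against the worst of the last `memory` f-values."""
    x = np.asarray(x0, dtype=float)
    recent = [f(x)]                      # sliding window of recent f-values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # approximate stationary point reached
            break
        step = h
        while True:
            x_new = x - step * g        # one Euler step along the trajectory
            # Nonmonotone acceptance: compare with the max over the window,
            # so temporary increases of the objective are tolerated.
            if f(x_new) <= max(recent) - 1e-4 * step * g.dot(g):
                break
            step *= 0.5                 # shrink the pseudo time step
            if step < 1e-16:
                return x                # no acceptable step; give up
        x = x_new
        recent.append(f(x))
        if len(recent) > memory:
            recent.pop(0)
    return x

# Usage example: the two-dimensional Rosenbrock function, minimum at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(trajectory_minimize(f, grad, x0=[-1.2, 1.0]))
```

Replacing the Euler step with a higher-order integrator changes how faithfully the iterates track the continuous trajectory, while the nonmonotone window controls how much the method may climb before it must make progress.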

Acknowledgements

The authors wish to thank the three anonymous reviewers for their helpful comments. S.-A. N. Alexandropoulos is supported by Greece and the European Union (European Social Fund-ESF) through the Operational Programme “Human Resources Development, Education and Lifelong Learning” in the context of the project “Strengthening Human Resources Research Potential via Doctorate Research” (MIS-5000432), implemented by the State Scholarships Foundation (IKY). P. M. Pardalos is supported by the Paul and Heidi Brown Preeminent Professorship at ISE (University of Florida, USA), and a Humboldt Research Award (Germany).

Author information

Correspondence to Michael N. Vrahatis.

Cite this article

Alexandropoulos, S.-A.N., Pardalos, P.M. & Vrahatis, M.N. Dynamic search trajectory methods for global optimization. Ann Math Artif Intell 88, 3–37 (2020). https://doi.org/10.1007/s10472-019-09661-7
