
Vortex search optimization algorithm for training of feed-forward neural network

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

Training feed-forward neural networks (FNNs) is a challenging nonlinear task in supervised learning. Derivative-based learning methods are frequently inadequate for this task and incur high computational cost because of the large number of weight values that must be tuned. In this study, training a neural network is treated as an optimization problem, and the best values of the weights and biases in the FNN structure are determined by the Vortex Search (VS) algorithm. VS is a recently developed metaheuristic optimization method inspired by the vortical flow of stirred fluids. It performs the training task by searching for the optimal weights and biases, which are encoded as a matrix. In this context, the proposed VS-based learning method for FNNs (VS-FNN) evaluates the effectiveness of the VS algorithm in FNN training for the first time in the literature. The proposed method is applied to six datasets: 3-bit XOR, Iris, Wine, Wisconsin Breast Cancer, Pima Indians Diabetes, and Thyroid Disease. Its performance is compared with training methods based on Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Simulated Annealing (SA), the Genetic Algorithm (GA), and Stochastic Gradient Descent (SGD). The experimental results show that VS-FNN is generally competitive and often leading, indicating that it can serve as a capable tool for training neural networks.
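To make the idea concrete, the sketch below shows how a VS-style search can train a small one-hidden-layer FNN on the 3-bit XOR task, one of the paper's benchmarks. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`vortex_search_train`, `forward`) and hyperparameters are hypothetical, and a simple exponential decay stands in for the inverse-incomplete-gamma radius schedule used in the original VS algorithm.

```python
# Sketch: training a one-hidden-layer FNN with a VS-style search.
# All weights and biases are flattened into one vector; Gaussian candidates
# are sampled around the best solution so far with a shrinking radius.
import numpy as np

def forward(theta, X, n_in, n_hid, n_out):
    """Unpack the flat parameter vector and run the network."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def mse(theta, X, y, dims):
    return np.mean((forward(theta, X, *dims) - y) ** 2)

def vortex_search_train(X, y, n_hid=4, n_candidates=50, max_iter=500,
                        bound=5.0, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    dims = (n_in, n_hid, n_out)
    n_params = n_in * n_hid + n_hid + n_hid * n_out + n_out
    mu = np.zeros(n_params)                        # initial vortex center
    best, best_err = mu.copy(), mse(mu, X, y, dims)
    for t in range(max_iter):
        # Assumption: exponential decay replaces the paper's inverse
        # incomplete gamma radius schedule.
        radius = bound * np.exp(-4.0 * t / max_iter)
        cand = rng.normal(mu, radius, (n_candidates, n_params))
        cand = np.clip(cand, -bound, bound)        # stay in the search space
        errs = np.array([mse(c, X, y, dims) for c in cand])
        k = errs.argmin()
        if errs[k] < best_err:                     # keep the best ever seen
            best, best_err = cand[k].copy(), errs[k]
        mu = best                                  # recenter the vortex
    return best, best_err, dims

# Example: learn 3-bit XOR (parity), one of the paper's benchmark tasks.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)
theta, err, dims = vortex_search_train(X, y)
print("final MSE:", err)
print("predictions:", forward(theta, X, *dims).round(2).ravel())
```

The key design choice mirrors VS: a single vortex center is recentered on the best solution found so far while the sampling radius shrinks, moving the search from exploration toward exploitation without any gradient information.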





Author information


Correspondence to Tahir Sağ.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Sağ, T., Abdullah Jalil Jalil, Z. Vortex search optimization algorithm for training of feed-forward neural network. Int. J. Mach. Learn. & Cyber. 12, 1517–1544 (2021). https://doi.org/10.1007/s13042-020-01252-x

