
A performance bound of the multi-output extreme learning machine classifier

  • Regular Research Paper
  • Published in Memetic Computing

Abstract

This paper concerns the estimation of the generalization performance of the multi-output extreme learning machine classifier (M-ELM) within the framework of statistical learning theory. The performance bound is derived under the assumption that the expectation of the extreme learning machine kernel exists. We first show that minimizing the least-squares error is equivalent to minimizing an upper bound on the margin error of M-ELM on the training set, which implies that M-ELM classifies the training set with high confidence after training. We then derive the bound from the margin of M-ELM and the empirical Rademacher complexity. The bound not only gives a theoretical explanation of the good performance of M-ELM, especially in small-sample cases, but also shows that the performance of M-ELM is insensitive to the number of hidden nodes, which is consistent with previous experimental results. The bound further indicates that the performance of M-ELM is not significantly affected by the number of classes, which supports the effectiveness of the learning process of M-ELM.
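For orientation, bounds of the kind the abstract describes typically combine a margin error term on the training set with an empirical Rademacher complexity term. A schematic form, standard in statistical learning theory, is shown below; the paper's exact constants, complexity term, and dependence on the number of classes differ and are precisely what its analysis establishes:

$$\Pr\big[\text{misclassification}\big] \;\le\; \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\!\left[\,m_f(x_i, y_i) \le \rho\,\right] \;+\; \frac{C}{\rho}\,\widehat{\mathfrak{R}}_n(\mathcal{F}) \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}},$$

where $m_f(x, y) = f_y(x) - \max_{y' \ne y} f_{y'}(x)$ is the multi-class margin, $\widehat{\mathfrak{R}}_n(\mathcal{F})$ is the empirical Rademacher complexity of the function class, and the bound holds with probability at least $1 - \delta$.

The training procedure such a bound analyzes is the standard ELM recipe: hidden-layer parameters are drawn at random and never updated, and only the output weights are fit, by least squares in closed form. The Python sketch below is a minimal generic multi-output ELM, not the authors' exact setup; the sigmoid activation, the ridge parameter `reg`, and the one-hot target encoding are assumptions made for illustration.

```python
import numpy as np

def train_melm(X, Y, n_hidden=200, reg=1e-3, seed=None):
    """Multi-output ELM: random hidden layer, least-squares output weights.

    X: (n_samples, n_features) inputs; Y: (n_samples, n_classes) one-hot targets.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix (sigmoid)
    # Closed-form ridge-regularized least squares: beta = (H'H + reg*I)^{-1} H'Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_melm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)               # predicted class: largest output component

# Usage: W, b, beta = train_melm(X_train, Y_onehot); y_hat = predict_melm(X_test, W, b, beta)
```

The margin the paper studies is then the gap between the winning component of `H @ beta` and the runner-up; the abstract's first result says that driving the least-squares error down also drives this margin error term down.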



Acknowledgements

This work was supported by the project Study of the Hail Potential Forecasting Techniques Based on Data Mining in Tianjin (Grant No. 2016FH-0011).

Author information


Corresponding author

Correspondence to Di Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, D., Wang, P. & Shi, J. A performance bound of the multi-output extreme learning machine classifier. Memetic Comp. 11, 297–304 (2019). https://doi.org/10.1007/s12293-018-0270-9

