
A novel semi-supervised support vector machine with asymmetric squared loss

  • Regular Article
  • Published in: Advances in Data Analysis and Classification

Abstract

The Laplacian support vector machine (LapSVM), built on the semi-supervised manifold regularization learning framework, outperforms the standard SVM, especially when supervised information is scarce. However, its use of the hinge loss makes LapSVM sensitive to noise around the decision boundary. To enhance the performance of LapSVM, we present a novel semi-supervised SVM with an asymmetric squared loss (asy-LapSVM), which works with the expectile distance and is less sensitive to noise-corrupted data. We further present a simple and efficient functional iterative method to solve the proposed asy-LapSVM, prove its convergence theoretically, and verify it experimentally. Numerical experiments on a number of commonly used datasets corrupted with noise of different variances demonstrate the validity of the proposed asy-LapSVM and the feasibility of the presented functional iterative method.
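The asymmetric squared loss behind asy-LapSVM weights positive and negative margin residuals differently, in the spirit of expectile regression: errors on one side of zero are penalized more heavily than on the other, and the squared form keeps the loss smooth away from the decision boundary. A minimal sketch of such an expectile-style loss follows; the function name, the asymmetry parameter `p`, and the exact formulation are illustrative assumptions based on the standard asymmetric least-squares loss, not the paper's own notation:

```python
import numpy as np

def asymmetric_squared_loss(margin_residual, p=0.7):
    """Expectile-style asymmetric squared loss (illustrative sketch).

    margin_residual: u = 1 - y * f(x), positive when the sample is
        on the wrong side of (or inside) the margin.
    p: asymmetry parameter in (0, 1); p = 0.5 recovers the ordinary
        symmetric squared loss.
    """
    u = np.asarray(margin_residual, dtype=float)
    # Penalize positive residuals (margin violations) with weight p,
    # negative residuals with weight 1 - p.
    weight = np.where(u > 0, p, 1.0 - p)
    return weight * u ** 2
```

With `p > 0.5` this penalizes margin violations more than correctly classified points far from the boundary, which is one way an asymmetric squared loss can reduce sensitivity to noise near the decision boundary compared with the hinge loss.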


Notes

  1. http://archive.ics.uci.edu/ml/index.php.

  2. http://people.cs.uchicago.edu/~vikass/manifoldregularization.html.

  3. http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php.

  4. http://www-i6.informatik.rwth-aachen.de/~keysers/usps.html.

  5. http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html.

  6. http://archive.ics.uci.edu/ml/datasets/Multiple+Features.

  7. http://lms.comp.nus.edu.sg/research/NUS-WIDE.html.


Acknowledgements

The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.

Author information


Corresponding author

Correspondence to Ping Zhong.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Pei, H., Lin, Q., Yang, L. et al. A novel semi-supervised support vector machine with asymmetric squared loss. Adv Data Anal Classif 15, 159–191 (2021). https://doi.org/10.1007/s11634-020-00390-y

