
Dual Global Structure Preservation Based Supervised Feature Selection


Abstract

Recent literature indicates that preserving the global structure of the data is important for sparse-representation-based supervised feature selection. However, the features selected when preserving different global structures often differ, and it is not yet known which global structure is best, so it is unclear which feature selection result should be trusted. A likely reason is that no single global structure carries enough information about the data, since the distribution of real-life data is very complex. To overcome this problem, this paper proposes a dual global structure preservation based supervised feature selection (DGSPSFS) method. In DGSPSFS, a supervised dimensionality reduction method based on manifold learning is used to compute the response matrix, which encodes more information about the data, and a new sparse representation framework is proposed that preserves two global structures at the same time, combining two response matrices so that the information in the data is used more fully. As a result, the selected features carry more information. A comprehensive experimental study compares the proposed feature selection algorithm with many state-of-the-art methods in supervised learning scenarios, and the experiments validate its effectiveness.
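The abstract describes a sparse-regression framework that fits two response matrices simultaneously and ranks features by the row sparsity of the resulting projection matrix. The sketch below illustrates that general idea only; it is not the authors' DGSPSFS algorithm. The objective, the l2,1 regularizer, the hyperparameters alpha and beta, and the iterative-reweighting solver are all assumptions made for illustration.

```python
import numpy as np

def dual_structure_feature_selection(X, Y1, Y2, alpha=1.0, beta=1.0, n_iters=50):
    """Illustrative sketch (not the paper's exact method): rank features by solving
        min_W ||X W - Y1||_F^2 + alpha * ||X W - Y2||_F^2 + beta * ||W||_{2,1}
    with the standard iterative-reweighting scheme for the l2,1 norm.
    X: (n_samples, n_features); Y1, Y2: (n_samples, c) response matrices."""
    d = X.shape[1]
    D = np.eye(d)                      # reweighting matrix for the l2,1 term
    XtX = X.T @ X
    XtY = X.T @ (Y1 + alpha * Y2)
    for _ in range(n_iters):
        # With D fixed, W has the closed form
        # ((1 + alpha) X^T X + beta D) W = X^T (Y1 + alpha Y2)
        W = np.linalg.solve((1.0 + alpha) * XtX + beta * D, XtY)
        row_norms = np.linalg.norm(W, axis=1) + 1e-8
        D = np.diag(1.0 / (2.0 * row_norms))   # refresh D from current row norms
    scores = np.linalg.norm(W, axis=1)         # row norms of W score the features
    return np.argsort(scores)[::-1]            # feature indices, most informative first
```

In this sketch, Y1 could be, for example, a one-hot class-label matrix and Y2 an embedding produced by a supervised manifold-learning method; the paper's actual construction of the response matrices and its optimization procedure differ.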





Acknowledgements

The authors thank the members of the Machine Learning and Artificial Intelligence Laboratory, School of Computer Science and Technology, Wuhan University of Science and Technology, for their helpful discussions in seminars. This work was supported by the Zhejiang Provincial Natural Science Foundation (Nos. LQ18F020006, LQ18F020007), the National Natural Science Foundation of China (Nos. 61972299, U1803262, 61602349, 61702381), the Jiaxing National Science Foundation (Nos. 2016AY13013, 2018AY11001), and the National Natural Science Foundation of Hubei University of Arts and Sciences (Nos. 931608, 2059060). Xiaolong Zhang is the corresponding author of this paper.

Author information

Corresponding author

Correspondence to Xiaolong Zhang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ye, Q., Zhang, X. & Sun, Y. Dual Global Structure Preservation Based Supervised Feature Selection. Neural Process Lett 51, 2765–2787 (2020). https://doi.org/10.1007/s11063-020-10225-8

