
An Effective Principal Singular Triplets Extracting Neural Network Algorithm

Published in Neural Processing Letters

Abstract

In this paper, we propose an effective neural network algorithm to perform singular value decomposition (SVD) of the cross-correlation matrix between two data streams. Unlike traditional algorithms, the proposed algorithm extracts not only the principal singular vectors but also the corresponding principal singular values. First, a dynamical system is derived from the gradient flow obtained by optimizing a novel information criterion. Then, based on this dynamical system, a stable neural network algorithm that extracts the left and right principal singular vectors is obtained. Moreover, by enforcing orthogonality instead of orthonormality, the normalization scale factor can be extracted as the corresponding singular value. In this way, the principal singular triplet (PST; the principal singular vectors together with the corresponding singular value) of the cross-correlation matrix can be extracted by the proposed algorithm. Furthermore, the algorithm can also extract multiple PSTs by means of the sequential method. Convergence analysis shows that the proposed algorithm converges to the stable equilibrium point with probability 1. Finally, experimental results show that the proposed algorithm is fast and stable in convergence, and can extract multiple PSTs efficiently.




References

  1. Cichocki A (1992) Neural network for singular value decomposition. Electron Lett 28(8):784–786

  2. Cichocki A, Unbehauen R (1992) Neural networks for computing eigenvalues and eigenvectors. Biol Cybern 68(2):155–164

  3. Comon P, Golub GH (1990) Tracking a few extreme singular values and vectors in signal processing. Proc IEEE 78(8):1327–1343

  4. Diamantaras KI, Kung SY (1994) Cross-correlation neural network models. IEEE Trans Signal Process 42(11):3218–3223

  5. Fei SW (2017) Fault diagnosis of bearing based on wavelet packet transform-phase space reconstruction-singular value decomposition and SVM classifier. Arab J Sci Eng 42(5):1967–1975

  6. Feng DZ, Bao Z, Shi WX (1998) Cross-correlation neural network models for the smallest singular component of general matrix. Signal Process 64(3):333–346

  7. Feng DZ, Bao Z, Zhang XD (2001) A cross-associative neural network for SVD of non-squared data matrix in signal processing. IEEE Trans Neural Netw 12(5):1215–1221

  8. Feng DZ, Zhang XD, Bao Z (2004) A neural network learning for adaptively extracting cross-correlation features between two high-dimensional data streams. IEEE Trans Neural Netw 15(6):1541–1554

  9. Feng XW, Kong XY, Xu DH, Qin JQ (2017) A fast and effective principal singular subspace tracking algorithm. Neurocomputing 267:201–209

  10. Fiori S (2003) Singular value decomposition learning on double Stiefel manifold. Int J Neural Syst 13(3):155–170

  11. Gaaf SW, Simoncini V (2017) Approximating the leading singular triplets of a large matrix function. Appl Numer Math 113:26–43

  12. Hasan MA (2010) Low rank approximation of a set of matrices. In: Proceedings of 2010 IEEE international symposium on circuits and systems (ISCAS). IEEE, pp 3517–3520

  13. Hasan MA (2008) A logarithmic cost function for principal singular component analysis. In: IEEE international conference on acoustics, speech and signal processing (ICASSP 2008). IEEE, pp 1933–1936

  14. Hasan MA (2008) Low-rank approximations with applications to principal singular component learning systems. In: 47th IEEE conference on decision and control (CDC 2008). IEEE, pp 3293–3298

  15. Hori G (2003) A general framework for SVD flows and joint SVD flows. In: IEEE international conference on acoustics, speech, and signal processing (ICASSP'03), vol 2. IEEE, pp II-693

  16. Jain P, Tyagi V (2016) An adaptive edge-preserving image denoising using block-based singular value decomposition in wavelet domain. Springer Singapore

  17. Kaiser AH, Schenck W, Möller R (2010) Coupled singular value decomposition of a cross-correlation matrix. Int J Neural Syst 20(4):293–318

  18. Kong XY, Ma HG, An QS, Zhang Q (2014) An effective neural learning algorithm for extracting cross-correlation feature between two high-dimensional data streams. Neural Process Lett 42:459–477

  19. Lei L, Kok KT, Tong HL (2014) SVD-based accurate identification and compensation of the coupling hysteresis and creep dynamics in piezoelectric actuators. Asian J Control 16(1):59–69

  20. Che ML, Wei YM (2019) Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Adv Comput Math 45:395–428

  21. Moonen M, Dooren PV, Vandewalle J (1992) A singular value decomposition updating algorithm for subspace tracking. SIAM J Matrix Anal Appl 13(4):1015–1038

  22. Moore J, Mahony R, Helmke U (1994) Numerical gradient algorithms for eigenvalue and singular value calculations. SIAM J Matrix Anal Appl 15(3):881–902

  23. Niu D, Meng J (2016) Improving approximate singular triplets in Lanczos bidiagonalization method. Taiwanese J Math 20(4):943–956

  24. Qian K, Zhou HX, Rong SH, Wang BJ, Cheng KH (2017) Infrared dim-small target tracking via singular value decomposition and improved kernelized correlation filter. Infrared Phys Technol 82:18–27

  25. Wang JW, Le NT, Lee JS, Wang CC (2016) Color face image enhancement using adaptive singular value decomposition in Fourier domain for face recognition. Pattern Recognit 57:31–49

  26. Wang XZ, Che ML, Wei YM (2016) Recurrent neural network for computation of generalized eigenvalue problem with real diagonalizable matrix pair and its applications. Neurocomputing 216:230–241

  27. Wang XZ, Che ML, Wei YM (2017) Complex-valued neural networks for the Takagi vector of complex symmetric matrices. Neurocomputing 223:77–85

  28. Feng XW, Kong XY, Ma HG (2016) Coupled cross-correlation neural network algorithm for principal singular triplet extraction of a cross-correlation matrix. IEEE/CAA J Autom Sinica 3(2):149–156

  29. Xie PP, Xiang H, Wei YM (2018) Randomized algorithms for total least squares problems. Numer Linear Algebra Appl 26(6):e2219

  30. Wei YM, Xie PP, Zhang LP (2016) Tikhonov regularization and randomized GSVD. SIAM J Matrix Anal Appl 37(2):649–675

  31. Zhang LP, Wei YM, Chu KW (2020) Neural network for computing GSVD and RSVD. Neurocomputing 444:59–66


Author information

Corresponding author

Correspondence to Xiangyu Kong.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by the National Natural Science Foundation of China under Grants 61903375 and 61673387, and in part by the Natural Science Foundation of Shaanxi Province under Grant 2020JM-356.

Computational Complexity

Here we discuss the computational complexity of the proposed algorithm in comparison with the earlier algorithms of Hasan [13], Kaiser et al. [17], Kong et al. [18] and Feng et al. [28].

Clearly, computing the product of two matrices with dimensions \(m\times n\) and \(n\times l\) requires a total of \(m\times n\times l\) multiplications. Accordingly, computing \({\varvec{A}}(k){\varvec{v}}(k)\) or \({\varvec{A}}^T(k){\varvec{u}}(k)\) takes mn multiplications each, and computing \({\varvec{u}}^T(k){\varvec{A}}(k){\varvec{v}}(k)\) takes 2mn multiplications. Hence, the proposed algorithm requires \(mn+2m+n\) multiplications to update \({\varvec{u}}(k+1)\) and \(mn+m+2n\) multiplications to update \({\varvec{v}}(k+1)\), so its total computational complexity is \(2mn+3m+3n\) multiplications per iteration. The computational complexities of the other algorithms can be obtained similarly; a summary for all of the algorithms mentioned is listed in Table 1. The proposed algorithm has the lowest computational load, for two main reasons. First, unlike coupled algorithms, the proposed algorithm does not need the singular value estimate in order to update the singular vector estimates, so the singular value need not be computed at every step. Second, the proposed algorithm does not contain the term \({\varvec{u}}^T(k){\varvec{A}}(k){\varvec{v}}(k)\), which appears in all of the compared algorithms.
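As a sanity check on the operation counts above, the short sketch below tallies the multiplications for one iteration. Only the matrix-vector costs mn and the per-update overheads 2m+n and m+2n are taken from the text; the helper names are our own.

```python
def matvec_mults(m: int, n: int) -> int:
    """An m-by-n matrix times an n-vector costs m*n multiplications."""
    return m * n


def proposed_total(m: int, n: int) -> int:
    """Multiplications per iteration of the proposed algorithm."""
    # u-update: A(k) v(k) costs m*n, plus 2m + n further scalar multiplications
    u_cost = matvec_mults(m, n) + 2 * m + n
    # v-update: A^T(k) u(k) costs m*n, plus m + 2n further scalar multiplications
    v_cost = matvec_mults(m, n) + m + 2 * n
    return u_cost + v_cost


m, n = 30, 20  # dimensions used in the running-time experiment below
total = proposed_total(m, n)
assert total == 2 * m * n + 3 * m + 3 * n  # matches the closed form 2mn + 3m + 3n
print(total)  # → 1350
```

For m = 30 and n = 20 this gives 2(600) + 90 + 60 = 1350 multiplications per iteration, consistent with the closed-form expression.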

Table 1 Computational complexity and running time

Next, the running times of all the algorithms are obtained and shown in Table 1. In this simulation, similar to (41), a total of 100000 samples of Gaussian white sequences \({\varvec{x}}(k)\) and \({\varvec{y}}(k)\), with dimensions 30 and 20 respectively, are generated. The running time is averaged over 100 runs in MATLAB R2016b on a computer with a Core i7-10510U CPU and 16 GB of RAM.
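The shape of this experiment can be reproduced in outline. The sketch below is a hypothetical harness, not the paper's MATLAB code: the update rule is a generic cross-coupled Hebbian iteration standing in for the proposed algorithm, and the step size, seed, and reduced iteration count (the paper uses 100000 samples) are arbitrary choices of ours.

```python
import time

import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 20          # stream dimensions, as in the experiment
steps = 10_000         # reduced from 100000 for brevity

# Illustrative stand-in update: a cross-coupled Hebbian rule driven by the
# rank-one sample estimate x(k) y(k)^T of the cross-correlation matrix A.
u = rng.standard_normal(m)
v = rng.standard_normal(n)
eta = 1e-3             # arbitrary step size

t0 = time.perf_counter()
for _ in range(steps):
    x = rng.standard_normal(m)   # 30-dimensional Gaussian white sample
    y = rng.standard_normal(n)   # 20-dimensional Gaussian white sample
    Av = x * (y @ v)             # sample estimate of A v(k)
    ATu = y * (x @ u)            # sample estimate of A^T u(k)
    u += eta * (Av - (v @ v) * u)
    v += eta * (ATu - (u @ u) * v)
elapsed = time.perf_counter() - t0

print(f"{elapsed:.3f} s for {steps} updates")
```

Per-update cost is dominated by a handful of length-m and length-n inner products, which is why all five compared algorithms scale as O(mn) per step and differ mainly in constant factors.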

From Table 1, it is found that Hasan's algorithm has a similar computational complexity to, but a shorter running time than, the other compared algorithms. This is because Hasan's algorithm requires fewer additions than those algorithms. Moreover, the proposed algorithm costs the least time of all five algorithms. In conclusion, the proposed algorithm has both the lowest computational complexity and the shortest running time.


Cite this article

Feng, X., Kong, X., Xu, Z. et al. An Effective Principal Singular Triplets Extracting Neural Network Algorithm. Neural Process Lett 53, 2795–2811 (2021). https://doi.org/10.1007/s11063-021-10522-w
