
Performance guarantees of transformed Schatten-1 regularization for exact low-rank matrix recovery

Original Article · International Journal of Machine Learning and Cybernetics

Abstract

Low-rank matrix recovery aims to recover a matrix of minimum rank subject to linear system constraints. It arises in various real-world applications, such as recommender systems, image processing, and deep learning. Inspired by compressive sensing, the rank minimization problem can be relaxed to nuclear norm minimization. However, this relaxation treats all singular values of the target matrix equally. To address this issue, the transformed Schatten-1 (TS1) penalty function was recently proposed and used to construct low-rank matrix recovery models. Unfortunately, existing methods for TS1-based models cannot deliver both high convergence accuracy and fast convergence. To alleviate these problems, this paper further investigates the basic properties of the TS1 penalty function, and we describe a novel algorithm, called ATS1PGA, that solves low-rank matrix recovery problems efficiently with a convergence rate of O(1/N), where N denotes the iteration count. In addition, we theoretically prove that, under certain conditions, the original rank minimization problem can be equivalently transformed into the TS1 optimization problem. Finally, extensive experimental results on real image data sets show that our proposed algorithm outperforms state-of-the-art methods in both accuracy and efficiency; in particular, it is about 30 times faster than the TS1 algorithm in solving low-rank matrix recovery problems.



Acknowledgements

This work is supported in part by the Natural Science Foundation of China under Grant 11901476, and in part by the Fundamental Research Funds for the Central Universities under Grant SWU120036.

Author information


Corresponding authors

Correspondence to Zhi Wang or Wu Chen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, Z., Hu, D., Luo, X. et al. Performance guarantees of transformed Schatten-1 regularization for exact low-rank matrix recovery. Int. J. Mach. Learn. & Cyber. 12, 3379–3395 (2021). https://doi.org/10.1007/s13042-021-01361-1

