
Low-rank factorization for rank minimization with nonconvex regularizers


Abstract

Rank minimization arises in machine learning applications such as recommender systems and robust principal component analysis. Minimizing the nuclear norm, the convex relaxation of the rank function, is an effective technique for solving the problem and comes with strong performance guarantees. However, nonconvex relaxations incur less estimation bias than the nuclear norm and can more accurately reduce the effect of noise on the measurements. We develop efficient algorithms based on iteratively reweighted nuclear norm schemes, while also utilizing the low-rank factorization for semidefinite programs put forth by Burer and Monteiro. We prove convergence and computationally demonstrate the advantages over convex relaxations and alternating minimization methods. Additionally, the per-iteration computational complexity of our algorithm is on par with other state-of-the-art algorithms, allowing us to quickly find solutions to the rank minimization problem for large matrices.
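The paper's algorithms are not reproduced on this page. As a rough, hypothetical illustration of the iteratively reweighted idea the abstract refers to, the sketch below alternates a gradient step on the data-fit term with a weighted singular value thresholding step, in the spirit of the IRNN scheme of Lu et al. [4], using the minimax concave penalty (MCP) of Zhang [33] as the nonconvex regularizer. The function names, parameter values, and the plain (unfactored) iteration are all assumptions made for illustration; in particular, the sketch omits the Burer–Monteiro factorization that gives the paper's method its scalability.

```python
import numpy as np

def mcp_derivative(sigma, lam=1.0, gamma=5.0):
    # Derivative of the minimax concave penalty (MCP): lam - sigma/gamma
    # below the knee at lam*gamma, and exactly zero beyond it, so large
    # singular values receive no shrinkage.
    return np.where(sigma <= lam * gamma, lam - sigma / gamma, 0.0)

def irnn_complete(M_obs, mask, n_iter=500, step=1.0):
    # M_obs : observed matrix (zeros outside the sampling set)
    # mask  : boolean array, True where entries are observed
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        # Gradient step on the data-fit term 0.5 * ||P_Omega(X - M)||_F^2.
        G = X - step * mask * (X - M_obs)
        # Weighted singular value thresholding: each singular value is
        # shrunk by its current MCP weight, then clipped at zero.
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        s = np.maximum(s - step * mcp_derivative(s), 0.0)
        X = (U * s) @ Vt
    return X

# Tiny demo: recover a random rank-2 matrix from half of its entries.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = rng.random(M_true.shape) < 0.5
X_hat = irnn_complete(M_true * mask, mask)
print("relative error:", np.linalg.norm(X_hat - M_true) / np.linalg.norm(M_true))
```

Because the MCP weight vanishes on the largest singular values, they pass through unshrunk; this is the reduced estimation bias, relative to the nuclear norm's uniform shrinkage, that the abstract contrasts with convex relaxations.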


References

  1. MovieLens. https://grouplens.org/datasets/movielens/. Accessed 21 Nov 2019

  2. Andrew, A., Chu, K., Lancaster, P.: Derivatives of eigenvalues and eigenvectors of matrix functions. SIAM J. Matrix Anal. Appl. 14(4), 903–926 (1993). https://doi.org/10.1137/0614061


  3. Burer, S., Monteiro, R.: A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Math. Program. 95(2), 329–357 (2003). https://doi.org/10.1007/s10107-002-0352-8


  4. Lu, C., Tang, J., Yan, S., Lin, Z.: Generalized nonconvex nonsmooth low-rank minimization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014). https://doi.org/10.1109/CVPR.2014.526

  5. Candès, E., Tao, T.: The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010). https://doi.org/10.1109/TIT.2010.2044061


  6. Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96(456), 1348–1360 (2001)


  7. Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, vol. 3, pp. 2156–2162 (2003)

  8. Geman, D., Yang, C.: Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 4(7), 932–946 (1995)


  9. Goldberg, K., Roeder, T., Gupta, D., Perkins, C.: Eigentaste: a constant time collaborative filtering algorithm. Inf. Retr. 4(2), 133–151 (2001). https://doi.org/10.1023/A:1011419012209


  10. Hoffman, A.J., Wielandt, H.W.: The variation of the spectrum of a normal matrix. Duke Math. J. 20(1), 37–39 (1953). https://doi.org/10.1215/S0012-7094-53-02004-3


  11. Cai, J., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)


  12. Lai, M.J., Xu, Y., Yin, W.: Improved iteratively reweighted least squares for unconstrained smoothed \(l_q\) minimization. SIAM J. Numer. Anal. 51(2), 927–957 (2013). https://doi.org/10.1137/110840364


  13. Li, Q., Qi, H.-D.: A sequential semismooth Newton method for the nearest low-rank correlation matrix problem. SIAM J. Optim. 21(4), 1641–1666 (2011)


  14. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31(3), 1235–1256 (2010). https://doi.org/10.1137/090755436


  15. Lu, C., Zhu, C., Xu, C., Yan, S., Lin, Z.: Generalized singular value thresholding. arXiv preprint arXiv:1412.2231 (2014)

  16. Magnus, J.: On differentiating eigenvalues and eigenvectors. Econom. Theory 1(2), 179–191 (1985). https://doi.org/10.1017/s0266466600011129


  17. Mohan, K., Fazel, M.: Iterative reweighted least squares for matrix rank minimization. In: 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010). https://doi.org/10.1109/allerton.2010.5706969

  18. Rennie, J.D.M., Srebro, N.: Fast maximum margin matrix factorization for collaborative prediction. In: Proceedings of the 22nd International Conference on Machine Learning, ICML'05, pp. 713–719. Association for Computing Machinery, New York, NY, USA (2005). https://doi.org/10.1145/1102351.1102441

  19. Ma, S., Goldfarb, D., Chen, L.: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128, 321–353 (2009)


  20. Sagan, A., Shen, X., Mitchell, J.E.: Two relaxation methods for rank minimization problems. J. Optim. Theory Appl. 186(3), 806–825 (2020). https://doi.org/10.1007/s10957-020-01731-


  21. Shen, X., Mitchell, J.: A penalty method for rank minimization problems in symmetric matrices. Comput. Optim. Appl. 71(2), 353–380 (2018). https://doi.org/10.1007/s10589-018-0010-6


  22. Srebro, N., Rennie, J.D.M., Jaakkola, T.S.: Maximum-margin matrix factorization. In: Proceedings of the 17th International Conference on Neural Information Processing Systems, NIPS'04, pp. 1329–1336. MIT Press, Cambridge, MA, USA (2004)

  23. Hastie, T., Mazumder, R., Lee, J.D., Zadeh, R.: Matrix completion and low-rank SVD via fast alternating least squares. J. Mach. Learn. Res. 16(104), 3367–3402 (2015)


  24. Tanner, J., Wei, K.: Low rank matrix completion by alternating steepest descent methods. Appl. Comput. Harmon. Anal. (2015). https://doi.org/10.1016/j.acha.2015.08.003


  25. Tasissa, A., Lai, R.: Exact reconstruction of Euclidean distance geometry problem using low-rank matrix completion. IEEE Trans. Inf. Theory 65(5), 3124–3144 (2019). https://doi.org/10.1109/tit.2018.2881749


  26. Trzasko, J., Manduca, A.: Highly undersampled magnetic resonance image reconstruction via homotopic \(\ell_0\)-minimization. IEEE Trans. Med. Imaging 28(1), 106–121 (2009)


  27. Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 4(4), 333–361 (2012). https://doi.org/10.1007/s12532-012-0044-1


  28. Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 6(3), 1758–1789 (2013). https://doi.org/10.1137/120887795


  29. Lou, Y., Yin, P., Xin, J.: Point source super-resolution via non-convex \(l_1\) based methods. J. Sci. Comput. 68(3), 1082–1100 (2016)


  30. Yao, Q., Kwok, J., Zhong, W.: Fast low-rank matrix learning with nonconvex regularization. In: 2015 IEEE International Conference on Data Mining (2015). https://doi.org/10.1109/icdm.2015.9

  31. Yao, Q., Kwok, J.T., Gao, F., Chen, W., Liu, T.Y.: Efficient inexact proximal gradient algorithm for nonconvex problems. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (2017). https://doi.org/10.24963/ijcai.2017/462

  32. Yao, Q., Kwok, J.T., Wang, T., Liu, T.: Large-scale low-rank matrix learning with nonconvex regularizers. IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2628–2643 (2019). https://doi.org/10.1109/TPAMI.2018.2858249


  33. Zhang, C.H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38(2), 894–942 (2010)


  34. Zhang, D., Hu, Y., Ye, J., Li, X., He, X.: Matrix completion by truncated nuclear norm regularization. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2192–2199 (2012). https://doi.org/10.1109/CVPR.2012.6247927


Author information


Corresponding author

Correspondence to April Sagan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by the National Science Foundation under Grant Number DMS-1736326.


Cite this article

Sagan, A., Mitchell, J.E. Low-rank factorization for rank minimization with nonconvex regularizers. Comput Optim Appl 79, 273–300 (2021). https://doi.org/10.1007/s10589-021-00276-5
