New Applications of Matrix Methods

  • GENERAL NUMERICAL METHODS
  • Computational Mathematics and Mathematical Physics

Abstract

This article overviews modern directions in the development of matrix methods and their applications described in the present issue. Special attention is given to methods associated with the separation of variables, the special matrix and tensor decompositions that implement this technique, the related algorithms, and their applications to multidimensional problems in computational mathematics, data analysis, and machine learning.
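
As a simple illustration of the separation-of-variables idea behind these decompositions, consider the following minimal sketch (not taken from the article itself; Python with NumPy, under the standard assumption that sampling a smooth bivariate function on a tensor grid yields a numerically low-rank matrix). A truncated SVD of the sample matrix gives a separated representation f(x_i, y_j) ≈ sum_k s_k u_k(x_i) v_k(y_j):

import numpy as np

# Sample a smooth bivariate function f(x, y) = 1 / (1 + x + y) on a tensor grid.
# The resulting matrix is numerically low-rank, which is the discrete counterpart
# of separating the variables x and y.
x = np.linspace(0.0, 1.0, 400)
y = np.linspace(0.0, 1.0, 300)
A = 1.0 / (1.0 + np.add.outer(x, y))         # A[i, j] = f(x_i, y_j)

# Truncated SVD: keep only the leading singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.searchsorted(-s, -1e-10 * s[0]))  # rank needed for ~1e-10 relative accuracy
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]         # A approximated by r rank-one terms s_k u_k v_k^T

rel_err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
print(f"numerical rank r = {r}, relative error = {rel_err:.2e}")

The same idea, applied dimension by dimension, underlies the matrix and tensor decomposition techniques surveyed in the issue.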


ACKNOWLEDGMENTS

The authors of this paper, who are also the editors of this themed issue, are sincerely grateful to Sergey Aleksandrovich Matveev, who did most of the work in preparing the issue.

Funding

This work was supported by the Moscow Center for Fundamental and Applied Mathematics (contract no. 075-15-2019-1624 with the Ministry of Education and Science of the Russian Federation).

Author information

Corresponding authors

Correspondence to N. L. Zamarashkin, I. V. Oseledets or E. E. Tyrtyshnikov.

Additional information

Translated by I. Ruzanova

About this article

Cite this article

Zamarashkin, N.L., Oseledets, I.V. & Tyrtyshnikov, E.E. New Applications of Matrix Methods. Comput. Math. and Math. Phys. 61, 669–673 (2021). https://doi.org/10.1134/S0965542521050183
