-
On the Polyak momentum variants of the greedy deterministic single and multiple row‐action methods Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-03-14 Nian‐Ci Wu, Qian Zuo, Yatian Wang
For solving a consistent system of linear equations, the classical row-action method, such as the Kaczmarz method, is a simple yet remarkably effective iterative solver. Based on a greedy index selection strategy and Polyak's heavy-ball momentum acceleration technique, we propose two deterministic row-action methods and establish the corresponding convergence theory. We show that our algorithm can linearly
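The greedy row selection and heavy-ball update described above can be sketched as follows. This is a generic illustration of a momentum-accelerated Kaczmarz sweep on a small consistent system, not the authors' exact algorithm; the matrix, momentum weight `beta`, and iteration count are made up for the example.

```python
def greedy_kaczmarz_momentum(A, b, beta=0.1, iters=500):
    """Greedy (max-residual) Kaczmarz with Polyak heavy-ball momentum."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    x_prev = [0.0] * n
    for _ in range(iters):
        # Greedy index selection: pick the row with the largest residual.
        res = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        i = max(range(m), key=lambda k: abs(res[k]))
        step = res[i] / sum(a * a for a in A[i])
        # Orthogonal projection onto the i-th hyperplane, plus momentum.
        x_new = [x[j] - step * A[i][j] + beta * (x[j] - x_prev[j])
                 for j in range(n)]
        x_prev, x = x, x_new
    return x

A = [[2.0, 1.0], [1.0, 3.0]]   # consistent system with solution (1, 1)
b = [3.0, 4.0]
x = greedy_kaczmarz_momentum(A, b)
```

With `beta = 0` this reduces to the plain greedy Kaczmarz iteration; a small positive `beta` adds the heavy-ball term without destroying convergence on consistent systems.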
-
Total positivity and least squares problems in the Lagrange basis Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-03-09 Ana Marco, José‐Javier Martínez, Raquel Viaña
The problem of polynomial least squares fitting in the standard Lagrange basis is addressed in this work. Although the matrices involved in the corresponding overdetermined linear systems are not totally positive, rectangular totally positive Lagrange-Vandermonde matrices are used to take advantage of total positivity in the construction of accurate algorithms to solve the considered problem
-
Some preconditioning techniques for a class of double saddle point problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-02-22 Fariba Balani Bakrani, Luca Bergamaschi, Ángeles Martínez, Masoud Hajarian
In this paper, we describe and analyze the spectral properties of several exact block preconditioners for a class of double saddle point problems. Among all these, we consider an inexact version of a block triangular preconditioner providing extremely fast convergence of the (F)GMRES method. We develop a spectral analysis of the preconditioned matrix showing that the complex eigenvalues lie in a circle
-
Total positivity and high relative accuracy for several classes of Hankel matrices Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-02-20 E. Mainar, J.M. Peña, B. Rubio
Gramian matrices with respect to inner products defined for Hilbert spaces supported on bounded and unbounded intervals are represented through a bidiagonal factorization. It is proved that the considered matrices are strictly totally positive Hankel matrices and their catalecticant determinants are also calculated. Using the proposed representation, the numerical resolution of linear algebra
-
Preconditioned discontinuous Galerkin method and convection-diffusion-reaction problems with guaranteed bounds to resulting spectra Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-02-08 Liya Gaynutdinova, Martin Ladecký, Ivana Pultarová, Miloslav Vlasák, Jan Zeman
This paper focuses on the design, analysis and implementation of a new preconditioning concept for linear second-order partial differential equations, including convection-diffusion-reaction problems discretized by Galerkin or discontinuous Galerkin methods. We expand on the approach introduced by Gergelits et al. and adapt it to more general settings, assuming that both the original and preconditioning
-
Normalized Newton method to solve generalized tensor eigenvalue problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-01-09 Mehri Pakmanesh, Hamidreza Afshin, Masoud Hajarian
The generalized tensor eigenvalue problem is the focus of this paper. To solve it, we suggest the normalized Newton generalized eigenproblem approach (NNGEM). Since the rate of convergence of the spectral gradient projection method (SGP), the generalized eigenproblem adaptive power (GEAP), and other approaches is only linear, they are significantly improved by our proposed method
-
Matrix-less methods for the spectral approximation of large non-Hermitian Toeplitz matrices: A concise theoretical analysis and a numerical study Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-01-04 Manuel Bogoya, Sven-Erik Ekström, Stefano Serra-Capizzano, Paris Vassalos
It is known that the generating function of a sequence of Toeplitz matrices may not describe the asymptotic distribution of the eigenvalues of the considered matrix sequence in the non-Hermitian setting. In a recent work, under the assumption that the eigenvalues are real, admitting an asymptotic expansion whose first term is the distribution function, fast algorithms computing all the spectra were
-
Practical sketching-based randomized tensor ring decomposition Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2024-01-02 Yajie Yu, Hanyu Li
Based on sketching techniques, we propose two practical randomized algorithms for tensor ring (TR) decomposition. Specifically, on the basis of defining new tensor products and investigating their properties, the two algorithms are devised by applying the Kronecker sub-sampled randomized Fourier transform and TensorSketch to the alternating least squares subproblems derived from the minimization problem
-
Robust block diagonal preconditioners for poroelastic problems with strongly heterogeneous material Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-12-26 Tomáš Luber, Stanislav Sysala
This paper focuses on the analysis and the solution of the saddle-point problem arising from a three-field formulation of Biot's model of poroelasticity, discretized in time by the implicit Euler method. A block-diagonal preconditioner, based on the Schur complement, is analyzed on a functional level and compared with two other block-diagonal preconditioners having a similar structure. The problem
-
Generalizing reduction-based algebraic multigrid Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-12-18 Tareq Zaman, Nicolas Nytko, Ali Taghibakhshi, Scott MacLachlan, Luke Olson, Matthew West
Algebraic multigrid (AMG) methods are often robust and effective solvers for solving the large and sparse linear systems that arise from discretized PDEs and other problems, relying on heuristic graph algorithms to achieve their performance. Reduction-based AMG (AMGr) algorithms attempt to formalize these heuristics by providing two-level convergence bounds that depend concretely on properties of the
-
An iterative algorithm for low-rank tensor completion problem with sparse noise and missing values Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-12-17 Jianheng Chen, Wen Huang
Robust low-rank tensor completion plays an important role in multidimensional data analysis against different degradations, such as sparse noise and missing entries, and has a variety of applications in image processing and computer vision. In this paper, an optimization model for low-rank tensor completion problems is proposed and a block coordinate descent algorithm is developed to solve this model
-
Practical alternating least squares for tensor ring decomposition Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-12-10 Yajie Yu, Hanyu Li
Tensor ring (TR) decomposition has been widely applied as an effective approach in a variety of applications to discover the hidden low-rank patterns in multidimensional and higher-order data. A well-known method for TR decomposition is the alternating least squares (ALS). However, solving the ALS subproblems often suffers from high computational cost, especially for large-scale tensors. In this paper, we
-
Preconditioned weighted full orthogonalization method for solving singular linear systems from PageRank problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-11-10 Zhao-Li Shen, Bruno Carpentieri, Chun Wen, Jian-Jun Wang, Stefano Serra-Capizzano, Shi-Ping Du
The PageRank model, which was first proposed by Google for its web search engine application, has since become a popular computational tool in a wide range of scientific fields, including chemistry, bioinformatics, neuroscience, bibliometrics, social networks, and others. PageRank calculations necessitate the use of fast computational techniques with low algorithmic and memory complexity. In recent
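As background for the abstract above, the classical PageRank vector is the fixed point of a damped power iteration. The tiny link graph and damping factor below are illustrative only and are unrelated to the preconditioned solver proposed in the paper.

```python
def pagerank(links, n, alpha=0.85, iters=100):
    """Damped power iteration x <- alpha * P x + (1 - alpha)/n
    on a link list; assumes every page has at least one out-link."""
    out = [0] * n
    for (src, dst) in links:
        out[src] += 1
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [(1.0 - alpha) / n] * n        # teleportation term
        for (src, dst) in links:
            y[dst] += alpha * x[src] / out[src]
        x = y
    return x

links = [(0, 1), (0, 2), (1, 2), (2, 0)]   # made-up 3-page web graph
x = pagerank(links, 3)
```

Each sweep preserves the total probability mass, so the iterates remain a probability distribution throughout.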
-
Numerical methods for rectangular multiparameter eigenvalue problems, with applications to finding optimal ARMA and LTI models Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-11-09 Michiel E. Hochstenbach, Tomaž Košir, Bor Plestenjak
Standard multiparameter eigenvalue problems (MEPs) are systems of k ≥ 2 linear k-parameter square matrix pencils. Recently, a new form of multiparameter eigenvalue problems has emerged: a rectangular MEP (RMEP) with only one multivariate rectangular matrix pencil, where we are looking for combinations of the parameters for which the rank of the pencil is not full. Applications
-
Bringing physics into the coarse-grid selection: Approximate diffusion distance/effective resistance measures for network analysis and algebraic multigrid for graph Laplacians and systems of elliptic partial differential equations Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-11-02 Barry Lee
In a recent paper, the author examined a correlation affinity measure for selecting the coarse degrees of freedom (CDOFs) or coarse nodes (C nodes) in systems of elliptic partial differential equations (PDEs). This measure was applied to a set of relaxed vectors, which exposed the near-nullspace components of the PDE operator. Selecting the CDOFs using this affinity measure and constructing the interpolation
-
Shifted LOPBiCG: A locally orthogonal product-type method for solving nonsymmetric shifted linear systems based on Bi-CGSTAB Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-10-30 Ren-Jie Zhao, Tomohiro Sogabe, Tomoya Kemmochi, Shao-Liang Zhang
When solving shifted linear systems using shifted Krylov subspace methods, selecting a seed system is necessary, and an unsuitable seed may result in many shifted systems being unsolved. To avoid this problem, a seed-switching technique has been proposed to help switch the seed system to another linear system as a new seed system without losing the dimension of the constructed Krylov subspace. Nevertheless
-
Algebra preconditionings for 2D Riesz distributed-order space-fractional diffusion equations on convex domains Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-10-23 Mariarosa Mazza, Stefano Serra-Capizzano, Rosita Luisa Sormani
When dealing with the discretization of differential equations on non-rectangular domains, a careful treatment of the boundary is mandatory and may result in implementation difficulties and in coefficient matrices without a prescribed structure. Here we examine the numerical solution of a two-dimensional constant coefficient distributed-order space-fractional diffusion equation with a nonlinear term
-
Quasi-Newton variable preconditioning for nonlinear elasticity systems in 3D Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-10-23 J. Karátson, S. Sysala, M. Béreš
Quasi-Newton iterations are constructed for the finite element solution of small-strain nonlinear elasticity systems in 3D. The linearizations are based on spectral equivalence and hence considered as variable preconditioners arising from proper simplifications in the differential operator. Convergence is proved, providing bounds uniformly w.r.t. the FEM discretization. Convenient iterative solvers
-
A tensor bidiagonalization method for higher-order singular value decomposition with applications Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-10-01 A. El Hachimi, K. Jbilou, A. Ratnani, L. Reichel
The need to know a few singular triplets associated with the largest singular values of a third-order tensor arises in data compression and extraction. This paper describes a new method for their computation using the t-product. Methods for determining a couple of singular triplets associated with the smallest singular values are also presented. The proposed methods generalize available restarted Lanczos
-
Computing the completely positive factorization via alternating minimization Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-09-28 R. Behling, H. Lara, H. Oviedo
In this article, we propose a novel alternating minimization scheme for finding completely positive factorizations. In each iteration, our method splits the original factorization problem into two optimization subproblems, the first one being an orthogonal Procrustes problem, which is taken over the orthogonal group, and the second one over the set of entrywise positive matrices. We present both a convergence
-
Conditioning of hybrid variational data assimilation Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-09-26 Shaerdan Shataer, Amos S. Lawless, Nancy K. Nichols
In variational assimilation, the most probable state of a dynamical system under Gaussian assumptions for the prior and likelihood can be found by solving a least-squares minimization problem. In recent years, we have seen the popularity of hybrid variational data assimilation methods for Numerical Weather Prediction. In these methods, the prior error covariance matrix is a weighted sum of a climatological
-
A family of inertial-based derivative-free projection methods with a correction step for constrained nonlinear equations and their applications Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-09-22 Pengjie Liu, Hu Shao, Zihang Yuan, Jianhao Zhou
Numerous attempts have been made to develop efficient methods for solving systems of constrained nonlinear equations due to their widespread use in diverse engineering applications. In this article, we present a family of inertial-based derivative-free projection methods with a correction step for solving such systems, in which the selection of the derivative-free search direction is flexible. This
-
Structured matrix recovery from matrix-vector products Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-09-22 Diana Halikias, Alex Townsend
Can one recover a matrix efficiently from only matrix-vector products? If so, how many are needed? This article describes algorithms to recover matrices with known structures, such as tridiagonal, Toeplitz, Toeplitz-like, and hierarchical low-rank, from matrix-vector products. In particular, we derive a randomized algorithm for recovering an N × N unknown hierarchical low-rank matrix
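For intuition on recovery from matrix-vector products, a tridiagonal matrix is a particularly simple case: three probe vectors with ones spaced three apart already determine every entry, since the contributions of different columns never mix. The sketch below, with a made-up 6 × 6 matrix, is a toy instance of this idea, not the randomized hierarchical algorithm of the paper.

```python
n = 6
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.0 + i                 # main diagonal
    if i + 1 < n:
        A[i][i + 1] = 1.0 + 0.1 * i   # superdiagonal
        A[i + 1][i] = -1.0 + 0.2 * i  # subdiagonal

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Probe vectors: ones at indices congruent to c modulo 3.
probes = [[1.0 if j % 3 == c else 0.0 for j in range(n)] for c in range(3)]
results = [matvec(A, v) for v in probes]

# Recovery: entry (i, j) of a tridiagonal matrix appears, unmixed, in
# component i of the product with the probe for residue class j mod 3.
B = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        B[i][j] = results[j % 3][i]
```

Only 3 products are needed regardless of N, which is the banded analogue of the "how many matvecs?" question posed in the abstract.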
-
Stage-parallel preconditioners for implicit Runge–Kutta methods of arbitrarily high order, linear problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-09-19 Owe Axelsson, Ivo Dravins, Maya Neytcheva
Fully implicit Runge–Kutta methods offer the possibility to use high order accurate time discretization to match space discretization accuracy, an issue of significant importance for many large scale problems of current interest, where we may have fine space resolution with many millions of spatial degrees of freedom and long time intervals. In this work, we consider strongly A-stable implicit Runge–Kutta
-
Accurate bidiagonal decomposition of Lagrange–Vandermonde matrices and applications Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-08-12 Ana Marco, José-Javier Martínez, Raquel Viaña
Lagrange–Vandermonde matrices are the collocation matrices corresponding to Lagrange-type bases, obtained by removing the denominators from each element of a Lagrange basis. It is proved that, provided the nodes required to create the Lagrange-type basis and the corresponding collocation matrix are properly ordered, such matrices are strictly totally positive. A fast algorithm to compute the bidiagonal
-
Low-rank updates of matrix square roots Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-08-09 Shany Shmueli, Petros Drineas, Haim Avron
Models in which the covariance matrix has the structure of a sparse matrix plus a low rank perturbation are ubiquitous in data science applications. It is often desirable for algorithms to take advantage of such structures, avoiding costly matrix computations that often require cubic time and quadratic storage. This is often accomplished by performing operations that maintain such structures, for example
-
Impact of correlated observation errors on the conditioning of variational data assimilation problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-08-09 Olivier Goux, Selime Gürol, Anthony T. Weaver, Youssef Diouane, Oliver Guillet
An important class of nonlinear weighted least-squares problems arises from the assimilation of observations in atmospheric and ocean models. In variational data assimilation, inverse error covariance matrices define the weighting matrices of the least-squares problem. For observation errors, a diagonal matrix (i.e., uncorrelated errors) is often assumed for simplicity even when observation errors
-
Rank-structured approximation of some Cauchy matrices with sublinear complexity Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-08-07 Mikhail Lepilov, Jianlin Xia
In this article, we consider the rank-structured approximation of one important type of Cauchy matrix. This approximation plays a key role in some structured matrix methods such as stable and efficient direct solvers and other algorithms for Toeplitz matrices and certain kernel matrices. Previous rank-structured approximations (specifically hierarchically semiseparable, or HSS, approximations) for
-
Volume-based subset selection Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-31 Alexander Osinsky
This paper provides a fast algorithm for the search of a dominant (locally maximum volume) submatrix, generalizing the existing algorithms from n ⩽ r to n > r submatrix columns, where r is the number of searched rows. We prove the bound on the number of steps of the algorithm, which allows it to outperform the existing subset selection algorithms in either the bounds
-
A new deflation criterion for the QZ algorithm Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-24 Thijs Steel, Raf Vandebril, Julien Langou
The QZ algorithm computes the generalized Schur form of a matrix pencil. It is an iterative algorithm and, at some point, it must decide when to deflate, that is, when a generalized eigenvalue has converged, so that it can move on to another one. Choosing a deflation criterion that makes this decision is nontrivial. If it is too strict, the algorithm might waste iterations on already converged eigenvalues. If
-
Why diffusion-based preconditioning of Richards equation works: Spectral analysis and computational experiments at very large scale Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-19 Daniele Bertaccini, Pasqua D'Ambra, Fabio Durastante, Salvatore Filippone
We consider here a cell-centered finite difference approximation of the Richards equation in three dimensions, averaging for interface values the hydraulic conductivity K = K(p), a highly nonlinear function, by arithmetic, upstream and harmonic means. The nonlinearities in the equation can lead to changes in soil conductivity over several orders of magnitude and discretizations with
-
Total positivity and accurate computations with Gram matrices of Said-Ball bases Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-13 E. Mainar, J. M. Peña, B. Rubio
In this article, it is proved that Gram matrices of totally positive bases of the space of polynomials of a given degree on a compact interval are totally positive. Conditions to guarantee computations to high relative accuracy with those matrices are also obtained. Furthermore, a fast and accurate algorithm to compute the bidiagonal factorization of Gram matrices of the Said-Ball bases is obtained
-
Nonlinear approximation of functions based on nonnegative least squares solver Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-10 Petr N. Vabishchevich
In computational practice, most attention is paid to rational approximations of functions and approximations by sums of exponentials. We consider a sufficiently wide class of nonlinear approximations characterized by a set of two required parameters. The approximating function is linear in the first parameter; these parameters are assumed to be positive. The individual terms of the approximating function
-
Tensor train completion: Local recovery guarantees via Riemannian optimization Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-06 Stanislav Budzinskiy, Nikolai Zamarashkin
In this work, we estimate the number of randomly selected elements of a tensor that with high probability guarantees local convergence of Riemannian gradient descent for tensor train completion. We derive a new bound for the orthogonal projections onto the tangent spaces based on the harmonic mean of the unfoldings' singular values and introduce a notion of core coherence for tensor trains. We also
-
Data-driven linear complexity low-rank approximation of general kernel matrices: A geometric approach Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-07-04 Difeng Cai, Edmond Chow, Yuanzhe Xi
A general, rectangular kernel matrix may be defined as K_ij = κ(x_i, y_j), where κ(x, y) is a kernel function and where X = {x_i}_{i=1}^m and Y = {y_i}_{i=1}^n are two sets of points. In this paper, we seek a low-rank approximation to
-
Optimal polynomial smoothers for multigrid V-cycles Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-29 James Lottes
The idea of using polynomial methods to improve simple smoother iterations within a multigrid method for a symmetric positive definite system is revisited. A two-level bound going back to Hackbusch is optimized by a very simple iteration, a close cousin of the Chebyshev semi-iterative method, but based on the Chebyshev polynomials of the fourth instead of first kind. A full V-cycle bound for general
-
Blockwise acceleration of alternating least squares for canonical tensor decomposition Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-21 David Evans, Nan Ye
The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well-known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated
-
The generalized residual cutting method and its convergence characteristics Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-20 T. Abe, A. T. Chronopoulos
Iterative methods, and especially Krylov subspace methods (KSM), are very useful numerical tools for solving large sparse linear systems arising in science and engineering modeling. More recently, nested-loop KSM have been proposed that improve the convergence of traditional KSM. In this article, we review the residual cutting (RC) and the generalized residual cutting (GRC) that
-
A two-step matrix splitting iteration paradigm based on one single splitting for solving systems of linear equations Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-11 Zhong-Zhi Bai
For solving large sparse systems of linear equations, we construct a paradigm of two-step matrix splitting iteration methods and analyze its convergence property for the nonsingular and the positive-definite matrix class. This two-step matrix splitting iteration paradigm adopts only one single splitting of the coefficient matrix, together with several arbitrary iteration parameters. Hence, it can be
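As a generic illustration of reusing one splitting in a two-step iteration (a hedged sketch, not the paradigm of the paper), the snippet below applies two damped Jacobi-type correction steps with different parameters per outer cycle; the matrix and the parameters `omega1`, `omega2` are invented for the example.

```python
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]                 # exact solution (1/11, 7/11)
M = [A[0][0], A[1][1]]         # one single (diagonal) splitting A = M - N
omega1, omega2 = 1.0, 0.8      # two arbitrary iteration parameters

x = [0.0, 0.0]
for _ in range(200):
    for omega in (omega1, omega2):   # two steps, same splitting
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        x = [x[i] + omega * r[i] / M[i] for i in range(2)]
```

The point of the construction is that only one splitting of A is ever formed; the two damping parameters give the method its extra degrees of freedom.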
-
A Vanka-based parameter-robust multigrid relaxation for the Stokes–Darcy Brinkman problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-05 Yunhui He
We consider a block-structured multigrid method based on Braess–Sarazin relaxation for solving the Stokes–Darcy Brinkman equations discretized by the marker and cell scheme. In the relaxation scheme, an element-based additive Vanka operator is used to approximate the inverse of the corresponding shifted Laplacian operator involved in the discrete Stokes–Darcy Brinkman system. Using local Fourier analysis
-
CP decomposition for tensors via alternating least squares with QR decomposition Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-05 Rachel Minster, Irina Viviano, Xiaotian Liu, Grey Ballard
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which
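For context, the ALS reduction mentioned above is easiest to see at rank 1, where each least-squares subproblem has a closed form. The sketch below fits a rank-1 CP model to an exactly rank-1 tensor and is purely illustrative; it uses neither the normal equations nor the QR-based variant of the paper.

```python
def outer3(a, b, c):
    """Rank-1 third-order tensor a (x) b (x) c as nested lists."""
    return [[[ai * bj * ck for ck in c] for bj in b] for ai in a]

a_true, b_true, c_true = [1.0, 2.0], [3.0, 1.0], [2.0, 5.0]
T = outer3(a_true, b_true, c_true)      # exactly rank-1 test tensor

a, b, c = [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]
for _ in range(5):  # ALS sweeps: each factor update is the exact
    # least-squares solution with the other two factors held fixed.
    a = [sum(T[i][j][k] * b[j] * c[k] for j in range(2) for k in range(2))
         / (sum(v * v for v in b) * sum(v * v for v in c)) for i in range(2)]
    b = [sum(T[i][j][k] * a[i] * c[k] for i in range(2) for k in range(2))
         / (sum(v * v for v in a) * sum(v * v for v in c)) for j in range(2)]
    c = [sum(T[i][j][k] * a[i] * b[j] for i in range(2) for j in range(2))
         / (sum(v * v for v in a) * sum(v * v for v in b)) for k in range(2)]

R = outer3(a, b, c)
err = max(abs(R[i][j][k] - T[i][j][k])
          for i in range(2) for j in range(2) for k in range(2))
```

For general ranks each update becomes a genuine linear least-squares problem, which is exactly where the choice between normal equations and QR matters.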
-
Constructing the field of values of decomposable and general matrices using the ZNN based path following method Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-02 Frank Uhlig
This paper describes and develops a fast and accurate path-following algorithm that computes the field of values boundary curve ∂F(A) for every conceivable complex or real square matrix A. It relies on the matrix flow decomposition algorithm that finds a proper block-diagonal flow representation for the associated Hermitian matrix flow ℱ_A(t) = cos(t)H + sin
-
Efficient algorithms for computing rank-revealing factorizations on a GPU Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-06-02 Nathan Heavner, Chao Chen, Abinand Gopal, Per-Gunnar Martinsson
Standard rank-revealing factorizations such as the singular value decomposition (SVD) and column pivoted QR factorization are challenging to implement efficiently on a GPU. A major difficulty in this regard is the inability of standard algorithms to cast most operations in terms of the Level-3 BLAS. This article presents two alternative algorithms for computing a rank-revealing factorization of the
-
Convergence acceleration of preconditioned conjugate gradient solver based on error vector sampling for a sequence of linear systems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-05-31 Takeshi Iwashita, Kota Ikehara, Takeshi Fukaya, Takeshi Mifune
In this article, we focus on solving a sequence of linear systems that have identical (or similar) coefficient matrices. For this type of problem, we investigate subspace correction (SC) and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the
-
Multilevel-in-width training for deep neural network regression Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-05-19 Colin Ponce, Ruipeng Li, Christina Mao, Panayot Vassilevski
A common challenge in regression is that for many problems, the degrees of freedom required for a high-quality solution also allows for overfitting. Regularization is a class of strategies that seek to restrict the range of possible solutions so as to discourage overfitting while still enabling good solutions, and different regularization strategies impose different types of restrictions. In this paper
-
Preconditioned tensor format conjugate gradient squared and biconjugate gradient stabilized methods for solving Stein tensor equations Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-05-10 Yuhan Chen, Chenliang Li
This article is concerned with solving the high order Stein tensor equation arising in control theory. The conjugate gradient squared (CGS) method and the biconjugate gradient stabilized (BiCGSTAB) method are attractive methods for solving linear systems. Compared with the large-scale matrix equation, the equivalent tensor equation needs less storage space and computational costs. Therefore, we present
-
A conforming auxiliary space preconditioner for the mass conserving stress-yielding method Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-05-07 Lukas Kogler, Philip L. Lederer, Joachim Schöberl
We are studying the efficient solution of the system of linear equations stemming from the mass conserving stress-yielding (MCS) discretization of the Stokes equations. We perform static condensation to arrive at a system for the pressure and velocity unknowns. An auxiliary space preconditioner for the positive definite velocity block makes use of efficient and scalable solvers for conforming Finite
-
A closed-form multigrid smoothing factor for an additive Vanka-type smoother applied to the Poisson equation Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-04-21 Chen Greif, Yunhui He
We consider an additive Vanka-type smoother for the Poisson equation discretized by the standard finite difference centered scheme. Using local Fourier analysis, we derive analytical formulas for the optimal smoothing factors for vertex-wise and element-wise Vanka smoothers. In one dimension the element-wise Vanka smoother is equivalent to the scaled mass operator obtained from the linear finite element
-
Anderson accelerated fixed-point iteration for multilinear PageRank Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-03-28 Fuqi Lai, Wen Li, Xiaofei Peng, Yannan Chen
In this paper, we apply the Anderson acceleration technique to the existing relaxation fixed-point iteration for solving the multilinear PageRank. In order to reduce computational cost, we further consider the periodical version of the Anderson acceleration. The convergence of the proposed algorithms is discussed. Numerical experiments on synthetic and real-world datasets are performed to demonstrate
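For readers unfamiliar with the technique, Anderson acceleration with window size m = 1 reduces to a secant-type mixing of the last two fixed-point iterates. The sketch below applies it to the illustrative map g(x) = cos(x) rather than the multilinear PageRank iteration of the paper.

```python
import math

def anderson_m1(g, x0, iters=30, tol=1e-12):
    """Anderson acceleration, window m = 1 (a secant-type update)."""
    x_prev, x = x0, g(x0)
    for _ in range(iters):
        f = g(x) - x                    # current fixed-point residual
        if abs(f) < tol:
            break
        f_prev = g(x_prev) - x_prev
        if f == f_prev:
            break
        gamma = f / (f - f_prev)        # residual-minimizing mixing weight
        x_next = g(x) - gamma * (g(x) - g(x_prev))
        x_prev, x = x, x_next
    return x

x = anderson_m1(math.cos, 0.0)          # fixed point of cos, near 0.739
```

Plain fixed-point iteration on cos converges only linearly; the m = 1 mixing already gives secant-like superlinear convergence, which is the effect the paper exploits (with a larger window and periodic restarts) for multilinear PageRank.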
-
Convergence analysis of a block preconditioned steepest descent eigensolver with implicit deflation Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-03-15 Ming Zhou, Zhaojun Bai, Yunfeng Cai, Klaus Neymeyr
Gradient-type iterative methods for solving Hermitian eigenvalue problems can be accelerated by using preconditioning and deflation techniques. A preconditioned steepest descent iteration with implicit deflation (PSD-id) is one such method. The convergence behavior of the PSD-id has recently been investigated based on the pioneering work of Samokish on the preconditioned steepest descent method (PSD)
-
A block Cholesky-LU-based QR factorization for rectangular matrices Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-02-25 Sabine Le Borne
The Householder method provides a stable algorithm to compute the full QR factorization of a general matrix. The standard version of the algorithm uses a sequence of orthogonal reflections to transform the matrix into upper triangular form column by column. In order to exploit (level 3 BLAS or structured matrix) computational advantages for block-partitioned algorithms, we develop a block algorithm
-
Asymptotics for the eigenvalues of Toeplitz matrices with a symbol having a power singularity Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-02-21 Manuel Bogoya, Sergei M. Grudsky
The present work is devoted to the construction of an asymptotic expansion for the eigenvalues of a Toeplitz matrix T_n(a) as n goes to infinity, with a continuous and real-valued symbol a having a power singularity of degree γ, with 1 < γ < 2, at one point. The resulting matrix is dense and its entries decrease slowly to zero when moving
-
Editorial: Tensor numerical methods and their application in scientific computing and data science Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-31 Boris N. Khoromskij, Venera Khoromskaia
Recent progress in the understanding of rank-structured tensor decompositions in ℝ^d and the development of related tensor numerical methods enable efficient techniques for the solution of multidimensional problems in scientific computing and data science, avoiding the curse of dimensionality. The novel tensor numerical methods are based on the nonlinear rank-structured tensor repres
-
Structure preserving quaternion full orthogonalization method with applications Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-28 Tao Li, Qing-Wen Wang
This article proposes a structure-preserving quaternion full orthogonalization method (QFOM) for solving quaternion linear systems arising from color image restoration. The method is based on the quaternion Arnoldi procedure preserving the quaternion Hessenberg form. Combining with the preconditioning techniques, we further derive a variant of the QFOM for solving the linear systems, which can greatly
-
High relative accuracy with some special matrices related to Γ and β functions Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-23 Jorge Delgado, Juan Manuel Peña
For some families of totally positive matrices using Γ and β functions, we provide their bidiagonal factorization. Moreover, when these functions are defined over integers, we prove that the bidiagonal factorization can be computed with high relative accuracy and so we can compute with high relative accuracy their eigenvalues, singular values, inverses and the solutions of
-
Solution methods to the nearest rotation matrix problem in ℝ3: A comparative survey Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-21 Soheil Sarabandi, Federico Thomas
Nowadays, the singular value decomposition (SVD) is the standard method of choice for solving the nearest rotation matrix problem. Nevertheless, many other methods are available in the literature for the 3D case. This article reviews the most representative ones, proposes alternative ones, and presents a comparative analysis to elucidate their relative computational costs and error performances. This
-
A flexible block classical Gram–Schmidt skeleton with reorthogonalization Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-19 Qinmeng Zou
We investigate a variant of the reorthogonalized block classical Gram–Schmidt method for computing the QR factorization of a full column rank matrix. Our aim is to bound the loss of orthogonality even when the first local QR algorithm is only conditionally stable. In particular, this allows the use of modified Gram–Schmidt instead of Householder transformations as the first local QR algorithm. Numerical
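The reorthogonalization idea can be illustrated column-wise: classical Gram-Schmidt applied twice (CGS2) restores the orthogonality that a single pass may lose. The sketch below is the scalar, non-block analogue of such a skeleton, with an illustrative 3 × 2 test matrix.

```python
import math

def cgs2(cols):
    """Classical Gram-Schmidt with one full reorthogonalization pass."""
    Q = []
    R = [[0.0] * len(cols) for _ in range(len(cols))]
    for k, v in enumerate(cols):
        w = list(v)
        for _ in range(2):              # orthogonalize twice ("twice is enough")
            for i, q in enumerate(Q):
                h = sum(qi * wi for qi, wi in zip(q, w))
                R[i][k] += h            # accumulate both passes into R
                w = [wi - h * qi for qi, wi in zip(q, w)]
        nrm = math.sqrt(sum(wi * wi for wi in w))
        R[k][k] = nrm
        Q.append([wi / nrm for wi in w])
    return Q, R

cols = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]   # columns of a 3x2 matrix
Q, R = cgs2(cols)
```

Because the projection coefficients from both passes are accumulated into R, the factorization A = QR still holds exactly while the computed Q is orthogonal to working precision.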
-
A unified rational Krylov method for elliptic and parabolic fractional diffusion problems Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-17 Tobias Danczul, Clemens Hofreither, Joachim Schöberl
We present a unified framework to efficiently approximate solutions to fractional diffusion problems of stationary and parabolic type. After discretization, we can take the point of view that the solution is obtained by a matrix-vector product of the form f^τ(L)b, where L is the discretization matrix of the spatial operator, b
-
Nonconvex optimization for third-order tensor completion under wavelet transform Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-15 Quan Yu, Minru Bai
The main aim of this paper is to develop a nonconvex optimization model for third-order tensor completion under wavelet transform. On the one hand, through wavelet transform of frontal slices, we divide a large tensor data into a main part tensor and three detail part tensors, and the elements of these four tensors are about a quarter of the original tensors. Solving these four small tensors can not
-
The Lawson-Hanson algorithm with deviation maximization: Finite convergence and sparse recovery Numer. Linear Algebra Appl. (IF 4.3) Pub Date : 2023-01-13 Monica Dessole, Marco Dell'Orto, Fabio Marcuzzi
The Lawson-Hanson with Deviation Maximization (LHDM) method is a block algorithm for the solution of NonNegative Least Squares (NNLS) problems. In this work we devise an improved version of LHDM and we show that it terminates in a finite number of steps, unlike the previous version, originally developed for a special class of matrices. Moreover, we are concerned with finding sparse solutions of underdetermined