Modified conjugate gradient method for diagonalizing large matrices.
Physical Review E (IF 2.2), Pub Date: 2003-12-20, DOI: 10.1103/physreve.68.056706
Quanlin Jie, Dunhuan Liu

We present an iterative method to diagonalize large matrices. The basic idea is the same as in the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient while avoiding reintroducing errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector, the corresponding gradient of the Rayleigh quotient, and some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
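As a rough illustration of the subspace iteration the abstract describes, the sketch below minimizes the Rayleigh quotient by Rayleigh-Ritz diagonalization in a small subspace spanned by the current trial vector, its gradient, and a few previous trial vectors. This is a minimal NumPy sketch, not the authors' published algorithm or code; the function name `modified_cg_lowest` and the parameter `n_prev` (how many previous trial vectors to retain) are illustrative choices.

```python
import numpy as np

def modified_cg_lowest(A, x0, n_prev=2, tol=1e-10, max_iter=500):
    """Lowest eigenpair of a symmetric matrix A via Rayleigh-Ritz
    minimization in a subspace of the current trial vector, the
    gradient of the Rayleigh quotient, and previous trial vectors
    (a sketch of the scheme in the abstract, not the authors' code)."""
    x = x0 / np.linalg.norm(x0)
    previous = []                        # history of earlier trial vectors
    for _ in range(max_iter):
        Ax = A @ x
        rho = x @ Ax                     # Rayleigh quotient (x has unit norm)
        grad = 2.0 * (Ax - rho * x)      # gradient of the Rayleigh quotient
        if np.linalg.norm(grad) < tol:
            break
        # Subspace: current trial vector, gradient, previous trial vectors.
        basis = np.column_stack([x, grad] + previous)
        Q, _ = np.linalg.qr(basis)       # orthonormalize the subspace
        # Rayleigh-Ritz: diagonalize A projected into the small subspace.
        w, V = np.linalg.eigh(Q.T @ A @ Q)
        previous = ([x] + previous)[:n_prev]
        x = Q @ V[:, 0]                  # lowest Ritz vector (unit norm)
    return x @ (A @ x), x

# Usage: lowest eigenvalue of a random symmetric test matrix.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
val, vec = modified_cg_lowest(A, rng.standard_normal(n))
print(val, np.linalg.eigvalsh(A)[0])    # the two values should agree closely
```

With `n_prev=0` the subspace reduces to the trial vector and its gradient, i.e., plain gradient-based Rayleigh-quotient minimization; the retained history is what stands in for the conjugate direction of the original CG algorithm.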
