Avoiding local minima in variational quantum eigensolvers with the natural gradient optimizer
Physical Review Research, Pub Date: 2020-11-17, DOI: 10.1103/physrevresearch.2.043246
David Wierichs, Christian Gogolin, Michael Kastoryano

We compare the BFGS optimizer, ADAM, and NatGrad in the context of VQEs. We systematically analyze their performance on the QAOA ansatz for the transverse-field Ising and the XXZ model, as well as on overparametrized circuits with the ability to break the symmetry of the Hamiltonian. The BFGS algorithm is frequently unable to find a global minimum for systems beyond about 20 spins, and ADAM easily gets trapped in local minima or exhibits infeasible optimization durations. NatGrad, on the other hand, shows stable performance on all considered system sizes, rewarding its higher cost per epoch with reliability and competitive total run times. In sharp contrast to most classical gradient-based learning, the performance of all optimizers decreases upon seemingly benign overparametrization of the ansatz class, with BFGS and ADAM failing more often and more severely than NatGrad. This not only stresses the necessity for good ansatz circuits but also means that overparametrization, an established remedy for avoiding local minima in machine learning, does not seem to be a viable option in the context of VQEs. The behavior is similar in both investigated spin chains; in particular, the problems of BFGS and ADAM surface in both systems, even though their effective Hilbert space dimensions differ significantly. Overall, our observations stress the importance of avoiding redundant degrees of freedom in ansatz circuits and of putting established optimization algorithms and attached heuristics to the test on larger system sizes. Natural gradient descent emerges as a promising choice for optimizing large VQEs.
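As background (not stated in the abstract itself), the natural-gradient (NatGrad) update it refers to can be sketched as follows: the plain gradient step on the variational energy is preconditioned by the (pseudo-)inverse of the Fubini-Study metric tensor of the ansatz state. The step size \(\eta\) and any regularization of the metric are left open in this sketch.

% Sketch of a standard natural-gradient step for a VQE with energy
% E(\theta) = <\psi(\theta)| H |\psi(\theta)> and parametrized ansatz state |\psi(\theta)>.
\[
  \theta_{t+1} \;=\; \theta_t \;-\; \eta\, g^{+}(\theta_t)\, \nabla E(\theta_t),
\]
% where g^{+} is the (pseudo-)inverse of the Fubini-Study metric tensor
\[
  g_{ij}(\theta) \;=\; \operatorname{Re}\!\Big[
    \langle \partial_i \psi(\theta) \,|\, \partial_j \psi(\theta) \rangle
    - \langle \partial_i \psi(\theta) \,|\, \psi(\theta) \rangle\,
      \langle \psi(\theta) \,|\, \partial_j \psi(\theta) \rangle
  \Big].
\]

Replacing \(g\) by the identity recovers ordinary gradient descent; ADAM and BFGS instead precondition the step with gradient moment estimates and an approximate inverse Hessian, respectively.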

Updated: 2020-11-17