Training saturation in layerwise quantum approximate optimization
Physical Review A (IF 2.9), Pub Date: 2021-09-15, DOI: 10.1103/physreva.104.l030401
E. Campos, D. Rabinovich, V. Akshay, J. Biamonte

The quantum approximate optimization algorithm (QAOA) is the most studied gate-based variational quantum algorithm today. We train QAOA one layer at a time to maximize overlap with an n-qubit target state. In doing so, we discovered that such training always saturates at some depth p*, a phenomenon we call training saturation: past this depth, overlap cannot be improved by adding subsequent layers. We formulate necessary conditions for saturation. Numerically, we find that for the problem of state preparation, layerwise QAOA reaches its maximum overlap at depth p* = n. Adding coherent dephasing errors to the training removes saturation, restoring robustness to layerwise training. This study sheds new light on the performance limitations and prospects of QAOA.
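To make the layerwise training protocol concrete, the following is a minimal numpy/scipy sketch, under assumptions not fixed by the abstract: the target is a computational basis state |t> = |0...0>, the cost Hamiltonian is the diagonal projector complement H_C = I - |t><t|, the mixer is the standard transverse field H_B = sum_i X_i, and the per-layer optimizer (Nelder-Mead with random restarts) is an illustrative choice. All names (apply_layer, overlap, etc.) are ours, not from the paper.

import numpy as np
from scipy.optimize import minimize

n = 4                                     # number of qubits
dim = 2 ** n
target = np.zeros(dim)
target[0] = 1.0                           # target state |t> = |0...0>

# Diagonal cost H_C = I - |t><t|: zero on the target state, one elsewhere.
cost_diag = 1.0 - np.abs(target) ** 2

# Mixer H_B = sum_i X_i is diagonal in the Hadamard basis, with eigenvalue
# n - 2 * (Hamming weight of the bitstring) on each basis vector.
hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
H = hadamard
for _ in range(n - 1):
    H = np.kron(H, hadamard)
weights = np.array([bin(k).count("1") for k in range(dim)])
mixer_diag = n - 2.0 * weights

plus = np.full(dim, 1.0 / np.sqrt(dim))   # initial state |+>^n

def apply_layer(state, gamma, beta):
    """One QAOA layer: e^{-i beta H_B} e^{-i gamma H_C} |state>."""
    state = np.exp(-1j * gamma * cost_diag) * state
    return H @ (np.exp(-1j * beta * mixer_diag) * (H @ state))

def overlap(params):
    """|<t|psi(gamma_1, beta_1, ..., gamma_p, beta_p)>|^2."""
    state = plus
    for gamma, beta in params.reshape(-1, 2):
        state = apply_layer(state, gamma, beta)
    return np.abs(np.vdot(target, state)) ** 2

# Layerwise training: optimize only the newest layer's (gamma, beta),
# keeping all previously trained layers frozen.
rng = np.random.default_rng(0)
frozen = np.empty(0)
for p in range(1, n + 2):
    def objective(new):
        return -overlap(np.concatenate([frozen, new]))
    best = min(
        (minimize(objective, rng.uniform(0.0, np.pi, 2), method="Nelder-Mead")
         for _ in range(10)),          # heuristic random restarts
        key=lambda r: r.fun,
    )
    frozen = np.concatenate([frozen, best.x])
    print(f"depth {p}: overlap = {-best.fun:.6f}")

In this toy setting, the printed overlap should plateau once the depth reaches roughly p = n, which is the saturation behavior the paper reports for state preparation.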
