Error-Robust Quantum Logic Optimization Using a Cloud Quantum Computer Interface
Physical Review Applied (IF 4.6). Pub Date: 2021-06-22. DOI: 10.1103/physrevapplied.15.064054
Andre R. R. Carvalho , Harrison Ball , Michael J. Biercuk , Michael R. Hush , Felix Thomsen

We describe an experimental effort designing and deploying error-robust single-qubit operations using a cloud-based quantum computer and analog-layer programming access. We design numerically optimized pulses that implement target operations and exhibit robustness to various error processes, including dephasing noise, instabilities in control amplitudes, and crosstalk. Pulse optimization is performed using a flexible optimization package incorporating a device model and physically relevant constraints (e.g., bandwidth limits on the transmission lines of the dilution refrigerator housing IBM Quantum hardware). We present techniques for conversion and calibration of physical Hamiltonian definitions to pulse waveforms programmed via Qiskit Pulse, and compare performance against hardware-default derivative removal by adiabatic gate (DRAG) pulses on a five-qubit device. Experimental measurements reveal that default DRAG pulses exhibit coherent errors an order of magnitude larger than tabulated randomized-benchmarking measurements; solutions designed to be robust against these errors outperform hardware-default pulses for all qubits across multiple metrics. Experimental measurements demonstrate performance enhancements of up to: an approximately 10-times reduction in single-qubit gate coherent error; an approximately 5-times reduction in average coherent error across the five-qubit system; an approximately 10-times increase in the calibration window, to one week of valid pulse calibration; an approximately 12-times reduction in gate-error variability across qubits and over time; and up to an approximately 9-times reduction in single-qubit gate error (including crosstalk) in the presence of fully parallelized operations. Randomized benchmarking reveals error rates for Clifford gates constructed from optimized pulses consistent with tabulated T1 limits, and demonstrates a narrowing of the distribution of outcomes over randomizations, associated with suppression of coherent errors.
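The hardware-default pulses the abstract compares against are DRAG pulses: a Gaussian in-phase envelope plus a derivative-shaped quadrature correction that suppresses leakage to the transmon's second excited state. As a minimal illustrative sketch (the function name and parameter values below are assumptions for illustration, not taken from the paper), such an envelope can be generated as a complex-valued waveform:

```python
import numpy as np

def drag_waveform(duration, amp, sigma, beta):
    """Sketch of a DRAG envelope sampled at `duration` time steps.

    In-phase component: Gaussian of peak `amp` and width `sigma`, centered
    mid-pulse. Quadrature component: the Gaussian's analytic derivative,
    scaled by the DRAG coefficient `beta`.
    """
    t = np.arange(duration, dtype=float)
    center = 0.5 * duration
    gauss = amp * np.exp(-0.5 * ((t - center) / sigma) ** 2)
    dgauss = -(t - center) / sigma**2 * gauss  # analytic d/dt of the Gaussian
    return gauss + 1j * beta * dgauss

# Illustrative parameters only (sample counts and amplitudes are hypothetical).
wave = drag_waveform(duration=160, amp=0.1, sigma=40, beta=0.5)
```

In frameworks such as Qiskit Pulse, a waveform of this kind would be supplied as the sample array for a drive channel; the paper's optimized pulses replace this default shape with numerically optimized solutions robust to dephasing, amplitude errors, and crosstalk.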

Updated: 2021-06-23